Nov 1 01:01:11.050544 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025 Nov 1 01:01:11.050568 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 01:01:11.050578 kernel: BIOS-provided physical RAM map: Nov 1 01:01:11.050586 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 1 01:01:11.050592 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 1 01:01:11.050600 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Nov 1 01:01:11.050608 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Nov 1 01:01:11.050616 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 1 01:01:11.050623 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 1 01:01:11.050630 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 1 01:01:11.050638 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 1 01:01:11.050644 kernel: printk: bootconsole [earlyser0] enabled Nov 1 01:01:11.050649 kernel: NX (Execute Disable) protection: active Nov 1 01:01:11.050658 kernel: efi: EFI v2.70 by Microsoft Nov 1 01:01:11.050670 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c7a98 RNG=0x3ffd1018 Nov 1 01:01:11.050679 kernel: random: crng init done Nov 1 01:01:11.050685 kernel: SMBIOS 3.1.0 present. 
Nov 1 01:01:11.050692 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Nov 1 01:01:11.050701 kernel: Hypervisor detected: Microsoft Hyper-V Nov 1 01:01:11.050707 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Nov 1 01:01:11.050716 kernel: Hyper-V Host Build:20348-10.0-1-0.1827 Nov 1 01:01:11.050723 kernel: Hyper-V: Nested features: 0x1e0101 Nov 1 01:01:11.050731 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 1 01:01:11.050740 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 1 01:01:11.050747 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 1 01:01:11.050756 kernel: tsc: Marking TSC unstable due to running on Hyper-V Nov 1 01:01:11.050763 kernel: tsc: Detected 2593.905 MHz processor Nov 1 01:01:11.050770 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 01:01:11.050780 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 01:01:11.050786 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Nov 1 01:01:11.050794 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 01:01:11.050802 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Nov 1 01:01:11.050810 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Nov 1 01:01:11.050819 kernel: Using GB pages for direct mapping Nov 1 01:01:11.050826 kernel: Secure boot disabled Nov 1 01:01:11.050834 kernel: ACPI: Early table checksum verification disabled Nov 1 01:01:11.050842 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 1 01:01:11.050849 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050857 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050865 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Nov 1 01:01:11.050880 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 1 01:01:11.050887 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050893 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050903 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050910 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050920 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050929 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050938 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 01:01:11.050946 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 1 01:01:11.050955 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Nov 1 01:01:11.050963 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 1 01:01:11.050970 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 1 01:01:11.050978 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 1 01:01:11.050987 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 1 01:01:11.050999 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Nov 1 01:01:11.051006 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Nov 1 01:01:11.051012 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 1 01:01:11.051021 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Nov 1 01:01:11.051029 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 1 01:01:11.051038 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 1 01:01:11.051047 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 1 01:01:11.051053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Nov 1 01:01:11.051064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Nov 1 01:01:11.051073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 1 01:01:11.051083 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 1 01:01:11.056150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 1 01:01:11.056174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 1 01:01:11.056188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 1 01:01:11.056201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Nov 1 01:01:11.056215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 1 01:01:11.056228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 1 01:01:11.056241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Nov 1 01:01:11.056259 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Nov 1 01:01:11.056271 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Nov 1 01:01:11.056285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Nov 1 01:01:11.056298 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Nov 1 01:01:11.056311 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Nov 1 01:01:11.056324 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Nov 1 01:01:11.056338 kernel: Zone ranges: Nov 1 01:01:11.056350 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 01:01:11.056364 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 01:01:11.056379 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 1 01:01:11.056392 kernel: Movable zone start for each node Nov 1 01:01:11.056405 kernel: Early memory node ranges Nov 1 01:01:11.056418 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 1 01:01:11.056431 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Nov 1 01:01:11.056444 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 1 01:01:11.056457 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 1 01:01:11.056470 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 1 01:01:11.056483 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 01:01:11.056499 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 1 01:01:11.056511 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Nov 1 01:01:11.056524 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 1 01:01:11.056537 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 1 01:01:11.056551 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Nov 1 01:01:11.056563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 
1 01:01:11.056576 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 01:01:11.056589 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 1 01:01:11.056602 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 1 01:01:11.056617 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 1 01:01:11.056630 kernel: Booting paravirtualized kernel on Hyper-V Nov 1 01:01:11.056644 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 01:01:11.056657 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Nov 1 01:01:11.056670 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Nov 1 01:01:11.056683 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Nov 1 01:01:11.056696 kernel: pcpu-alloc: [0] 0 1 Nov 1 01:01:11.056709 kernel: Hyper-V: PV spinlocks enabled Nov 1 01:01:11.056722 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 01:01:11.056737 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Nov 1 01:01:11.056750 kernel: Policy zone: Normal Nov 1 01:01:11.056765 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 01:01:11.056778 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 01:01:11.056791 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 1 01:01:11.056804 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 01:01:11.056818 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 01:01:11.056831 kernel: Memory: 8071680K/8387460K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 315520K reserved, 0K cma-reserved) Nov 1 01:01:11.056847 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 01:01:11.056860 kernel: ftrace: allocating 34614 entries in 136 pages Nov 1 01:01:11.056882 kernel: ftrace: allocated 136 pages with 2 groups Nov 1 01:01:11.056899 kernel: rcu: Hierarchical RCU implementation. Nov 1 01:01:11.056913 kernel: rcu: RCU event tracing is enabled. Nov 1 01:01:11.056927 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 01:01:11.056941 kernel: Rude variant of Tasks RCU enabled. Nov 1 01:01:11.056954 kernel: Tracing variant of Tasks RCU enabled. Nov 1 01:01:11.056968 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 01:01:11.056982 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 01:01:11.056996 kernel: Using NULL legacy PIC Nov 1 01:01:11.057012 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 1 01:01:11.057026 kernel: Console: colour dummy device 80x25 Nov 1 01:01:11.057039 kernel: printk: console [tty1] enabled Nov 1 01:01:11.057053 kernel: printk: console [ttyS0] enabled Nov 1 01:01:11.057066 kernel: printk: bootconsole [earlyser0] disabled Nov 1 01:01:11.057082 kernel: ACPI: Core revision 20210730 Nov 1 01:01:11.059586 kernel: Failed to register legacy timer interrupt Nov 1 01:01:11.059599 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 01:01:11.059611 kernel: Hyper-V: Using IPI hypercalls Nov 1 01:01:11.059622 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Nov 1 01:01:11.059634 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 01:01:11.059646 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 01:01:11.059660 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 01:01:11.059673 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 01:01:11.059684 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 01:01:11.059700 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Nov 1 01:01:11.059712 kernel: RETBleed: Vulnerable Nov 1 01:01:11.059721 kernel: Speculative Store Bypass: Vulnerable Nov 1 01:01:11.059731 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 01:01:11.059739 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 01:01:11.059747 kernel: active return thunk: its_return_thunk Nov 1 01:01:11.059758 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 01:01:11.059767 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 01:01:11.059775 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 01:01:11.059783 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 01:01:11.059795 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 1 01:01:11.059803 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 1 01:01:11.059814 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 1 01:01:11.059821 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 01:01:11.059829 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 1 01:01:11.059839 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 1 01:01:11.059848 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 1 01:01:11.059857 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Nov 1 01:01:11.059864 kernel: Freeing SMP alternatives memory: 32K Nov 1 01:01:11.059874 kernel: pid_max: default: 32768 minimum: 301 Nov 1 01:01:11.059883 kernel: LSM: Security Framework initializing Nov 1 01:01:11.059893 kernel: SELinux: Initializing. 
Nov 1 01:01:11.059903 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 01:01:11.059910 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 01:01:11.059919 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 1 01:01:11.059928 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 1 01:01:11.059936 kernel: signal: max sigframe size: 3632 Nov 1 01:01:11.059946 kernel: rcu: Hierarchical SRCU implementation. Nov 1 01:01:11.059954 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 1 01:01:11.059961 kernel: smp: Bringing up secondary CPUs ... Nov 1 01:01:11.059970 kernel: x86: Booting SMP configuration: Nov 1 01:01:11.059979 kernel: .... node #0, CPUs: #1 Nov 1 01:01:11.059993 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Nov 1 01:01:11.060001 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 01:01:11.060010 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 01:01:11.060020 kernel: smpboot: Max logical packages: 1 Nov 1 01:01:11.060027 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Nov 1 01:01:11.060038 kernel: devtmpfs: initialized Nov 1 01:01:11.060045 kernel: x86/mm: Memory block size: 128MB Nov 1 01:01:11.060054 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 1 01:01:11.060065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 01:01:11.060076 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 01:01:11.060083 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 01:01:11.060099 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 01:01:11.060108 kernel: audit: initializing netlink subsys (disabled) Nov 1 01:01:11.060119 kernel: audit: type=2000 audit(1761958869.025:1): state=initialized audit_enabled=0 res=1 Nov 1 01:01:11.060126 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 01:01:11.060134 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 01:01:11.060144 kernel: cpuidle: using governor menu Nov 1 01:01:11.060156 kernel: ACPI: bus type PCI registered Nov 1 01:01:11.060165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 01:01:11.060172 kernel: dca service started, version 1.12.1 Nov 1 01:01:11.060183 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 01:01:11.060191 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 01:01:11.060201 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 01:01:11.060208 kernel: ACPI: Added _OSI(Module Device) Nov 1 01:01:11.060215 kernel: ACPI: Added _OSI(Processor Device) Nov 1 01:01:11.060228 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 01:01:11.060239 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 01:01:11.060248 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 01:01:11.060255 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 01:01:11.060266 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 01:01:11.060275 kernel: ACPI: Interpreter enabled Nov 1 01:01:11.060285 kernel: ACPI: PM: (supports S0 S5) Nov 1 01:01:11.060293 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 01:01:11.060303 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 01:01:11.060311 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 1 01:01:11.060324 kernel: iommu: Default domain type: Translated Nov 1 01:01:11.060332 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 01:01:11.060339 kernel: vgaarb: loaded Nov 1 01:01:11.060351 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 01:01:11.060359 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 01:01:11.060369 kernel: PTP clock support registered Nov 1 01:01:11.060377 kernel: Registered efivars operations Nov 1 01:01:11.060384 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:01:11.060395 kernel: PCI: System does not support PCI Nov 1 01:01:11.060406 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Nov 1 01:01:11.060414 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:01:11.060422 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:01:11.060433 kernel: pnp: PnP ACPI init Nov 1 01:01:11.060441 kernel: pnp: PnP ACPI: found 3 devices Nov 1 01:01:11.060451 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:01:11.060459 kernel: NET: Registered PF_INET protocol family Nov 1 01:01:11.060468 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 1 01:01:11.060477 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 1 01:01:11.060491 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:01:11.060499 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 01:01:11.060507 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Nov 1 01:01:11.060517 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 1 01:01:11.060527 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 1 01:01:11.060536 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 1 01:01:11.060543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 01:01:11.060554 kernel: NET: Registered PF_XDP protocol family Nov 1 01:01:11.060562 kernel: PCI: CLS 0 bytes, default 64 Nov 1 01:01:11.060574 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:01:11.060581 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Nov 1 01:01:11.060591 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 01:01:11.060600 kernel: 
Initialise system trusted keyrings Nov 1 01:01:11.060610 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 1 01:01:11.060618 kernel: Key type asymmetric registered Nov 1 01:01:11.060626 kernel: Asymmetric key parser 'x509' registered Nov 1 01:01:11.060637 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 01:01:11.060648 kernel: io scheduler mq-deadline registered Nov 1 01:01:11.060658 kernel: io scheduler kyber registered Nov 1 01:01:11.060666 kernel: io scheduler bfq registered Nov 1 01:01:11.060676 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:01:11.060685 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:01:11.060694 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:01:11.060701 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 01:01:11.060712 kernel: i8042: PNP: No PS/2 controller found. Nov 1 01:01:11.060855 kernel: rtc_cmos 00:02: registered as rtc0 Nov 1 01:01:11.060951 kernel: rtc_cmos 00:02: setting system clock to 2025-11-01T01:01:10 UTC (1761958870) Nov 1 01:01:11.061036 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 1 01:01:11.061046 kernel: intel_pstate: CPU model not supported Nov 1 01:01:11.061054 kernel: efifb: probing for efifb Nov 1 01:01:11.061064 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 1 01:01:11.061072 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 1 01:01:11.061080 kernel: efifb: scrolling: redraw Nov 1 01:01:11.061098 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 1 01:01:11.061107 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 01:01:11.061119 kernel: fb0: EFI VGA frame buffer device Nov 1 01:01:11.061128 kernel: pstore: Registered efi as persistent store backend Nov 1 01:01:11.061138 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:01:11.061146 kernel: Segment Routing with IPv6 Nov 1 01:01:11.061154 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 01:01:11.061164 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:01:11.061173 kernel: Key type dns_resolver registered Nov 1 01:01:11.061183 kernel: IPI shorthand broadcast: enabled Nov 1 01:01:11.061190 kernel: sched_clock: Marking stable (816914600, 23257900)->(1039783800, -199611300) Nov 1 01:01:11.061200 kernel: registered taskstats version 1 Nov 1 01:01:11.061207 kernel: Loading compiled-in X.509 certificates Nov 1 01:01:11.061218 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 01:01:11.061227 kernel: Key type .fscrypt registered Nov 1 01:01:11.061236 kernel: Key type fscrypt-provisioning registered Nov 1 01:01:11.061244 kernel: pstore: Using crash dump compression: deflate Nov 1 01:01:11.061253 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 01:01:11.061262 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:01:11.061274 kernel: ima: No architecture policies found Nov 1 01:01:11.061282 kernel: clk: Disabling unused clocks Nov 1 01:01:11.061289 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 01:01:11.061300 kernel: Write protecting the kernel read-only data: 28672k Nov 1 01:01:11.061308 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 01:01:11.061315 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 01:01:11.061325 kernel: Run /init as init process Nov 1 01:01:11.061334 kernel: with arguments: Nov 1 01:01:11.061341 kernel: /init Nov 1 01:01:11.061350 kernel: with environment: Nov 1 01:01:11.061362 kernel: HOME=/ Nov 1 01:01:11.061371 kernel: TERM=linux Nov 1 01:01:11.061378 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 01:01:11.061388 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 01:01:11.061400 systemd[1]: Detected virtualization microsoft. Nov 1 01:01:11.061409 systemd[1]: Detected architecture x86-64. Nov 1 01:01:11.061419 systemd[1]: Running in initrd. Nov 1 01:01:11.061429 systemd[1]: No hostname configured, using default hostname. Nov 1 01:01:11.061437 systemd[1]: Hostname set to . Nov 1 01:01:11.061447 systemd[1]: Initializing machine ID from random generator. Nov 1 01:01:11.061455 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:01:11.061463 systemd[1]: Started systemd-ask-password-console.path. Nov 1 01:01:11.061473 systemd[1]: Reached target cryptsetup.target. Nov 1 01:01:11.061481 systemd[1]: Reached target paths.target. Nov 1 01:01:11.061489 systemd[1]: Reached target slices.target. Nov 1 01:01:11.061496 systemd[1]: Reached target swap.target. Nov 1 01:01:11.061509 systemd[1]: Reached target timers.target. Nov 1 01:01:11.061519 systemd[1]: Listening on iscsid.socket. Nov 1 01:01:11.061529 systemd[1]: Listening on iscsiuio.socket. Nov 1 01:01:11.061536 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 01:01:11.061546 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 01:01:11.061556 systemd[1]: Listening on systemd-journald.socket. Nov 1 01:01:11.061567 systemd[1]: Listening on systemd-networkd.socket. Nov 1 01:01:11.061577 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 01:01:11.061588 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 01:01:11.061596 systemd[1]: Reached target sockets.target. Nov 1 01:01:11.061607 systemd[1]: Starting kmod-static-nodes.service... Nov 1 01:01:11.061615 systemd[1]: Finished network-cleanup.service. Nov 1 01:01:11.061623 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:01:11.061634 systemd[1]: Starting systemd-journald.service... Nov 1 01:01:11.061644 systemd[1]: Starting systemd-modules-load.service... Nov 1 01:01:11.061653 systemd[1]: Starting systemd-resolved.service... Nov 1 01:01:11.061664 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 01:01:11.061673 systemd[1]: Finished kmod-static-nodes.service. Nov 1 01:01:11.061682 kernel: audit: type=1130 audit(1761958871.059:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 01:01:11.061696 systemd-journald[183]: Journal started Nov 1 01:01:11.061747 systemd-journald[183]: Runtime Journal (/run/log/journal/fbf266d1b1054a9eafd1052c47a8ea63) is 8.0M, max 159.0M, 151.0M free. Nov 1 01:01:11.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.053393 systemd-modules-load[184]: Inserted module 'overlay' Nov 1 01:01:11.079139 systemd[1]: Started systemd-journald.service. Nov 1 01:01:11.084858 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:01:11.106248 kernel: audit: type=1130 audit(1761958871.084:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.107167 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 01:01:11.117264 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 01:01:11.124192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 01:01:11.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.158150 kernel: audit: type=1130 audit(1761958871.105:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.158199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:01:11.158215 kernel: audit: type=1130 audit(1761958871.111:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.148803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 01:01:11.153303 systemd-resolved[185]: Positive Trust Anchors: Nov 1 01:01:11.153311 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:01:11.153347 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 01:01:11.156199 systemd-resolved[185]: Defaulting to hostname 'linux'. Nov 1 01:01:11.198345 systemd[1]: Started systemd-resolved.service. Nov 1 01:01:11.204808 systemd[1]: Reached target nss-lookup.target. 
Nov 1 01:01:11.208578 kernel: Bridge firewalling registered Nov 1 01:01:11.205127 systemd-modules-load[184]: Inserted module 'br_netfilter' Nov 1 01:01:11.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.240273 kernel: audit: type=1130 audit(1761958871.198:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.240323 kernel: audit: type=1130 audit(1761958871.204:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.240737 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 01:01:11.245342 systemd[1]: Starting dracut-cmdline.service... Nov 1 01:01:11.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.259678 dracut-cmdline[200]: dracut-dracut-053 Nov 1 01:01:11.270612 kernel: audit: type=1130 audit(1761958871.242:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.270642 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 01:01:11.302114 kernel: SCSI subsystem initialized Nov 1 01:01:11.327665 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 01:01:11.327741 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:01:11.332948 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 01:01:11.338151 kernel: Loading iSCSI transport class v2.0-870. Nov 1 01:01:11.337530 systemd-modules-load[184]: Inserted module 'dm_multipath' Nov 1 01:01:11.341356 systemd[1]: Finished systemd-modules-load.service. Nov 1 01:01:11.363406 kernel: audit: type=1130 audit(1761958871.344:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.345286 systemd[1]: Starting systemd-sysctl.service... Nov 1 01:01:11.366346 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 01:01:11.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.385114 kernel: audit: type=1130 audit(1761958871.369:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.393115 kernel: iscsi: registered transport (tcp) Nov 1 01:01:11.420240 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:01:11.420319 kernel: QLogic iSCSI HBA Driver Nov 1 01:01:11.449711 systemd[1]: Finished dracut-cmdline.service. Nov 1 01:01:11.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.454926 systemd[1]: Starting dracut-pre-udev.service... Nov 1 01:01:11.506117 kernel: raid6: avx512x4 gen() 18364 MB/s Nov 1 01:01:11.526111 kernel: raid6: avx512x4 xor() 8438 MB/s Nov 1 01:01:11.546108 kernel: raid6: avx512x2 gen() 18382 MB/s Nov 1 01:01:11.566109 kernel: raid6: avx512x2 xor() 29729 MB/s Nov 1 01:01:11.586104 kernel: raid6: avx512x1 gen() 18402 MB/s Nov 1 01:01:11.606104 kernel: raid6: avx512x1 xor() 26585 MB/s Nov 1 01:01:11.627108 kernel: raid6: avx2x4 gen() 18405 MB/s Nov 1 01:01:11.647104 kernel: raid6: avx2x4 xor() 7548 MB/s Nov 1 01:01:11.667116 kernel: raid6: avx2x2 gen() 18227 MB/s Nov 1 01:01:11.687106 kernel: raid6: avx2x2 xor() 22108 MB/s Nov 1 01:01:11.707104 kernel: raid6: avx2x1 gen() 13694 MB/s Nov 1 01:01:11.727104 kernel: raid6: avx2x1 xor() 19406 MB/s Nov 1 01:01:11.747107 kernel: raid6: sse2x4 gen() 11682 MB/s Nov 1 01:01:11.767114 kernel: raid6: sse2x4 xor() 7389 MB/s Nov 1 01:01:11.788101 kernel: raid6: sse2x2 gen() 12798 MB/s Nov 1 01:01:11.808110 kernel: raid6: sse2x2 xor() 7451 MB/s Nov 1 01:01:11.828106 kernel: raid6: sse2x1 gen() 11490 MB/s Nov 1 01:01:11.850954 kernel: raid6: sse2x1 xor() 5917 MB/s Nov 1 01:01:11.850995 kernel: raid6: using algorithm avx2x4 gen() 18405 MB/s Nov 1 01:01:11.851010 kernel: raid6: .... xor() 7548 MB/s, rmw enabled Nov 1 01:01:11.854356 kernel: raid6: using avx512x2 recovery algorithm Nov 1 01:01:11.874116 kernel: xor: automatically using best checksumming function avx Nov 1 01:01:11.971119 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 01:01:11.979379 systemd[1]: Finished dracut-pre-udev.service. Nov 1 01:01:11.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:11.985000 audit: BPF prog-id=7 op=LOAD Nov 1 01:01:11.985000 audit: BPF prog-id=8 op=LOAD Nov 1 01:01:11.985764 systemd[1]: Starting systemd-udevd.service... Nov 1 01:01:12.000433 systemd-udevd[383]: Using default interface naming scheme 'v252'. Nov 1 01:01:12.005185 systemd[1]: Started systemd-udevd.service. Nov 1 01:01:12.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:12.012818 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 01:01:12.029860 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Nov 1 01:01:12.061843 systemd[1]: Finished dracut-pre-trigger.service. 
Nov 1 01:01:12.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:12.067273 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 01:01:12.102136 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 01:01:12.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:12.150115 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 01:01:12.160111 kernel: hv_vmbus: Vmbus version:5.2 Nov 1 01:01:12.177111 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 1 01:01:12.188075 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 01:01:12.188143 kernel: AES CTR mode by8 optimization enabled Nov 1 01:01:12.207108 kernel: hv_vmbus: registering driver hv_storvsc Nov 1 01:01:12.226223 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 1 01:01:12.226278 kernel: scsi host1: storvsc_host_t Nov 1 01:01:12.226315 kernel: scsi host0: storvsc_host_t Nov 1 01:01:12.235119 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 1 01:01:12.235189 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 1 01:01:12.246110 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:01:12.246161 kernel: hv_vmbus: registering driver hv_netvsc Nov 1 01:01:12.266114 kernel: hv_vmbus: registering driver hid_hyperv Nov 1 01:01:12.277354 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 1 01:01:12.277416 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 1 01:01:12.285648 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 1 01:01:12.292679 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 01:01:12.292708 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 1 01:01:12.324679 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 1 01:01:12.347263 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 1 01:01:12.347456 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 01:01:12.347638 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 1 01:01:12.347752 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 1 01:01:12.347852 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:01:12.347869 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 01:01:12.426380 kernel: hv_netvsc 7c1e522e-80e2-7c1e-522e-80e27c1e522e eth0: VF slot 1 added Nov 1 01:01:12.426654 kernel: hv_netvsc 7c1e522e-80e2-7c1e-522e-80e27c1e522e eth0: VF slot 1 removed Nov 1 01:01:12.439111 kernel: hv_vmbus: registering driver hv_pci Nov 1 01:01:12.707573 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 01:01:12.731114 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (436) Nov 1 01:01:12.744554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 01:01:12.895851 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 01:01:12.930978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 01:01:12.938217 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Nov 1 01:01:12.944518 systemd[1]: Starting disk-uuid.service... Nov 1 01:01:12.959115 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:01:12.970115 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:01:12.977115 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:01:13.500440 kernel: hv_netvsc 7c1e522e-80e2-7c1e-522e-80e27c1e522e eth0: VF slot 1 added Nov 1 01:01:13.507241 kernel: hv_pci 57c1acf7-2483-4d02-aada-5dd4f9682d8f: PCI VMBus probing: Using version 0x10004 Nov 1 01:01:13.565722 kernel: hv_pci 57c1acf7-2483-4d02-aada-5dd4f9682d8f: PCI host bridge to bus 2483:00 Nov 1 01:01:13.565889 kernel: pci_bus 2483:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 1 01:01:13.566057 kernel: pci_bus 2483:00: No busn resource found for root bus, will use [bus 00-ff] Nov 1 01:01:13.566236 kernel: pci 2483:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 1 01:01:13.566417 kernel: pci 2483:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 01:01:13.566576 kernel: pci 2483:00:02.0: enabling Extended Tags Nov 1 01:01:13.566730 kernel: pci 2483:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2483:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 1 01:01:13.566886 kernel: pci_bus 2483:00: busn_res: [bus 00-ff] end is updated to 00 Nov 1 01:01:13.567033 kernel: pci 2483:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 01:01:13.659321 kernel: mlx5_core 2483:00:02.0: enabling device (0000 -> 0002) Nov 1 01:01:13.914353 kernel: mlx5_core 2483:00:02.0: firmware version: 14.30.5006 Nov 1 01:01:13.914533 kernel: mlx5_core 2483:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 01:01:13.914638 kernel: mlx5_core 2483:00:02.0: Supported tc offload range - chains: 1, prios: 1 Nov 1 01:01:13.914737 kernel: mlx5_core 2483:00:02.0: mlx5e_tc_post_act_init:40:(pid 187): firmware level support is missing Nov 1 01:01:13.914836 kernel: hv_netvsc 7c1e522e-80e2-7c1e-522e-80e27c1e522e eth0: VF registering: eth1 Nov 1 01:01:13.914931 kernel: mlx5_core 2483:00:02.0 eth1: joined to eth0 Nov 1 01:01:13.922140 kernel: mlx5_core 2483:00:02.0 enP9347s1: renamed from eth1 Nov 1 01:01:13.981116 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 01:01:13.981415 disk-uuid[555]: The operation has completed successfully. Nov 1 01:01:14.059668 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:01:14.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.059777 systemd[1]: Finished disk-uuid.service. Nov 1 01:01:14.071021 systemd[1]: Starting verity-setup.service... Nov 1 01:01:14.103108 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 01:01:14.373497 systemd[1]: Found device dev-mapper-usr.device. Nov 1 01:01:14.377912 systemd[1]: Mounting sysusr-usr.mount... Nov 1 01:01:14.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.383483 systemd[1]: Finished verity-setup.service. Nov 1 01:01:14.463946 systemd[1]: Mounted sysusr-usr.mount. 
Nov 1 01:01:14.469573 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 01:01:14.466293 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 01:01:14.467054 systemd[1]: Starting ignition-setup.service... Nov 1 01:01:14.478128 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 01:01:14.504180 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:01:14.504240 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:01:14.504261 kernel: BTRFS info (device sda6): has skinny extents Nov 1 01:01:14.549774 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 01:01:14.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.552000 audit: BPF prog-id=9 op=LOAD Nov 1 01:01:14.553534 systemd[1]: Starting systemd-networkd.service... Nov 1 01:01:14.580286 systemd-networkd[832]: lo: Link UP Nov 1 01:01:14.580295 systemd-networkd[832]: lo: Gained carrier Nov 1 01:01:14.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.580851 systemd-networkd[832]: Enumeration completed Nov 1 01:01:14.581216 systemd[1]: Started systemd-networkd.service. Nov 1 01:01:14.583821 systemd-networkd[832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:01:14.584632 systemd[1]: Reached target network.target. Nov 1 01:01:14.588879 systemd[1]: Starting iscsiuio.service... Nov 1 01:01:14.606680 systemd[1]: Started iscsiuio.service. Nov 1 01:01:14.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.611064 systemd[1]: Starting iscsid.service... Nov 1 01:01:14.614637 iscsid[838]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 01:01:14.614637 iscsid[838]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 01:01:14.614637 iscsid[838]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 01:01:14.614637 iscsid[838]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 01:01:14.614637 iscsid[838]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 01:01:14.614637 iscsid[838]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 01:01:14.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.616267 systemd[1]: Started iscsid.service. Nov 1 01:01:14.632231 systemd[1]: Starting dracut-initqueue.service... Nov 1 01:01:14.656047 systemd[1]: Finished dracut-initqueue.service.
Nov 1 01:01:14.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.658248 systemd[1]: Reached target remote-fs-pre.target. Nov 1 01:01:14.674166 kernel: mlx5_core 2483:00:02.0 enP9347s1: Link up Nov 1 01:01:14.674349 kernel: buffer_size[0]=0 is not enough for lossless buffer Nov 1 01:01:14.665894 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 01:01:14.672920 systemd[1]: Reached target remote-fs.target. Nov 1 01:01:14.680742 systemd[1]: Starting dracut-pre-mount.service... Nov 1 01:01:14.693142 systemd[1]: Finished dracut-pre-mount.service. Nov 1 01:01:14.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.700812 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 01:01:14.712110 kernel: hv_netvsc 7c1e522e-80e2-7c1e-522e-80e27c1e522e eth0: Data path switched to VF: enP9347s1 Nov 1 01:01:14.712308 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:01:14.716702 systemd-networkd[832]: enP9347s1: Link UP Nov 1 01:01:14.716849 systemd-networkd[832]: eth0: Link UP Nov 1 01:01:14.717073 systemd-networkd[832]: eth0: Gained carrier Nov 1 01:01:14.723045 systemd-networkd[832]: enP9347s1: Gained carrier Nov 1 01:01:14.754201 systemd-networkd[832]: eth0: DHCPv4 address 10.200.4.9/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 1 01:01:14.875251 systemd[1]: Finished ignition-setup.service. Nov 1 01:01:14.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:14.879819 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 01:01:16.349315 systemd-networkd[832]: eth0: Gained IPv6LL Nov 1 01:01:18.996988 ignition[857]: Ignition 2.14.0 Nov 1 01:01:18.997002 ignition[857]: Stage: fetch-offline Nov 1 01:01:18.997084 ignition[857]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:01:18.997142 ignition[857]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 01:01:19.136776 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 01:01:19.136965 ignition[857]: parsed url from cmdline: "" Nov 1 01:01:19.136969 ignition[857]: no config URL provided Nov 1 01:01:19.136975 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:01:19.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.141870 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 01:01:19.166522 kernel: kauditd_printk_skb: 18 callbacks suppressed Nov 1 01:01:19.166557 kernel: audit: type=1130 audit(1761958879.143:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.136983 ignition[857]: no config at "/usr/lib/ignition/user.ign" Nov 1 01:01:19.146842 systemd[1]: Starting ignition-fetch.service... 
Nov 1 01:01:19.136989 ignition[857]: failed to fetch config: resource requires networking Nov 1 01:01:19.139643 ignition[857]: Ignition finished successfully Nov 1 01:01:19.155340 ignition[863]: Ignition 2.14.0 Nov 1 01:01:19.155347 ignition[863]: Stage: fetch Nov 1 01:01:19.155455 ignition[863]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:01:19.155479 ignition[863]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 01:01:19.183837 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 01:01:19.184447 ignition[863]: parsed url from cmdline: "" Nov 1 01:01:19.184451 ignition[863]: no config URL provided Nov 1 01:01:19.184458 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:01:19.184468 ignition[863]: no config at "/usr/lib/ignition/user.ign" Nov 1 01:01:19.184503 ignition[863]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 1 01:01:19.336481 ignition[863]: GET result: OK Nov 1 01:01:19.336677 ignition[863]: config has been read from IMDS userdata Nov 1 01:01:19.336693 ignition[863]: parsing config with SHA512: 2ad4a75327e3811863632bb9fda1ac11801d73044ba5714eb588e40fa408ae72356f6becafa979920bdd6b0471638c8275ba45a09addc536b41ae8d01423ca88 Nov 1 01:01:19.342417 unknown[863]: fetched base config from "system" Nov 1 01:01:19.344576 unknown[863]: fetched base config from "system" Nov 1 01:01:19.344595 unknown[863]: fetched user config from "azure" Nov 1 01:01:19.399787 ignition[863]: fetch: fetch complete Nov 1 01:01:19.399801 ignition[863]: fetch: fetch passed Nov 1 01:01:19.403080 ignition[863]: Ignition finished successfully Nov 1 01:01:19.406534 systemd[1]: Finished ignition-fetch.service. Nov 1 01:01:19.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.423116 kernel: audit: type=1130 audit(1761958879.408:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.421510 systemd[1]: Starting ignition-kargs.service... Nov 1 01:01:19.432971 ignition[869]: Ignition 2.14.0 Nov 1 01:01:19.432982 ignition[869]: Stage: kargs Nov 1 01:01:19.433133 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:01:19.433167 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 01:01:19.437850 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 01:01:19.439186 ignition[869]: kargs: kargs passed Nov 1 01:01:19.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.442468 systemd[1]: Finished ignition-kargs.service. Nov 1 01:01:19.459879 kernel: audit: type=1130 audit(1761958879.445:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:19.439239 ignition[869]: Ignition finished successfully Nov 1 01:01:19.462220 systemd[1]: Starting ignition-disks.service... Nov 1 01:01:19.466013 ignition[875]: Ignition 2.14.0 Nov 1 01:01:19.467135 ignition[875]: Stage: disks Nov 1 01:01:19.467267 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:01:19.467305 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 01:01:19.474461 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 01:01:19.500190 kernel: audit: type=1130 audit(1761958879.477:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.475842 ignition[875]: disks: disks passed Nov 1 01:01:19.477316 systemd[1]: Finished ignition-disks.service. Nov 1 01:01:19.475890 ignition[875]: Ignition finished successfully Nov 1 01:01:19.478587 systemd[1]: Reached target initrd-root-device.target. Nov 1 01:01:19.478949 systemd[1]: Reached target local-fs-pre.target. Nov 1 01:01:19.479356 systemd[1]: Reached target local-fs.target. Nov 1 01:01:19.479849 systemd[1]: Reached target sysinit.target. Nov 1 01:01:19.480252 systemd[1]: Reached target basic.target. Nov 1 01:01:19.494183 systemd[1]: Starting systemd-fsck-root.service... Nov 1 01:01:19.562472 systemd-fsck[883]: ROOT: clean, 637/7326000 files, 481088/7359488 blocks Nov 1 01:01:19.567075 systemd[1]: Finished systemd-fsck-root.service. Nov 1 01:01:19.585547 kernel: audit: type=1130 audit(1761958879.569:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:19.583083 systemd[1]: Mounting sysroot.mount... Nov 1 01:01:19.607112 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 01:01:19.607376 systemd[1]: Mounted sysroot.mount. Nov 1 01:01:19.610837 systemd[1]: Reached target initrd-root-fs.target. Nov 1 01:01:19.648004 systemd[1]: Mounting sysroot-usr.mount... Nov 1 01:01:19.654012 systemd[1]: Starting flatcar-metadata-hostname.service... Nov 1 01:01:19.658499 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 01:01:19.658540 systemd[1]: Reached target ignition-diskful.target. Nov 1 01:01:19.666385 systemd[1]: Mounted sysroot-usr.mount. Nov 1 01:01:19.714597 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 01:01:19.719967 systemd[1]: Starting initrd-setup-root.service... 
Nov 1 01:01:19.737113 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (894) Nov 1 01:01:19.742473 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 01:01:19.752892 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:01:19.752915 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:01:19.752925 kernel: BTRFS info (device sda6): has skinny extents Nov 1 01:01:19.760371 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Nov 1 01:01:19.778273 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 01:01:19.798038 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 01:01:19.919704 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 01:01:20.310779 systemd[1]: Finished initrd-setup-root.service. Nov 1 01:01:20.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:20.316344 systemd[1]: Starting ignition-mount.service... Nov 1 01:01:20.329116 kernel: audit: type=1130 audit(1761958880.315:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:20.332838 systemd[1]: Starting sysroot-boot.service... Nov 1 01:01:20.335929 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 01:01:20.336036 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 01:01:20.362513 systemd[1]: Finished sysroot-boot.service. Nov 1 01:01:20.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:20.377106 kernel: audit: type=1130 audit(1761958880.364:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:20.775019 ignition[964]: INFO : Ignition 2.14.0 Nov 1 01:01:20.775019 ignition[964]: INFO : Stage: mount Nov 1 01:01:20.778913 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:01:20.778913 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 01:01:20.792577 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 01:01:20.796373 ignition[964]: INFO : mount: mount passed Nov 1 01:01:20.798253 ignition[964]: INFO : Ignition finished successfully Nov 1 01:01:20.801048 systemd[1]: Finished ignition-mount.service. Nov 1 01:01:20.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:20.816112 kernel: audit: type=1130 audit(1761958880.802:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:21.043536 coreos-metadata[893]: Nov 01 01:01:21.043 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 1 01:01:21.059775 coreos-metadata[893]: Nov 01 01:01:21.059 INFO Fetch successful Nov 1 01:01:21.094661 coreos-metadata[893]: Nov 01 01:01:21.094 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 1 01:01:21.109497 coreos-metadata[893]: Nov 01 01:01:21.109 INFO Fetch successful Nov 1 01:01:21.127057 coreos-metadata[893]: Nov 01 01:01:21.127 INFO wrote hostname ci-3510.3.8-n-16445aab1e to /sysroot/etc/hostname Nov 1 01:01:21.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:21.129354 systemd[1]: Finished flatcar-metadata-hostname.service. Nov 1 01:01:21.149133 kernel: audit: type=1130 audit(1761958881.134:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:21.135778 systemd[1]: Starting ignition-files.service... Nov 1 01:01:21.155715 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 01:01:21.171116 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (972) Nov 1 01:01:21.181781 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:01:21.181821 kernel: BTRFS info (device sda6): using free space tree Nov 1 01:01:21.181833 kernel: BTRFS info (device sda6): has skinny extents Nov 1 01:01:21.392957 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 01:01:21.406349 ignition[991]: INFO : Ignition 2.14.0 Nov 1 01:01:21.406349 ignition[991]: INFO : Stage: files Nov 1 01:01:21.410224 ignition[991]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:01:21.410224 ignition[991]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 01:01:21.423509 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 01:01:21.452605 ignition[991]: DEBUG : files: compiled without relabeling support, skipping Nov 1 01:01:21.456315 ignition[991]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 01:01:21.456315 ignition[991]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 01:01:21.504164 ignition[991]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 01:01:21.508287 ignition[991]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 01:01:21.518223 unknown[991]: wrote ssh authorized keys file for user: core Nov 1 01:01:21.520994 ignition[991]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 01:01:21.536945 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Nov 1 01:01:21.541570 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 01:01:21.546006 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:01:21.550555 ignition[991]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:01:21.555427 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 01:01:21.555427 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 01:01:21.555427 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Nov 1 01:01:21.555427 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem398844257" Nov 1 01:01:21.579520 ignition[991]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem398844257": device or resource busy Nov 1 01:01:21.579520 ignition[991]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem398844257", trying btrfs: device or resource busy Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem398844257" Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem398844257" Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem398844257" Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem398844257" Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem420168246" Nov 1 01:01:21.579520 ignition[991]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem420168246": device or resource busy Nov 1 01:01:21.579520 ignition[991]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem420168246", trying btrfs: device or resource busy Nov 1 01:01:21.579520 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem420168246" Nov 1 01:01:21.568426 systemd[1]: mnt-oem398844257.mount: Deactivated successfully. 
Nov 1 01:01:21.660391 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem420168246" Nov 1 01:01:21.660391 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem420168246" Nov 1 01:01:21.660391 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem420168246" Nov 1 01:01:21.660391 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 01:01:21.660391 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 01:01:21.660391 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 01:01:21.937691 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Nov 1 01:01:22.108687 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 01:01:22.108687 ignition[991]: INFO : files: op(f): [started] processing unit "waagent.service" Nov 1 01:01:22.108687 ignition[991]: INFO : files: op(f): [finished] processing unit "waagent.service" Nov 1 01:01:22.108687 ignition[991]: INFO : files: op(10): [started] processing unit "nvidia.service" Nov 1 01:01:22.108687 ignition[991]: INFO : files: op(10): [finished] processing unit "nvidia.service" Nov 1 01:01:22.129269 ignition[991]: INFO : files: op(11): [started] setting preset to enabled for "nvidia.service" Nov 1 01:01:22.129269 ignition[991]: INFO : files: op(11): [finished] setting preset to enabled for "nvidia.service" Nov 1 01:01:22.129269 ignition[991]: INFO : files: op(12): [started] setting preset to enabled for "waagent.service" Nov 1 01:01:22.129269 ignition[991]: INFO : files: op(12): [finished] setting preset to enabled for "waagent.service" Nov 1 01:01:22.129269 ignition[991]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:01:22.129269 ignition[991]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:01:22.129269 ignition[991]: INFO : files: files passed Nov 1 01:01:22.129269 ignition[991]: INFO : Ignition finished successfully Nov 1 01:01:22.167549 kernel: audit: type=1130 audit(1761958882.132:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.115160 systemd[1]: Finished ignition-files.service. Nov 1 01:01:22.135568 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 01:01:22.151534 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
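Further up, coreos-metadata[893] fetches the instance name from IMDS and writes it to /sysroot/etc/hostname on behalf of flatcar-metadata-hostname.service. A rough Python equivalent is sketched below; the endpoint is copied from the log, while the "Metadata: true" header and the /etc/hostname target path (outside the initrd's /sysroot prefix) are assumptions.

    # Rough equivalent of the coreos-metadata[893] steps logged above: fetch
    # the compute name from IMDS and write it out as the hostname.  The
    # endpoint is copied from the log; the "Metadata: true" header and the
    # target path /etc/hostname are assumptions.
    import urllib.request

    IMDS_NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                     "?api-version=2017-08-01&format=text")

    def fetch_instance_name(timeout: float = 5.0) -> str:
        req = urllib.request.Request(IMDS_NAME_URL, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode().strip()

    if __name__ == "__main__":
        with open("/etc/hostname", "w") as f:
            f.write(fetch_instance_name() + "\n")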
Nov 1 01:01:22.180275 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:01:22.160710 systemd[1]: Starting ignition-quench.service... Nov 1 01:01:22.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.176129 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 01:01:22.191763 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 01:01:22.194252 systemd[1]: Finished ignition-quench.service. Nov 1 01:01:22.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.198302 systemd[1]: Reached target ignition-complete.target. Nov 1 01:01:22.203602 systemd[1]: Starting initrd-parse-etc.service... Nov 1 01:01:22.217545 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 01:01:22.217660 systemd[1]: Finished initrd-parse-etc.service. Nov 1 01:01:22.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.224238 systemd[1]: Reached target initrd-fs.target. Nov 1 01:01:22.228163 systemd[1]: Reached target initrd.target. Nov 1 01:01:22.232244 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 01:01:22.236241 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 01:01:22.247138 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 01:01:22.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.252440 systemd[1]: Starting initrd-cleanup.service... Nov 1 01:01:22.262632 systemd[1]: Stopped target nss-lookup.target. Nov 1 01:01:22.264800 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 01:01:22.269080 systemd[1]: Stopped target timers.target. Nov 1 01:01:22.273254 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 01:01:22.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.273415 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 01:01:22.277564 systemd[1]: Stopped target initrd.target. Nov 1 01:01:22.282776 systemd[1]: Stopped target basic.target. Nov 1 01:01:22.289156 systemd[1]: Stopped target ignition-complete.target. Nov 1 01:01:22.293602 systemd[1]: Stopped target ignition-diskful.target. Nov 1 01:01:22.295856 systemd[1]: Stopped target initrd-root-device.target. Nov 1 01:01:22.299964 systemd[1]: Stopped target remote-fs.target. 
Nov 1 01:01:22.304321 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 01:01:22.308487 systemd[1]: Stopped target sysinit.target. Nov 1 01:01:22.312419 systemd[1]: Stopped target local-fs.target. Nov 1 01:01:22.316408 systemd[1]: Stopped target local-fs-pre.target. Nov 1 01:01:22.320516 systemd[1]: Stopped target swap.target. Nov 1 01:01:22.324282 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 01:01:22.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.324435 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 01:01:22.328172 systemd[1]: Stopped target cryptsetup.target. Nov 1 01:01:22.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.332439 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 01:01:22.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.332593 systemd[1]: Stopped dracut-initqueue.service. Nov 1 01:01:22.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.336830 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 01:01:22.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.336966 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 01:01:22.341330 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 01:01:22.341463 systemd[1]: Stopped ignition-files.service. Nov 1 01:01:22.345708 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 01:01:22.345833 systemd[1]: Stopped flatcar-metadata-hostname.service. Nov 1 01:01:22.366313 iscsid[838]: iscsid shutting down. Nov 1 01:01:22.366738 ignition[1029]: INFO : Ignition 2.14.0 Nov 1 01:01:22.366738 ignition[1029]: INFO : Stage: umount Nov 1 01:01:22.366738 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 01:01:22.366738 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 01:01:22.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.351008 systemd[1]: Stopping ignition-mount.service... Nov 1 01:01:22.390073 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 01:01:22.390073 ignition[1029]: INFO : umount: umount passed Nov 1 01:01:22.390073 ignition[1029]: INFO : Ignition finished successfully Nov 1 01:01:22.364664 systemd[1]: Stopping iscsid.service... Nov 1 01:01:22.370372 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
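Each Ignition stage in this log (fetch, kargs, disks, mount, files, and the umount stage just above) reports the same SHA512 for the base config it parsed from /usr/lib/ignition/base.d/base.ign. That digest can be reproduced with a few lines of Python; the path and expected value are taken from the log, the rest is illustrative.

    # Recompute the SHA512 digest that the Ignition stages above report for
    # the base config, and compare it with the logged value.  Path and digest
    # are copied from the log; running this requires access to the same file.
    import hashlib

    BASE_CONFIG = "/usr/lib/ignition/base.d/base.ign"
    LOGGED_SHA512 = ("4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728"
                     "d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63")

    def sha512_of(path: str) -> str:
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        digest = sha512_of(BASE_CONFIG)
        print("match" if digest == LOGGED_SHA512 else "mismatch: " + digest)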
Nov 1 01:01:22.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.370572 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 01:01:22.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.397791 systemd[1]: Stopping sysroot-boot.service... Nov 1 01:01:22.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.401305 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 01:01:22.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.401491 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 01:01:22.404205 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 01:01:22.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.404355 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 01:01:22.408907 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 01:01:22.409018 systemd[1]: Stopped iscsid.service. Nov 1 01:01:22.416357 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 01:01:22.416457 systemd[1]: Stopped ignition-mount.service. Nov 1 01:01:22.419375 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 01:01:22.419471 systemd[1]: Stopped ignition-disks.service. Nov 1 01:01:22.422935 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 01:01:22.422986 systemd[1]: Stopped ignition-kargs.service. Nov 1 01:01:22.425046 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 01:01:22.425109 systemd[1]: Stopped ignition-fetch.service. Nov 1 01:01:22.427379 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 01:01:22.427419 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 01:01:22.433727 systemd[1]: Stopped target paths.target. Nov 1 01:01:22.437746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 01:01:22.443779 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 01:01:22.446633 systemd[1]: Stopped target slices.target. Nov 1 01:01:22.448543 systemd[1]: Stopped target sockets.target. Nov 1 01:01:22.451809 systemd[1]: iscsid.socket: Deactivated successfully. 
Nov 1 01:01:22.454569 systemd[1]: Closed iscsid.socket. Nov 1 01:01:22.485685 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 01:01:22.485768 systemd[1]: Stopped ignition-setup.service. Nov 1 01:01:22.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.492159 systemd[1]: Stopping iscsiuio.service... Nov 1 01:01:22.496909 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 01:01:22.499630 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 01:01:22.501994 systemd[1]: Stopped iscsiuio.service. Nov 1 01:01:22.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.505832 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 01:01:22.508088 systemd[1]: Finished initrd-cleanup.service. Nov 1 01:01:22.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.512894 systemd[1]: Stopped target network.target. Nov 1 01:01:22.516643 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 01:01:22.516696 systemd[1]: Closed iscsiuio.socket. Nov 1 01:01:22.522477 systemd[1]: Stopping systemd-networkd.service... Nov 1 01:01:22.526538 systemd[1]: Stopping systemd-resolved.service... Nov 1 01:01:22.527136 systemd-networkd[832]: eth0: DHCPv6 lease lost Nov 1 01:01:22.532711 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 01:01:22.532827 systemd[1]: Stopped systemd-networkd.service. Nov 1 01:01:22.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.540552 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 01:01:22.542826 systemd[1]: Stopped systemd-resolved.service. Nov 1 01:01:22.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.547000 audit: BPF prog-id=9 op=UNLOAD Nov 1 01:01:22.547000 audit: BPF prog-id=6 op=UNLOAD Nov 1 01:01:22.547453 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 01:01:22.547497 systemd[1]: Closed systemd-networkd.socket. Nov 1 01:01:22.554431 systemd[1]: Stopping network-cleanup.service... Nov 1 01:01:22.558440 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 01:01:22.558503 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 01:01:22.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.563617 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
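Most state changes in this teardown appear twice: once as a systemd status line and once as an audit SERVICE_START/SERVICE_STOP record. Purely as an illustration, the unit name and result can be pulled out of such records as sketched below; the regular expression is an assumption fitted to the records shown in this log, not a general audit parser.

    # Illustrative only: extract type, unit, and result from the audit
    # SERVICE_START / SERVICE_STOP records shown in this log.
    import re

    AUDIT_RE = re.compile(
        r"audit\[\d+\]: (?P<type>SERVICE_START|SERVICE_STOP) .*?"
        r"unit=(?P<unit>\S+) .*?res=(?P<res>\w+)")

    # Sample record copied from the log above.
    sample = ("Nov 1 01:01:22.505000 audit[1]: SERVICE_STOP pid=1 uid=0 "
              "auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio "
              "comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? "
              "addr=? terminal=? res=success'")

    m = AUDIT_RE.search(sample)
    if m:
        # prints: SERVICE_STOP iscsiuio success
        print(m.group("type"), m.group("unit"), m.group("res"))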
Nov 1 01:01:22.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.565543 systemd[1]: Stopped systemd-sysctl.service. Nov 1 01:01:22.570265 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 01:01:22.571989 systemd[1]: Stopped systemd-modules-load.service. Nov 1 01:01:22.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.578581 systemd[1]: Stopping systemd-udevd.service... Nov 1 01:01:22.584563 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 01:01:22.589320 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 01:01:22.592188 systemd[1]: Stopped systemd-udevd.service. Nov 1 01:01:22.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.597265 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 01:01:22.597340 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 01:01:22.601780 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 01:01:22.601829 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 01:01:22.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.606146 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 01:01:22.606201 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 01:01:22.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.612850 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 01:01:22.612903 systemd[1]: Stopped dracut-cmdline.service. Nov 1 01:01:22.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.618985 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:01:22.619035 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 01:01:22.630192 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 01:01:22.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.634710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:01:22.634782 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 01:01:22.652084 kernel: hv_netvsc 7c1e522e-80e2-7c1e-522e-80e27c1e522e eth0: Data path switched from VF: enP9347s1 Nov 1 01:01:22.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:22.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.642142 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 01:01:22.642238 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 01:01:22.666398 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 01:01:22.668775 systemd[1]: Stopped network-cleanup.service. Nov 1 01:01:22.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.795261 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 01:01:22.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.795380 systemd[1]: Stopped sysroot-boot.service. Nov 1 01:01:22.799815 systemd[1]: Reached target initrd-switch-root.target. Nov 1 01:01:22.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:22.804341 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 01:01:22.804404 systemd[1]: Stopped initrd-setup-root.service. Nov 1 01:01:22.809594 systemd[1]: Starting initrd-switch-root.service... Nov 1 01:01:22.824227 systemd[1]: Switching root. Nov 1 01:01:22.848163 systemd-journald[183]: Journal stopped Nov 1 01:01:42.282738 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Nov 1 01:01:42.282764 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 01:01:42.282775 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 01:01:42.282783 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 01:01:42.282791 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 01:01:42.282799 kernel: SELinux: policy capability open_perms=1 Nov 1 01:01:42.282810 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 01:01:42.282818 kernel: SELinux: policy capability always_check_network=0 Nov 1 01:01:42.282827 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 01:01:42.282835 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 01:01:42.282843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 01:01:42.282850 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 01:01:42.282858 kernel: kauditd_printk_skb: 42 callbacks suppressed Nov 1 01:01:42.282867 kernel: audit: type=1403 audit(1761958885.162:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 01:01:42.282882 systemd[1]: Successfully loaded SELinux policy in 260.156ms. Nov 1 01:01:42.282892 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.412ms. Nov 1 01:01:42.282902 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 01:01:42.282911 systemd[1]: Detected virtualization microsoft. 
Nov 1 01:01:42.282922 systemd[1]: Detected architecture x86-64. Nov 1 01:01:42.282931 systemd[1]: Detected first boot. Nov 1 01:01:42.282943 systemd[1]: Hostname set to . Nov 1 01:01:42.282954 systemd[1]: Initializing machine ID from random generator. Nov 1 01:01:42.282965 kernel: audit: type=1400 audit(1761958885.831:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 01:01:42.282975 kernel: audit: type=1400 audit(1761958885.846:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 01:01:42.282987 kernel: audit: type=1400 audit(1761958885.846:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 01:01:42.283001 kernel: audit: type=1334 audit(1761958885.860:85): prog-id=10 op=LOAD Nov 1 01:01:42.283010 kernel: audit: type=1334 audit(1761958885.860:86): prog-id=10 op=UNLOAD Nov 1 01:01:42.283018 kernel: audit: type=1334 audit(1761958885.872:87): prog-id=11 op=LOAD Nov 1 01:01:42.283027 kernel: audit: type=1334 audit(1761958885.872:88): prog-id=11 op=UNLOAD Nov 1 01:01:42.283039 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 01:01:42.283051 kernel: audit: type=1400 audit(1761958887.296:89): avc: denied { associate } for pid=1063 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 01:01:42.283061 kernel: audit: type=1300 audit(1761958887.296:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00018a1f2 a1=c000190210 a2=c000192600 a3=32 items=0 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:42.283075 systemd[1]: Populated /etc with preset unit settings. Nov 1 01:01:42.283086 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:01:42.283105 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:01:42.283119 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 01:01:42.283130 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 01:01:42.283139 kernel: audit: type=1334 audit(1761958901.469:91): prog-id=12 op=LOAD Nov 1 01:01:42.283148 kernel: audit: type=1334 audit(1761958901.469:92): prog-id=3 op=UNLOAD Nov 1 01:01:42.283162 kernel: audit: type=1334 audit(1761958901.475:93): prog-id=13 op=LOAD Nov 1 01:01:42.283177 kernel: audit: type=1334 audit(1761958901.480:94): prog-id=14 op=LOAD Nov 1 01:01:42.283186 kernel: audit: type=1334 audit(1761958901.480:95): prog-id=4 op=UNLOAD Nov 1 01:01:42.283196 kernel: audit: type=1334 audit(1761958901.480:96): prog-id=5 op=UNLOAD Nov 1 01:01:42.283207 kernel: audit: type=1334 audit(1761958901.485:97): prog-id=15 op=LOAD Nov 1 01:01:42.283218 kernel: audit: type=1334 audit(1761958901.485:98): prog-id=12 op=UNLOAD Nov 1 01:01:42.283230 kernel: audit: type=1334 audit(1761958901.504:99): prog-id=16 op=LOAD Nov 1 01:01:42.283239 kernel: audit: type=1334 audit(1761958901.509:100): prog-id=17 op=LOAD Nov 1 01:01:42.283251 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 01:01:42.283265 systemd[1]: Stopped initrd-switch-root.service. Nov 1 01:01:42.283275 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 01:01:42.283290 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 01:01:42.283302 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 01:01:42.283313 systemd[1]: Created slice system-getty.slice. Nov 1 01:01:42.283322 systemd[1]: Created slice system-modprobe.slice. Nov 1 01:01:42.283334 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 01:01:42.283345 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 01:01:42.283360 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 01:01:42.283369 systemd[1]: Created slice user.slice. Nov 1 01:01:42.283382 systemd[1]: Started systemd-ask-password-console.path. Nov 1 01:01:42.283394 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 01:01:42.283404 systemd[1]: Set up automount boot.automount. Nov 1 01:01:42.283415 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 01:01:42.283427 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 01:01:42.283440 systemd[1]: Stopped target initrd-fs.target. Nov 1 01:01:42.283449 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 01:01:42.283464 systemd[1]: Reached target integritysetup.target. Nov 1 01:01:42.283477 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 01:01:42.283488 systemd[1]: Reached target remote-fs.target. Nov 1 01:01:42.283500 systemd[1]: Reached target slices.target. Nov 1 01:01:42.283511 systemd[1]: Reached target swap.target. Nov 1 01:01:42.283522 systemd[1]: Reached target torcx.target. Nov 1 01:01:42.283534 systemd[1]: Reached target veritysetup.target. Nov 1 01:01:42.283547 systemd[1]: Listening on systemd-coredump.socket. Nov 1 01:01:42.283560 systemd[1]: Listening on systemd-initctl.socket. Nov 1 01:01:42.283570 systemd[1]: Listening on systemd-networkd.socket. Nov 1 01:01:42.283583 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 01:01:42.283595 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 01:01:42.283608 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 01:01:42.283620 systemd[1]: Mounting dev-hugepages.mount... Nov 1 01:01:42.283631 systemd[1]: Mounting dev-mqueue.mount... Nov 1 01:01:42.283644 systemd[1]: Mounting media.mount... 
Nov 1 01:01:42.283654 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:42.283666 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 01:01:42.283679 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 01:01:42.283689 systemd[1]: Mounting tmp.mount... Nov 1 01:01:42.283703 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 01:01:42.283717 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:01:42.283729 systemd[1]: Starting kmod-static-nodes.service... Nov 1 01:01:42.283742 systemd[1]: Starting modprobe@configfs.service... Nov 1 01:01:42.283753 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:01:42.283765 systemd[1]: Starting modprobe@drm.service... Nov 1 01:01:42.283776 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:01:42.283789 systemd[1]: Starting modprobe@fuse.service... Nov 1 01:01:42.283802 systemd[1]: Starting modprobe@loop.service... Nov 1 01:01:42.283812 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 01:01:42.283834 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 01:01:42.283846 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 01:01:42.283857 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 01:01:42.283869 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 01:01:42.283880 systemd[1]: Stopped systemd-journald.service. Nov 1 01:01:42.283894 systemd[1]: Starting systemd-journald.service... Nov 1 01:01:42.283905 kernel: loop: module loaded Nov 1 01:01:42.283916 systemd[1]: Starting systemd-modules-load.service... Nov 1 01:01:42.283929 systemd[1]: Starting systemd-network-generator.service... Nov 1 01:01:42.283942 systemd[1]: Starting systemd-remount-fs.service... Nov 1 01:01:42.283954 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 01:01:42.283968 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 01:01:42.283978 systemd[1]: Stopped verity-setup.service. Nov 1 01:01:42.283991 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:42.284004 systemd[1]: Mounted dev-hugepages.mount. Nov 1 01:01:42.284014 systemd[1]: Mounted dev-mqueue.mount. Nov 1 01:01:42.284027 systemd[1]: Mounted media.mount. Nov 1 01:01:42.284043 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 01:01:42.284055 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 01:01:42.284067 systemd[1]: Mounted tmp.mount. Nov 1 01:01:42.284080 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 01:01:42.284118 systemd[1]: Finished kmod-static-nodes.service. Nov 1 01:01:42.284135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:01:42.284156 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:01:42.284169 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:01:42.284183 systemd[1]: Finished modprobe@drm.service. Nov 1 01:01:42.284193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:01:42.284207 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:01:42.284223 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:01:42.284233 systemd[1]: Finished modprobe@loop.service. Nov 1 01:01:42.284243 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Nov 1 01:01:42.284253 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 01:01:42.284265 systemd[1]: Finished modprobe@configfs.service. Nov 1 01:01:42.284276 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 01:01:42.284289 systemd[1]: Finished systemd-remount-fs.service. Nov 1 01:01:42.284299 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 01:01:42.284313 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 01:01:42.284327 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 01:01:42.284339 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:01:42.284353 systemd[1]: Starting systemd-random-seed.service... Nov 1 01:01:42.284366 systemd[1]: Starting systemd-sysusers.service... Nov 1 01:01:42.284376 systemd[1]: Finished systemd-network-generator.service. Nov 1 01:01:42.284391 systemd-journald[1163]: Journal started Nov 1 01:01:42.284441 systemd-journald[1163]: Runtime Journal (/run/log/journal/ef0e4a91ee7a4230b66c4cf191547b2a) is 8.0M, max 159.0M, 151.0M free. Nov 1 01:01:25.162000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 01:01:25.831000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 01:01:25.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 01:01:25.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 01:01:25.860000 audit: BPF prog-id=10 op=LOAD Nov 1 01:01:25.860000 audit: BPF prog-id=10 op=UNLOAD Nov 1 01:01:25.872000 audit: BPF prog-id=11 op=LOAD Nov 1 01:01:25.872000 audit: BPF prog-id=11 op=UNLOAD Nov 1 01:01:27.296000 audit[1063]: AVC avc: denied { associate } for pid=1063 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 01:01:27.296000 audit[1063]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018a1f2 a1=c000190210 a2=c000192600 a3=32 items=0 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:27.296000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 01:01:27.305000 audit[1063]: AVC avc: denied { associate } for pid=1063 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 01:01:27.305000 audit[1063]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018a2c9 a2=1ed a3=0 items=2 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:27.305000 audit: CWD cwd="/" Nov 1 01:01:27.305000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:27.305000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:27.305000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 01:01:41.469000 audit: BPF prog-id=12 op=LOAD Nov 1 01:01:41.469000 audit: BPF prog-id=3 op=UNLOAD Nov 1 01:01:41.475000 audit: BPF prog-id=13 op=LOAD Nov 1 01:01:41.480000 audit: BPF prog-id=14 op=LOAD Nov 1 01:01:41.480000 audit: BPF prog-id=4 op=UNLOAD Nov 1 01:01:41.480000 audit: BPF prog-id=5 op=UNLOAD Nov 1 01:01:41.485000 audit: BPF prog-id=15 op=LOAD Nov 1 01:01:41.485000 audit: BPF prog-id=12 op=UNLOAD Nov 1 01:01:41.504000 audit: BPF prog-id=16 op=LOAD Nov 1 01:01:41.509000 audit: BPF prog-id=17 op=LOAD Nov 1 01:01:41.509000 audit: BPF prog-id=13 op=UNLOAD Nov 1 01:01:41.509000 audit: BPF prog-id=14 op=UNLOAD Nov 1 01:01:41.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:41.529000 audit: BPF prog-id=15 op=UNLOAD Nov 1 01:01:41.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:41.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:41.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:41.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:41.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:41.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:41.875000 audit: BPF prog-id=18 op=LOAD Nov 1 01:01:41.875000 audit: BPF prog-id=19 op=LOAD Nov 1 01:01:41.875000 audit: BPF prog-id=20 op=LOAD Nov 1 01:01:41.875000 audit: BPF prog-id=16 op=UNLOAD Nov 1 01:01:41.875000 audit: BPF prog-id=17 op=UNLOAD Nov 1 01:01:41.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:42.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.279000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 01:01:42.279000 audit[1163]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd1b7b2e00 a2=4000 a3=7ffd1b7b2e9c items=0 ppid=1 pid=1163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:42.279000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 01:01:42.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:27.186731 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:01:41.467768 systemd[1]: Queued start job for default target multi-user.target. Nov 1 01:01:27.187493 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 01:01:41.467781 systemd[1]: Unnecessary job was removed for dev-sda6.device. Nov 1 01:01:27.187515 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 01:01:41.509940 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 1 01:01:27.187554 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 01:01:27.187566 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 01:01:27.187618 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 01:01:27.187633 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 01:01:27.187868 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 01:01:27.187913 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 01:01:27.187927 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 01:01:27.276449 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 01:01:27.276514 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 01:01:27.276584 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 01:01:27.276611 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 01:01:27.276643 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 01:01:27.276659 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 01:01:39.265504 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:39Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 01:01:39.265735 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:39Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 01:01:39.265876 
/usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:39Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 01:01:39.266086 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:39Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 01:01:39.266165 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:39Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 01:01:39.266221 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2025-11-01T01:01:39Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 01:01:42.292985 systemd[1]: Started systemd-journald.service. Nov 1 01:01:42.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.301261 systemd[1]: Reached target network-pre.target. Nov 1 01:01:42.308334 systemd[1]: Starting systemd-journal-flush.service... Nov 1 01:01:42.310749 kernel: fuse: init (API version 7.34) Nov 1 01:01:42.311264 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 01:01:42.311459 systemd[1]: Finished modprobe@fuse.service. Nov 1 01:01:42.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.314819 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 01:01:42.320485 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 01:01:42.408339 systemd-journald[1163]: Time spent on flushing to /var/log/journal/ef0e4a91ee7a4230b66c4cf191547b2a is 14.705ms for 1137 entries. Nov 1 01:01:42.408339 systemd-journald[1163]: System Journal (/var/log/journal/ef0e4a91ee7a4230b66c4cf191547b2a) is 8.0M, max 2.6G, 2.6G free. Nov 1 01:01:43.942957 systemd-journald[1163]: Received client request to flush runtime journal. Nov 1 01:01:42.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:43.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:43.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:42.686452 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 01:01:43.944165 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 01:01:42.690410 systemd[1]: Starting systemd-udev-settle.service... Nov 1 01:01:42.843780 systemd[1]: Finished systemd-modules-load.service. Nov 1 01:01:42.849466 systemd[1]: Starting systemd-sysctl.service... Nov 1 01:01:43.042562 systemd[1]: Finished systemd-random-seed.service. Nov 1 01:01:43.045249 systemd[1]: Reached target first-boot-complete.target. Nov 1 01:01:43.294910 systemd[1]: Finished systemd-sysctl.service. Nov 1 01:01:43.944356 systemd[1]: Finished systemd-journal-flush.service. Nov 1 01:01:43.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:44.997649 systemd[1]: Finished systemd-sysusers.service. Nov 1 01:01:45.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:46.451880 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 01:01:46.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:46.454000 audit: BPF prog-id=21 op=LOAD Nov 1 01:01:46.454000 audit: BPF prog-id=22 op=LOAD Nov 1 01:01:46.454000 audit: BPF prog-id=7 op=UNLOAD Nov 1 01:01:46.454000 audit: BPF prog-id=8 op=UNLOAD Nov 1 01:01:46.455726 systemd[1]: Starting systemd-udevd.service... Nov 1 01:01:46.473674 systemd-udevd[1189]: Using default interface naming scheme 'v252'. Nov 1 01:01:46.987184 systemd[1]: Started systemd-udevd.service. Nov 1 01:01:47.009758 kernel: kauditd_printk_skb: 47 callbacks suppressed Nov 1 01:01:47.009885 kernel: audit: type=1130 audit(1761958906.989:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:46.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:46.991836 systemd[1]: Starting systemd-networkd.service... Nov 1 01:01:47.019653 kernel: audit: type=1334 audit(1761958906.990:147): prog-id=23 op=LOAD Nov 1 01:01:46.990000 audit: BPF prog-id=23 op=LOAD Nov 1 01:01:47.046432 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Nov 1 01:01:47.280244 kernel: audit: type=1334 audit(1761958907.270:148): prog-id=24 op=LOAD Nov 1 01:01:47.280373 kernel: audit: type=1334 audit(1761958907.275:149): prog-id=25 op=LOAD Nov 1 01:01:47.270000 audit: BPF prog-id=24 op=LOAD Nov 1 01:01:47.275000 audit: BPF prog-id=25 op=LOAD Nov 1 01:01:47.280000 audit: BPF prog-id=26 op=LOAD Nov 1 01:01:47.281107 systemd[1]: Starting systemd-userdbd.service... Nov 1 01:01:47.284668 kernel: audit: type=1334 audit(1761958907.280:150): prog-id=26 op=LOAD Nov 1 01:01:47.468490 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 01:01:47.468608 kernel: audit: type=1400 audit(1761958907.449:151): avc: denied { confidentiality } for pid=1190 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 01:01:47.468638 kernel: hv_vmbus: registering driver hv_balloon Nov 1 01:01:47.449000 audit[1190]: AVC avc: denied { confidentiality } for pid=1190 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 01:01:47.477115 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 1 01:01:47.449000 audit[1190]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a15aabd360 a1=f83c a2=7f8717656bc5 a3=5 items=12 ppid=1189 pid=1190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:47.498249 kernel: audit: type=1300 audit(1761958907.449:151): arch=c000003e syscall=175 success=yes exit=0 a0=55a15aabd360 a1=f83c a2=7f8717656bc5 a3=5 items=12 ppid=1189 pid=1190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:47.449000 audit: CWD cwd="/" Nov 1 01:01:47.517398 kernel: audit: type=1307 audit(1761958907.449:151): cwd="/" Nov 1 01:01:47.517520 kernel: audit: type=1302 audit(1761958907.449:151): item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=1 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=2 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.531162 kernel: audit: type=1302 audit(1761958907.449:151): item=1 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.531200 kernel: hv_vmbus: registering driver hyperv_fb Nov 1 01:01:47.449000 audit: PATH item=3 name=(null) inode=15506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 
1 01:01:47.449000 audit: PATH item=4 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=5 name=(null) inode=15507 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=6 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=7 name=(null) inode=15508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=8 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=9 name=(null) inode=15509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=10 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PATH item=11 name=(null) inode=15510 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:47.449000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 01:01:47.535222 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 1 01:01:47.545368 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 1 01:01:47.545161 systemd[1]: Started systemd-userdbd.service. Nov 1 01:01:47.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:47.550935 kernel: Console: switching to colour dummy device 80x25 Nov 1 01:01:47.560839 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 01:01:47.582587 kernel: hv_utils: Registering HyperV Utility Driver Nov 1 01:01:47.582677 kernel: hv_vmbus: registering driver hv_utils Nov 1 01:01:47.593899 kernel: hv_utils: Shutdown IC version 3.2 Nov 1 01:01:47.593981 kernel: hv_utils: Heartbeat IC version 3.0 Nov 1 01:01:47.594021 kernel: hv_utils: TimeSync IC version 4.0 Nov 1 01:01:48.417275 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Nov 1 01:01:48.435351 systemd-networkd[1195]: lo: Link UP Nov 1 01:01:48.435364 systemd-networkd[1195]: lo: Gained carrier Nov 1 01:01:48.436005 systemd-networkd[1195]: Enumeration completed Nov 1 01:01:48.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:48.436117 systemd[1]: Started systemd-networkd.service. Nov 1 01:01:48.441704 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 01:01:48.445624 systemd[1]: Starting systemd-networkd-wait-online.service... 
Nov 1 01:01:48.449761 systemd-networkd[1195]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:01:48.504282 kernel: mlx5_core 2483:00:02.0 enP9347s1: Link up Nov 1 01:01:48.504591 kernel: buffer_size[0]=0 is not enough for lossless buffer Nov 1 01:01:48.530083 systemd-networkd[1195]: enP9347s1: Link UP Nov 1 01:01:48.530347 kernel: hv_netvsc 7c1e522e-80e2-7c1e-522e-80e27c1e522e eth0: Data path switched to VF: enP9347s1 Nov 1 01:01:48.530274 systemd-networkd[1195]: eth0: Link UP Nov 1 01:01:48.530296 systemd-networkd[1195]: eth0: Gained carrier Nov 1 01:01:48.536578 systemd-networkd[1195]: enP9347s1: Gained carrier Nov 1 01:01:48.544608 systemd[1]: Finished systemd-udev-settle.service. Nov 1 01:01:48.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:48.548676 systemd[1]: Starting lvm2-activation-early.service... Nov 1 01:01:48.561387 systemd-networkd[1195]: eth0: DHCPv4 address 10.200.4.9/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 1 01:01:48.883745 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:01:48.925403 systemd[1]: Finished lvm2-activation-early.service. Nov 1 01:01:48.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:48.928134 systemd[1]: Reached target cryptsetup.target. Nov 1 01:01:48.931719 systemd[1]: Starting lvm2-activation.service... Nov 1 01:01:48.938103 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:01:48.960372 systemd[1]: Finished lvm2-activation.service. Nov 1 01:01:48.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:48.963186 systemd[1]: Reached target local-fs-pre.target. Nov 1 01:01:48.965496 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 01:01:48.965533 systemd[1]: Reached target local-fs.target. Nov 1 01:01:48.967773 systemd[1]: Reached target machines.target. Nov 1 01:01:48.971285 systemd[1]: Starting ldconfig.service... Nov 1 01:01:48.973692 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:01:48.973794 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:01:48.975069 systemd[1]: Starting systemd-boot-update.service... Nov 1 01:01:48.978233 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 01:01:48.982035 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 01:01:48.985517 systemd[1]: Starting systemd-sysext.service... Nov 1 01:01:49.032721 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1270 (bootctl) Nov 1 01:01:49.034164 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 01:01:49.539266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Nov 1 01:01:49.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:49.572903 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 01:01:49.593734 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 01:01:49.593969 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 01:01:49.622268 kernel: loop0: detected capacity change from 0 to 219144 Nov 1 01:01:49.650407 systemd-networkd[1195]: eth0: Gained IPv6LL Nov 1 01:01:49.656098 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 01:01:49.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:49.666265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 01:01:49.690336 kernel: loop1: detected capacity change from 0 to 219144 Nov 1 01:01:49.704359 (sd-sysext)[1283]: Using extensions 'kubernetes'. Nov 1 01:01:49.704848 (sd-sysext)[1283]: Merged extensions into '/usr'. Nov 1 01:01:49.721229 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:49.723051 systemd[1]: Mounting usr-share-oem.mount... Nov 1 01:01:49.724386 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:01:49.728585 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:01:49.731859 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:01:49.734992 systemd[1]: Starting modprobe@loop.service... Nov 1 01:01:49.736959 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:01:49.737131 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:01:49.737312 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:49.740051 systemd[1]: Mounted usr-share-oem.mount. Nov 1 01:01:49.742436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:01:49.742600 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:01:49.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:49.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:49.746047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:01:49.746202 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:01:49.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:49.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:49.748982 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:01:49.749140 systemd[1]: Finished modprobe@loop.service. Nov 1 01:01:49.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:49.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:49.751883 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:01:49.752031 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 01:01:49.753263 systemd[1]: Finished systemd-sysext.service. Nov 1 01:01:49.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:49.756843 systemd[1]: Starting ensure-sysext.service... Nov 1 01:01:49.760459 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 01:01:49.768322 systemd[1]: Reloading. Nov 1 01:01:49.834318 /usr/lib/systemd/system-generators/torcx-generator[1310]: time="2025-11-01T01:01:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:01:49.834356 /usr/lib/systemd/system-generators/torcx-generator[1310]: time="2025-11-01T01:01:49Z" level=info msg="torcx already run" Nov 1 01:01:49.933847 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:01:49.933870 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:01:49.936769 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 01:01:49.950684 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 01:01:50.033000 audit: BPF prog-id=27 op=LOAD Nov 1 01:01:50.033000 audit: BPF prog-id=23 op=UNLOAD Nov 1 01:01:50.034000 audit: BPF prog-id=28 op=LOAD Nov 1 01:01:50.034000 audit: BPF prog-id=24 op=UNLOAD Nov 1 01:01:50.034000 audit: BPF prog-id=29 op=LOAD Nov 1 01:01:50.034000 audit: BPF prog-id=30 op=LOAD Nov 1 01:01:50.034000 audit: BPF prog-id=25 op=UNLOAD Nov 1 01:01:50.034000 audit: BPF prog-id=26 op=UNLOAD Nov 1 01:01:50.035000 audit: BPF prog-id=31 op=LOAD Nov 1 01:01:50.035000 audit: BPF prog-id=18 op=UNLOAD Nov 1 01:01:50.036000 audit: BPF prog-id=32 op=LOAD Nov 1 01:01:50.036000 audit: BPF prog-id=33 op=LOAD Nov 1 01:01:50.036000 audit: BPF prog-id=19 op=UNLOAD Nov 1 01:01:50.036000 audit: BPF prog-id=20 op=UNLOAD Nov 1 01:01:50.036000 audit: BPF prog-id=34 op=LOAD Nov 1 01:01:50.036000 audit: BPF prog-id=35 op=LOAD Nov 1 01:01:50.036000 audit: BPF prog-id=21 op=UNLOAD Nov 1 01:01:50.036000 audit: BPF prog-id=22 op=UNLOAD Nov 1 01:01:50.050726 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:50.051063 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:01:50.052414 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:01:50.055130 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:01:50.057722 systemd[1]: Starting modprobe@loop.service... Nov 1 01:01:50.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.062922 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 01:01:50.063141 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:01:50.063468 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:01:50.063748 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:50.065669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:01:50.065860 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:01:50.069737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:01:50.069900 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:01:50.074341 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:01:50.074488 systemd[1]: Finished modprobe@loop.service. 
Nov 1 01:01:50.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.076968 systemd[1]: Finished ensure-sysext.service. Nov 1 01:01:50.079425 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:50.079682 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 01:01:50.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.080753 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 01:01:50.083070 systemd[1]: Starting modprobe@drm.service... Nov 1 01:01:50.085780 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 01:01:50.086909 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 01:01:50.086991 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:01:50.087124 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:50.088205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:01:50.088419 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 01:01:50.089916 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 01:01:50.091807 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:01:50.092069 systemd[1]: Finished modprobe@drm.service. Nov 1 01:01:50.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.095118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:01:50.095416 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 01:01:50.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:50.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.096585 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:01:50.150865 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 01:01:50.151528 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 01:01:50.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.169982 systemd-fsck[1277]: fsck.fat 4.2 (2021-01-31) Nov 1 01:01:50.169982 systemd-fsck[1277]: /dev/sda1: 790 files, 120773/258078 clusters Nov 1 01:01:50.172546 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 01:01:50.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.177130 systemd[1]: Mounting boot.mount... Nov 1 01:01:50.185075 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 01:01:50.188387 systemd[1]: Mounted boot.mount. Nov 1 01:01:50.208977 systemd[1]: Finished systemd-boot-update.service. Nov 1 01:01:50.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.952606 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 01:01:50.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:50.956405 systemd[1]: Starting audit-rules.service... Nov 1 01:01:50.959782 systemd[1]: Starting clean-ca-certificates.service... Nov 1 01:01:50.963576 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 01:01:50.966000 audit: BPF prog-id=36 op=LOAD Nov 1 01:01:50.968532 systemd[1]: Starting systemd-resolved.service... Nov 1 01:01:50.970000 audit: BPF prog-id=37 op=LOAD Nov 1 01:01:50.972883 systemd[1]: Starting systemd-timesyncd.service... Nov 1 01:01:50.979222 systemd[1]: Starting systemd-update-utmp.service... Nov 1 01:01:51.030000 audit[1390]: SYSTEM_BOOT pid=1390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 01:01:51.039177 systemd[1]: Finished systemd-update-utmp.service. Nov 1 01:01:51.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:51.054983 systemd[1]: Started systemd-timesyncd.service. Nov 1 01:01:51.058825 systemd[1]: Reached target time-set.target. 
Nov 1 01:01:51.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:51.107802 systemd[1]: Finished clean-ca-certificates.service. Nov 1 01:01:51.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:51.111268 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:01:51.126828 systemd-resolved[1387]: Positive Trust Anchors: Nov 1 01:01:51.126841 systemd-resolved[1387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:01:51.126891 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 01:01:51.191262 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 01:01:51.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:51.208254 systemd-resolved[1387]: Using system hostname 'ci-3510.3.8-n-16445aab1e'. Nov 1 01:01:51.209861 systemd[1]: Started systemd-resolved.service. Nov 1 01:01:51.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:51.212187 systemd[1]: Reached target network.target. Nov 1 01:01:51.214399 systemd[1]: Reached target network-online.target. Nov 1 01:01:51.217052 systemd[1]: Reached target nss-lookup.target. Nov 1 01:01:51.233907 systemd-timesyncd[1388]: Contacted time server 176.58.127.131:123 (0.flatcar.pool.ntp.org). Nov 1 01:01:51.233979 systemd-timesyncd[1388]: Initial clock synchronization to Sat 2025-11-01 01:01:51.234490 UTC. Nov 1 01:01:51.326000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 01:01:51.326000 audit[1405]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe19d6b5b0 a2=420 a3=0 items=0 ppid=1384 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:51.326000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 01:01:51.328586 augenrules[1405]: No rules Nov 1 01:01:51.329572 systemd[1]: Finished audit-rules.service. Nov 1 01:01:56.746737 ldconfig[1269]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 01:01:56.758712 systemd[1]: Finished ldconfig.service. 
Nov 1 01:01:56.763166 systemd[1]: Starting systemd-update-done.service... Nov 1 01:01:56.770319 systemd[1]: Finished systemd-update-done.service. Nov 1 01:01:56.772800 systemd[1]: Reached target sysinit.target. Nov 1 01:01:56.775063 systemd[1]: Started motdgen.path. Nov 1 01:01:56.777017 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 01:01:56.779873 systemd[1]: Started logrotate.timer. Nov 1 01:01:56.781773 systemd[1]: Started mdadm.timer. Nov 1 01:01:56.783430 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 01:01:56.785562 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 01:01:56.785597 systemd[1]: Reached target paths.target. Nov 1 01:01:56.787428 systemd[1]: Reached target timers.target. Nov 1 01:01:56.789628 systemd[1]: Listening on dbus.socket. Nov 1 01:01:56.792556 systemd[1]: Starting docker.socket... Nov 1 01:01:56.797177 systemd[1]: Listening on sshd.socket. Nov 1 01:01:56.799502 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:01:56.799947 systemd[1]: Listening on docker.socket. Nov 1 01:01:56.802043 systemd[1]: Reached target sockets.target. Nov 1 01:01:56.804158 systemd[1]: Reached target basic.target. Nov 1 01:01:56.806123 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 01:01:56.806157 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 01:01:56.807128 systemd[1]: Starting containerd.service... Nov 1 01:01:56.811392 systemd[1]: Starting dbus.service... Nov 1 01:01:56.814025 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 01:01:56.817354 systemd[1]: Starting extend-filesystems.service... Nov 1 01:01:56.819785 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 01:01:56.833811 systemd[1]: Starting kubelet.service... Nov 1 01:01:56.837221 systemd[1]: Starting motdgen.service... Nov 1 01:01:56.840440 systemd[1]: Started nvidia.service. Nov 1 01:01:56.843664 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 01:01:56.847314 systemd[1]: Starting sshd-keygen.service... Nov 1 01:01:56.852931 systemd[1]: Starting systemd-logind.service... Nov 1 01:01:56.854867 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 01:01:56.854980 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 01:01:56.856100 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 01:01:56.857227 systemd[1]: Starting update-engine.service... Nov 1 01:01:56.861707 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 01:01:56.882342 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 01:01:56.882560 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 01:01:56.894083 jq[1415]: false Nov 1 01:01:56.894385 jq[1428]: true Nov 1 01:01:56.894480 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Nov 1 01:01:56.894719 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 01:01:56.913731 extend-filesystems[1416]: Found loop1 Nov 1 01:01:56.914619 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 01:01:56.924339 jq[1435]: true Nov 1 01:01:56.914787 systemd[1]: Finished motdgen.service. Nov 1 01:01:56.935403 extend-filesystems[1416]: Found sda Nov 1 01:01:56.943398 extend-filesystems[1416]: Found sda1 Nov 1 01:01:56.945949 extend-filesystems[1416]: Found sda2 Nov 1 01:01:56.949836 extend-filesystems[1416]: Found sda3 Nov 1 01:01:56.949836 extend-filesystems[1416]: Found usr Nov 1 01:01:56.949836 extend-filesystems[1416]: Found sda4 Nov 1 01:01:56.949836 extend-filesystems[1416]: Found sda6 Nov 1 01:01:56.949836 extend-filesystems[1416]: Found sda7 Nov 1 01:01:56.949836 extend-filesystems[1416]: Found sda9 Nov 1 01:01:56.949836 extend-filesystems[1416]: Checking size of /dev/sda9 Nov 1 01:01:56.987260 env[1433]: time="2025-11-01T01:01:56.987198272Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 01:01:57.006037 env[1433]: time="2025-11-01T01:01:57.005944492Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 01:01:57.006268 env[1433]: time="2025-11-01T01:01:57.006231201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:57.009002 env[1433]: time="2025-11-01T01:01:57.008960787Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:01:57.009002 env[1433]: time="2025-11-01T01:01:57.008991588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:57.009864 env[1433]: time="2025-11-01T01:01:57.009830315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:01:57.009864 env[1433]: time="2025-11-01T01:01:57.009854715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:57.009986 env[1433]: time="2025-11-01T01:01:57.009872316Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 01:01:57.009986 env[1433]: time="2025-11-01T01:01:57.009890717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:57.010807 env[1433]: time="2025-11-01T01:01:57.010777645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:57.011923 env[1433]: time="2025-11-01T01:01:57.011901180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:57.012190 env[1433]: time="2025-11-01T01:01:57.012161288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:01:57.012322 env[1433]: time="2025-11-01T01:01:57.012303793Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 01:01:57.012462 env[1433]: time="2025-11-01T01:01:57.012442997Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 01:01:57.012548 env[1433]: time="2025-11-01T01:01:57.012532700Z" level=info msg="metadata content store policy set" policy=shared Nov 1 01:01:57.048832 extend-filesystems[1416]: Old size kept for /dev/sda9 Nov 1 01:01:57.056007 extend-filesystems[1416]: Found sr0 Nov 1 01:01:57.051966 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 01:01:57.052316 systemd[1]: Finished extend-filesystems.service. Nov 1 01:01:57.092871 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 01:01:57.098033 systemd-logind[1426]: New seat seat0. Nov 1 01:01:57.098229 env[1433]: time="2025-11-01T01:01:57.098147804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 01:01:57.098323 env[1433]: time="2025-11-01T01:01:57.098266308Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 01:01:57.098323 env[1433]: time="2025-11-01T01:01:57.098291109Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 01:01:57.098403 env[1433]: time="2025-11-01T01:01:57.098349311Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098374311Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098527516Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098545817Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098565517Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098597618Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098617819Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098636320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098669421Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098830326Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.098938829Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.099307641Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.099357542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.099376543Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 01:01:57.100474 env[1433]: time="2025-11-01T01:01:57.099441345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099458246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099474646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099500747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099516747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099533448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099548948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099574949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099594150Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099766255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099809057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099846758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099862858Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099895759Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 01:01:57.103013 env[1433]: time="2025-11-01T01:01:57.099911960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 01:01:57.102824 systemd[1]: Started containerd.service. 
Nov 1 01:01:57.103638 env[1433]: time="2025-11-01T01:01:57.099936161Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 01:01:57.103638 env[1433]: time="2025-11-01T01:01:57.099987562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.100292372Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.100376975Z" level=info msg="Connect containerd service" Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.100434376Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.101292103Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.101594513Z" level=info msg="Start subscribing containerd event" Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.101649515Z" level=info msg="Start recovering state" Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.101620514Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.101805220Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 1 01:01:57.103710 env[1433]: time="2025-11-01T01:01:57.101862421Z" level=info msg="containerd successfully booted in 0.116666s" Nov 1 01:01:57.138143 env[1433]: time="2025-11-01T01:01:57.107312794Z" level=info msg="Start event monitor" Nov 1 01:01:57.138143 env[1433]: time="2025-11-01T01:01:57.107352795Z" level=info msg="Start snapshots syncer" Nov 1 01:01:57.138143 env[1433]: time="2025-11-01T01:01:57.107368295Z" level=info msg="Start cni network conf syncer for default" Nov 1 01:01:57.138143 env[1433]: time="2025-11-01T01:01:57.107379296Z" level=info msg="Start streaming server" Nov 1 01:01:57.130707 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 01:01:57.139630 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:01:57.152609 dbus-daemon[1414]: [system] SELinux support is enabled Nov 1 01:01:57.152808 systemd[1]: Started dbus.service. Nov 1 01:01:57.159553 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 01:01:57.160817 systemd[1]: Reached target system-config.target. Nov 1 01:01:57.163520 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 01:01:57.163543 systemd[1]: Reached target user-config.target. Nov 1 01:01:57.175326 systemd[1]: Started systemd-logind.service. Nov 1 01:01:57.178341 dbus-daemon[1414]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 01:01:57.199955 systemd[1]: nvidia.service: Deactivated successfully. Nov 1 01:01:57.968862 update_engine[1427]: I1101 01:01:57.967864 1427 main.cc:92] Flatcar Update Engine starting Nov 1 01:01:58.039689 systemd[1]: Started update-engine.service. Nov 1 01:01:58.042338 update_engine[1427]: I1101 01:01:58.041136 1427 update_check_scheduler.cc:74] Next update check in 10m8s Nov 1 01:01:58.045585 systemd[1]: Started locksmithd.service. Nov 1 01:01:58.277852 systemd[1]: Started kubelet.service. Nov 1 01:01:58.761105 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 01:01:58.781548 systemd[1]: Finished sshd-keygen.service. Nov 1 01:01:58.785862 systemd[1]: Starting issuegen.service... Nov 1 01:01:58.789470 systemd[1]: Started waagent.service. Nov 1 01:01:58.794273 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 01:01:58.794485 systemd[1]: Finished issuegen.service. Nov 1 01:01:58.799047 systemd[1]: Starting systemd-user-sessions.service... Nov 1 01:01:58.827654 systemd[1]: Finished systemd-user-sessions.service. Nov 1 01:01:58.831834 systemd[1]: Started getty@tty1.service. Nov 1 01:01:58.835888 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 01:01:58.838657 systemd[1]: Reached target getty.target. Nov 1 01:01:58.841366 systemd[1]: Reached target multi-user.target. Nov 1 01:01:58.846016 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 01:01:58.857004 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 01:01:58.857197 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 01:01:58.860094 systemd[1]: Startup finished in 803ms (firmware) + 15.457s (loader) + 987ms (kernel) + 13.986s (initrd) + 33.679s (userspace) = 1min 4.914s. 
Nov 1 01:01:58.975739 kubelet[1525]: E1101 01:01:58.975684 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:01:58.977436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:01:58.977604 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:01:58.977895 systemd[1]: kubelet.service: Consumed 1.095s CPU time. Nov 1 01:01:59.275878 login[1549]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Nov 1 01:01:59.276726 login[1548]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:01:59.300018 systemd[1]: Created slice user-500.slice. Nov 1 01:01:59.301981 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 01:01:59.304549 systemd-logind[1426]: New session 2 of user core. Nov 1 01:01:59.313229 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 01:01:59.315742 systemd[1]: Starting user@500.service... Nov 1 01:01:59.332647 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:01:59.540381 systemd[1552]: Queued start job for default target default.target. Nov 1 01:01:59.540999 systemd[1552]: Reached target paths.target. Nov 1 01:01:59.541027 systemd[1552]: Reached target sockets.target. Nov 1 01:01:59.541044 systemd[1552]: Reached target timers.target. Nov 1 01:01:59.541060 systemd[1552]: Reached target basic.target. Nov 1 01:01:59.541184 systemd[1]: Started user@500.service. Nov 1 01:01:59.542452 systemd[1]: Started session-2.scope. Nov 1 01:01:59.542986 systemd[1552]: Reached target default.target. Nov 1 01:01:59.543179 systemd[1552]: Startup finished in 203ms. Nov 1 01:01:59.645331 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 01:02:00.278124 login[1549]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:02:00.283543 systemd-logind[1426]: New session 1 of user core. Nov 1 01:02:00.284104 systemd[1]: Started session-1.scope. Nov 1 01:02:04.682060 waagent[1543]: 2025-11-01T01:02:04.681949Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Nov 1 01:02:04.686233 waagent[1543]: 2025-11-01T01:02:04.686155Z INFO Daemon Daemon OS: flatcar 3510.3.8 Nov 1 01:02:04.688847 waagent[1543]: 2025-11-01T01:02:04.688780Z INFO Daemon Daemon Python: 3.9.16 Nov 1 01:02:04.691458 waagent[1543]: 2025-11-01T01:02:04.691384Z INFO Daemon Daemon Run daemon Nov 1 01:02:04.695812 waagent[1543]: 2025-11-01T01:02:04.695739Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Nov 1 01:02:04.709484 waagent[1543]: 2025-11-01T01:02:04.709359Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Nov 1 01:02:04.717943 waagent[1543]: 2025-11-01T01:02:04.717826Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 1 01:02:04.723127 waagent[1543]: 2025-11-01T01:02:04.723058Z INFO Daemon Daemon cloud-init is enabled: False Nov 1 01:02:04.725895 waagent[1543]: 2025-11-01T01:02:04.725832Z INFO Daemon Daemon Using waagent for provisioning Nov 1 01:02:04.729167 waagent[1543]: 2025-11-01T01:02:04.729101Z INFO Daemon Daemon Activate resource disk Nov 1 01:02:04.731771 waagent[1543]: 2025-11-01T01:02:04.731711Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 1 01:02:04.742375 waagent[1543]: 2025-11-01T01:02:04.742307Z INFO Daemon Daemon Found device: None Nov 1 01:02:04.745013 waagent[1543]: 2025-11-01T01:02:04.744945Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 1 01:02:04.749352 waagent[1543]: 2025-11-01T01:02:04.749292Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 1 01:02:04.757648 waagent[1543]: 2025-11-01T01:02:04.757579Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 1 01:02:04.760985 waagent[1543]: 2025-11-01T01:02:04.760920Z INFO Daemon Daemon Running default provisioning handler Nov 1 01:02:04.772093 waagent[1543]: 2025-11-01T01:02:04.771954Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Nov 1 01:02:04.780553 waagent[1543]: 2025-11-01T01:02:04.780437Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 1 01:02:04.785583 waagent[1543]: 2025-11-01T01:02:04.785511Z INFO Daemon Daemon cloud-init is enabled: False Nov 1 01:02:04.788484 waagent[1543]: 2025-11-01T01:02:04.788418Z INFO Daemon Daemon Copying ovf-env.xml Nov 1 01:02:04.858266 waagent[1543]: 2025-11-01T01:02:04.854729Z INFO Daemon Daemon Successfully mounted dvd Nov 1 01:02:04.913933 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 1 01:02:04.950382 waagent[1543]: 2025-11-01T01:02:04.950124Z INFO Daemon Daemon Detect protocol endpoint Nov 1 01:02:04.965756 waagent[1543]: 2025-11-01T01:02:04.950830Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 1 01:02:04.965756 waagent[1543]: 2025-11-01T01:02:04.951967Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Nov 1 01:02:04.965756 waagent[1543]: 2025-11-01T01:02:04.952765Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 1 01:02:04.965756 waagent[1543]: 2025-11-01T01:02:04.954421Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 1 01:02:04.965756 waagent[1543]: 2025-11-01T01:02:04.955276Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 1 01:02:05.181391 waagent[1543]: 2025-11-01T01:02:05.181310Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 1 01:02:05.185357 waagent[1543]: 2025-11-01T01:02:05.185307Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 1 01:02:05.188223 waagent[1543]: 2025-11-01T01:02:05.188153Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 1 01:02:05.681002 waagent[1543]: 2025-11-01T01:02:05.680849Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 1 01:02:05.689353 waagent[1543]: 2025-11-01T01:02:05.689282Z INFO Daemon Daemon Forcing an update of the goal state.. Nov 1 01:02:05.694667 waagent[1543]: 2025-11-01T01:02:05.689702Z INFO Daemon Daemon Fetching goal state [incarnation 1] Nov 1 01:02:05.822916 waagent[1543]: 2025-11-01T01:02:05.822775Z INFO Daemon Daemon Found private key matching thumbprint D3D9ABDB811390AFA7F91A01D73944C643D2AD51 Nov 1 01:02:05.827000 waagent[1543]: 2025-11-01T01:02:05.826909Z INFO Daemon Daemon Fetch goal state completed Nov 1 01:02:05.870573 waagent[1543]: 2025-11-01T01:02:05.870481Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 0df6455f-80d1-4777-b984-474514c73f5f New eTag: 4387675266261898053] Nov 1 01:02:05.876487 waagent[1543]: 2025-11-01T01:02:05.876404Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Nov 1 01:02:05.917086 waagent[1543]: 2025-11-01T01:02:05.916986Z INFO Daemon Daemon Starting provisioning Nov 1 01:02:05.920074 waagent[1543]: 2025-11-01T01:02:05.919979Z INFO Daemon Daemon Handle ovf-env.xml. Nov 1 01:02:05.922916 waagent[1543]: 2025-11-01T01:02:05.922840Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-16445aab1e] Nov 1 01:02:05.944958 waagent[1543]: 2025-11-01T01:02:05.944800Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-16445aab1e] Nov 1 01:02:05.955462 waagent[1543]: 2025-11-01T01:02:05.945729Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 1 01:02:05.955462 waagent[1543]: 2025-11-01T01:02:05.946810Z INFO Daemon Daemon Primary interface is [eth0] Nov 1 01:02:05.962221 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Nov 1 01:02:05.962505 systemd[1]: Stopped systemd-networkd-wait-online.service. Nov 1 01:02:05.962586 systemd[1]: Stopping systemd-networkd-wait-online.service... Nov 1 01:02:05.962932 systemd[1]: Stopping systemd-networkd.service... Nov 1 01:02:05.968302 systemd-networkd[1195]: eth0: DHCPv6 lease lost Nov 1 01:02:05.969737 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 01:02:05.969943 systemd[1]: Stopped systemd-networkd.service. Nov 1 01:02:05.972548 systemd[1]: Starting systemd-networkd.service... 
Nov 1 01:02:06.005080 systemd-networkd[1595]: enP9347s1: Link UP Nov 1 01:02:06.005091 systemd-networkd[1595]: enP9347s1: Gained carrier Nov 1 01:02:06.006611 systemd-networkd[1595]: eth0: Link UP Nov 1 01:02:06.006621 systemd-networkd[1595]: eth0: Gained carrier Nov 1 01:02:06.007071 systemd-networkd[1595]: lo: Link UP Nov 1 01:02:06.007081 systemd-networkd[1595]: lo: Gained carrier Nov 1 01:02:06.007413 systemd-networkd[1595]: eth0: Gained IPv6LL Nov 1 01:02:06.007698 systemd-networkd[1595]: Enumeration completed Nov 1 01:02:06.015475 waagent[1543]: 2025-11-01T01:02:06.009225Z INFO Daemon Daemon Create user account if not exists Nov 1 01:02:06.007819 systemd[1]: Started systemd-networkd.service. Nov 1 01:02:06.012765 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 01:02:06.017648 waagent[1543]: 2025-11-01T01:02:06.016230Z INFO Daemon Daemon User core already exists, skip useradd Nov 1 01:02:06.017648 waagent[1543]: 2025-11-01T01:02:06.016612Z INFO Daemon Daemon Configure sudoer Nov 1 01:02:06.021005 waagent[1543]: 2025-11-01T01:02:06.018189Z INFO Daemon Daemon Configure sshd Nov 1 01:02:06.021005 waagent[1543]: 2025-11-01T01:02:06.018816Z INFO Daemon Daemon Deploy ssh public key. Nov 1 01:02:06.021644 systemd-networkd[1595]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:02:06.061376 systemd-networkd[1595]: eth0: DHCPv4 address 10.200.4.9/24, gateway 10.200.4.1 acquired from 168.63.129.16 Nov 1 01:02:06.065161 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 01:02:07.166512 waagent[1543]: 2025-11-01T01:02:07.166414Z INFO Daemon Daemon Provisioning complete Nov 1 01:02:07.178414 waagent[1543]: 2025-11-01T01:02:07.178340Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 1 01:02:07.185625 waagent[1543]: 2025-11-01T01:02:07.178867Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 1 01:02:07.185625 waagent[1543]: 2025-11-01T01:02:07.180753Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Nov 1 01:02:07.449383 waagent[1601]: 2025-11-01T01:02:07.449195Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Nov 1 01:02:07.450107 waagent[1601]: 2025-11-01T01:02:07.450038Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 01:02:07.450263 waagent[1601]: 2025-11-01T01:02:07.450199Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 01:02:07.461230 waagent[1601]: 2025-11-01T01:02:07.461149Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Nov 1 01:02:07.461413 waagent[1601]: 2025-11-01T01:02:07.461353Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Nov 1 01:02:07.513155 waagent[1601]: 2025-11-01T01:02:07.513020Z INFO ExtHandler ExtHandler Found private key matching thumbprint D3D9ABDB811390AFA7F91A01D73944C643D2AD51 Nov 1 01:02:07.513484 waagent[1601]: 2025-11-01T01:02:07.513421Z INFO ExtHandler ExtHandler Fetch goal state completed Nov 1 01:02:07.526869 waagent[1601]: 2025-11-01T01:02:07.526803Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 76f9b6ca-2ae3-4763-8ceb-d7e9100f9fe5 New eTag: 4387675266261898053] Nov 1 01:02:07.527409 waagent[1601]: 2025-11-01T01:02:07.527349Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Nov 1 01:02:07.599117 waagent[1601]: 2025-11-01T01:02:07.598945Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 1 01:02:07.609411 waagent[1601]: 2025-11-01T01:02:07.609316Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1601 Nov 1 01:02:07.612798 waagent[1601]: 2025-11-01T01:02:07.612726Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Nov 1 01:02:07.613961 waagent[1601]: 2025-11-01T01:02:07.613901Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 1 01:02:07.709161 waagent[1601]: 2025-11-01T01:02:07.709095Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 1 01:02:07.709575 waagent[1601]: 2025-11-01T01:02:07.709511Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 1 01:02:07.718055 waagent[1601]: 2025-11-01T01:02:07.717997Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 1 01:02:07.718556 waagent[1601]: 2025-11-01T01:02:07.718494Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Nov 1 01:02:07.719618 waagent[1601]: 2025-11-01T01:02:07.719556Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Nov 1 01:02:07.720923 waagent[1601]: 2025-11-01T01:02:07.720862Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 1 01:02:07.721327 waagent[1601]: 2025-11-01T01:02:07.721267Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 01:02:07.721697 waagent[1601]: 2025-11-01T01:02:07.721645Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 1 01:02:07.722030 waagent[1601]: 2025-11-01T01:02:07.721977Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 01:02:07.722565 waagent[1601]: 2025-11-01T01:02:07.722508Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 01:02:07.723076 waagent[1601]: 2025-11-01T01:02:07.723020Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Nov 1 01:02:07.723574 waagent[1601]: 2025-11-01T01:02:07.723510Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 1 01:02:07.723854 waagent[1601]: 2025-11-01T01:02:07.723794Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 1 01:02:07.723854 waagent[1601]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 1 01:02:07.723854 waagent[1601]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Nov 1 01:02:07.723854 waagent[1601]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 1 01:02:07.723854 waagent[1601]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 1 01:02:07.723854 waagent[1601]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 01:02:07.723854 waagent[1601]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 01:02:07.724348 waagent[1601]: 2025-11-01T01:02:07.724292Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 1 01:02:07.724717 waagent[1601]: 2025-11-01T01:02:07.724665Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 01:02:07.727540 waagent[1601]: 2025-11-01T01:02:07.727258Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 1 01:02:07.728182 waagent[1601]: 2025-11-01T01:02:07.728102Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 1 01:02:07.729602 waagent[1601]: 2025-11-01T01:02:07.729542Z INFO EnvHandler ExtHandler Configure routes Nov 1 01:02:07.729774 waagent[1601]: 2025-11-01T01:02:07.729720Z INFO EnvHandler ExtHandler Gateway:None Nov 1 01:02:07.729914 waagent[1601]: 2025-11-01T01:02:07.729868Z INFO EnvHandler ExtHandler Routes:None Nov 1 01:02:07.732688 waagent[1601]: 2025-11-01T01:02:07.732635Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 1 01:02:07.741732 waagent[1601]: 2025-11-01T01:02:07.741680Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Nov 1 01:02:07.743601 waagent[1601]: 2025-11-01T01:02:07.743552Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Nov 1 01:02:07.744493 waagent[1601]: 2025-11-01T01:02:07.744445Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Nov 1 01:02:07.784374 waagent[1601]: 2025-11-01T01:02:07.784221Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1595' Nov 1 01:02:07.785095 waagent[1601]: 2025-11-01T01:02:07.785037Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
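The routing table that MonitorHandler prints above is raw /proc/net/route, where Destination, Gateway and Mask are little-endian hex IPv4 values. A short standard-library decoder makes it readable: 0104C80A decodes to 10.200.4.1 (the DHCP gateway acquired earlier), 10813FA8 to 168.63.129.16 (the wire server) and FEA9FEA9 to 169.254.169.254.

import socket
import struct

def hex_to_ip(h: str) -> str:
    """Convert a /proc/net/route little-endian hex field to dotted quad."""
    return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

with open("/proc/net/route") as f:
    next(f)                                   # skip the header row
    for line in f:
        iface, dest, gw, *_mid, mask, _mtu, _win, _irtt = line.split()
        print(f"{iface:6} {hex_to_ip(dest):15} via {hex_to_ip(gw):15} mask {hex_to_ip(mask)}")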
Nov 1 01:02:07.870027 waagent[1601]: 2025-11-01T01:02:07.869893Z INFO MonitorHandler ExtHandler Network interfaces: Nov 1 01:02:07.870027 waagent[1601]: Executing ['ip', '-a', '-o', 'link']: Nov 1 01:02:07.870027 waagent[1601]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 1 01:02:07.870027 waagent[1601]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:80:e2 brd ff:ff:ff:ff:ff:ff Nov 1 01:02:07.870027 waagent[1601]: 3: enP9347s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:80:e2 brd ff:ff:ff:ff:ff:ff\ altname enP9347p0s2 Nov 1 01:02:07.870027 waagent[1601]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 1 01:02:07.870027 waagent[1601]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 1 01:02:07.870027 waagent[1601]: 2: eth0 inet 10.200.4.9/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 1 01:02:07.870027 waagent[1601]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 1 01:02:07.870027 waagent[1601]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Nov 1 01:02:07.870027 waagent[1601]: 2: eth0 inet6 fe80::7e1e:52ff:fe2e:80e2/64 scope link \ valid_lft forever preferred_lft forever Nov 1 01:02:08.129311 waagent[1601]: 2025-11-01T01:02:08.129185Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.15.0.1 -- exiting Nov 1 01:02:08.184522 waagent[1543]: 2025-11-01T01:02:08.184401Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Nov 1 01:02:08.190340 waagent[1543]: 2025-11-01T01:02:08.190277Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.15.0.1 to be the latest agent Nov 1 01:02:09.204029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 01:02:09.204329 systemd[1]: Stopped kubelet.service. Nov 1 01:02:09.204388 systemd[1]: kubelet.service: Consumed 1.095s CPU time. Nov 1 01:02:09.206334 systemd[1]: Starting kubelet.service... Nov 1 01:02:09.335537 waagent[1627]: 2025-11-01T01:02:09.335313Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.15.0.1) Nov 1 01:02:09.336304 waagent[1627]: 2025-11-01T01:02:09.336200Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Nov 1 01:02:09.336487 waagent[1627]: 2025-11-01T01:02:09.336431Z INFO ExtHandler ExtHandler Python: 3.9.16 Nov 1 01:02:09.336663 waagent[1627]: 2025-11-01T01:02:09.336610Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Nov 1 01:02:09.351692 systemd[1]: Started kubelet.service. Nov 1 01:02:09.360222 waagent[1627]: 2025-11-01T01:02:09.360078Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 1 01:02:09.361057 waagent[1627]: 2025-11-01T01:02:09.360976Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 01:02:09.361460 waagent[1627]: 2025-11-01T01:02:09.361397Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 01:02:09.361868 waagent[1627]: 2025-11-01T01:02:09.361808Z INFO ExtHandler ExtHandler Initializing the goal state... 
Nov 1 01:02:09.378876 waagent[1627]: 2025-11-01T01:02:09.378780Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 1 01:02:09.388892 waagent[1627]: 2025-11-01T01:02:09.388830Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Nov 1 01:02:09.390075 waagent[1627]: 2025-11-01T01:02:09.390021Z INFO ExtHandler Nov 1 01:02:09.390346 waagent[1627]: 2025-11-01T01:02:09.390300Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 29db8eeb-209c-48bc-9544-0ccec95eac7e eTag: 4387675266261898053 source: Fabric] Nov 1 01:02:09.391116 waagent[1627]: 2025-11-01T01:02:09.391069Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Nov 1 01:02:09.499798 waagent[1627]: 2025-11-01T01:02:09.499582Z INFO ExtHandler Nov 1 01:02:09.500049 waagent[1627]: 2025-11-01T01:02:09.499974Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 1 01:02:09.510260 waagent[1627]: 2025-11-01T01:02:09.510161Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 1 01:02:09.510774 waagent[1627]: 2025-11-01T01:02:09.510719Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Nov 1 01:02:09.531270 waagent[1627]: 2025-11-01T01:02:09.531159Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Nov 1 01:02:10.023500 kubelet[1638]: E1101 01:02:10.023452 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:02:10.026382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:02:10.026552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:02:10.095266 waagent[1627]: 2025-11-01T01:02:10.095118Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D3D9ABDB811390AFA7F91A01D73944C643D2AD51', 'hasPrivateKey': True} Nov 1 01:02:10.096603 waagent[1627]: 2025-11-01T01:02:10.096527Z INFO ExtHandler Fetch goal state from WireServer completed Nov 1 01:02:10.097518 waagent[1627]: 2025-11-01T01:02:10.097448Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Nov 1 01:02:10.131454 waagent[1627]: 2025-11-01T01:02:10.131316Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Nov 1 01:02:10.140802 waagent[1627]: 2025-11-01T01:02:10.140692Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Nov 1 01:02:10.144635 waagent[1627]: 2025-11-01T01:02:10.144527Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Nov 1 01:02:10.144882 waagent[1627]: 2025-11-01T01:02:10.144825Z INFO ExtHandler ExtHandler Checking state of the firewall Nov 1 01:02:10.247454 waagent[1627]: 2025-11-01T01:02:10.247329Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Nov 1 01:02:10.247454 waagent[1627]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 01:02:10.247454 waagent[1627]: pkts bytes target prot opt in out source destination Nov 1 01:02:10.247454 waagent[1627]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 1 01:02:10.247454 waagent[1627]: pkts bytes target prot opt in out source destination Nov 1 01:02:10.247454 waagent[1627]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 01:02:10.247454 waagent[1627]: pkts bytes target prot opt in out source destination Nov 1 01:02:10.247454 waagent[1627]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 1 01:02:10.247454 waagent[1627]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 1 01:02:10.247454 waagent[1627]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 1 01:02:10.248641 waagent[1627]: 2025-11-01T01:02:10.248570Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Nov 1 01:02:10.251417 waagent[1627]: 2025-11-01T01:02:10.251317Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Nov 1 01:02:10.251846 waagent[1627]: 2025-11-01T01:02:10.251786Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up /lib/systemd/system/waagent-network-setup.service Nov 1 01:02:10.252314 waagent[1627]: 2025-11-01T01:02:10.252194Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 1 01:02:10.261001 waagent[1627]: 2025-11-01T01:02:10.260934Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Nov 1 01:02:10.261551 waagent[1627]: 2025-11-01T01:02:10.261490Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Nov 1 01:02:10.269027 waagent[1627]: 2025-11-01T01:02:10.268946Z INFO ExtHandler ExtHandler WALinuxAgent-2.15.0.1 running as process 1627 Nov 1 01:02:10.272177 waagent[1627]: 2025-11-01T01:02:10.272105Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Nov 1 01:02:10.272984 waagent[1627]: 2025-11-01T01:02:10.272922Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Nov 1 01:02:10.273940 waagent[1627]: 2025-11-01T01:02:10.273833Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 1 01:02:10.276516 waagent[1627]: 2025-11-01T01:02:10.276453Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Nov 1 01:02:10.276845 waagent[1627]: 2025-11-01T01:02:10.276792Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 1 01:02:10.278521 waagent[1627]: 2025-11-01T01:02:10.278462Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 1 01:02:10.279145 waagent[1627]: 2025-11-01T01:02:10.279087Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 1 01:02:10.279573 waagent[1627]: 2025-11-01T01:02:10.279518Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 01:02:10.280185 waagent[1627]: 2025-11-01T01:02:10.280132Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 01:02:10.280364 waagent[1627]: 2025-11-01T01:02:10.280314Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 1 01:02:10.280555 waagent[1627]: 2025-11-01T01:02:10.280485Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 01:02:10.281060 waagent[1627]: 2025-11-01T01:02:10.281005Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 01:02:10.281218 waagent[1627]: 2025-11-01T01:02:10.281165Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 1 01:02:10.281999 waagent[1627]: 2025-11-01T01:02:10.281945Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 1 01:02:10.282401 waagent[1627]: 2025-11-01T01:02:10.282348Z INFO EnvHandler ExtHandler Configure routes Nov 1 01:02:10.282648 waagent[1627]: 2025-11-01T01:02:10.282600Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 1 01:02:10.283213 waagent[1627]: 2025-11-01T01:02:10.283163Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 1 01:02:10.283351 waagent[1627]: 2025-11-01T01:02:10.283285Z INFO EnvHandler ExtHandler Gateway:None Nov 1 01:02:10.283492 waagent[1627]: 2025-11-01T01:02:10.283450Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Nov 1 01:02:10.284295 waagent[1627]: 2025-11-01T01:02:10.284218Z INFO EnvHandler ExtHandler Routes:None Nov 1 01:02:10.285097 waagent[1627]: 2025-11-01T01:02:10.285045Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 1 01:02:10.285097 waagent[1627]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 1 01:02:10.285097 waagent[1627]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Nov 1 01:02:10.285097 waagent[1627]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 1 01:02:10.285097 waagent[1627]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 1 01:02:10.285097 waagent[1627]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 01:02:10.285097 waagent[1627]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 01:02:10.314071 waagent[1627]: 2025-11-01T01:02:10.314001Z INFO ExtHandler ExtHandler Downloading agent manifest Nov 1 01:02:10.319732 waagent[1627]: 2025-11-01T01:02:10.319653Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Nov 1 01:02:10.321045 waagent[1627]: 2025-11-01T01:02:10.320982Z INFO MonitorHandler ExtHandler Network interfaces: Nov 1 01:02:10.321045 waagent[1627]: Executing ['ip', '-a', '-o', 'link']: Nov 1 01:02:10.321045 waagent[1627]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 1 01:02:10.321045 waagent[1627]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:80:e2 brd ff:ff:ff:ff:ff:ff Nov 1 01:02:10.321045 waagent[1627]: 3: enP9347s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2e:80:e2 brd ff:ff:ff:ff:ff:ff\ altname enP9347p0s2 Nov 1 01:02:10.321045 waagent[1627]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 1 01:02:10.321045 waagent[1627]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 1 01:02:10.321045 waagent[1627]: 2: eth0 inet 10.200.4.9/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 1 01:02:10.321045 waagent[1627]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 1 01:02:10.321045 waagent[1627]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Nov 1 01:02:10.321045 waagent[1627]: 2: eth0 inet6 fe80::7e1e:52ff:fe2e:80e2/64 scope link \ valid_lft forever preferred_lft forever Nov 1 01:02:10.337753 waagent[1627]: 2025-11-01T01:02:10.337653Z INFO ExtHandler ExtHandler Nov 1 01:02:10.339743 waagent[1627]: 2025-11-01T01:02:10.339672Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2a7b6525-471a-4447-b63c-1498efc65d5d correlation 7b51fefa-4839-4bd3-8615-f866b66175c9 created: 2025-11-01T01:00:42.892906Z] Nov 1 01:02:10.343590 waagent[1627]: 2025-11-01T01:02:10.343527Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 1 01:02:10.347572 waagent[1627]: 2025-11-01T01:02:10.347510Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms] Nov 1 01:02:10.372132 waagent[1627]: 2025-11-01T01:02:10.372047Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Nov 1 01:02:10.374941 waagent[1627]: 2025-11-01T01:02:10.374871Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.15.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 539AE639-6A14-4DC6-9810-8CB1BF95C765;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Nov 1 01:02:10.399415 waagent[1627]: 2025-11-01T01:02:10.399343Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 1 01:02:20.203953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 01:02:20.204318 systemd[1]: Stopped kubelet.service. Nov 1 01:02:20.206388 systemd[1]: Starting kubelet.service... Nov 1 01:02:20.333452 systemd[1]: Started kubelet.service. Nov 1 01:02:21.053976 kubelet[1684]: E1101 01:02:21.053913 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:02:21.055980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:02:21.056203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:02:30.000060 systemd[1]: Created slice system-sshd.slice. Nov 1 01:02:30.002198 systemd[1]: Started sshd@0-10.200.4.9:22-10.200.16.10:42690.service. Nov 1 01:02:30.831371 sshd[1691]: Accepted publickey for core from 10.200.16.10 port 42690 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 01:02:30.833122 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:30.838480 systemd-logind[1426]: New session 3 of user core. Nov 1 01:02:30.839156 systemd[1]: Started session-3.scope. Nov 1 01:02:31.203958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 01:02:31.204223 systemd[1]: Stopped kubelet.service. Nov 1 01:02:31.206068 systemd[1]: Starting kubelet.service... Nov 1 01:02:31.343749 systemd[1]: Started sshd@1-10.200.4.9:22-10.200.16.10:42696.service. Nov 1 01:02:31.354737 systemd[1]: Started kubelet.service. Nov 1 01:02:31.943219 sshd[1698]: Accepted publickey for core from 10.200.16.10 port 42696 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 01:02:31.944999 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:31.950825 systemd[1]: Started session-4.scope. Nov 1 01:02:31.951291 systemd-logind[1426]: New session 4 of user core. Nov 1 01:02:32.069440 kubelet[1701]: E1101 01:02:32.069383 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:02:32.071052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:02:32.071220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:02:32.364220 sshd[1698]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:32.367592 systemd[1]: sshd@1-10.200.4.9:22-10.200.16.10:42696.service: Deactivated successfully. Nov 1 01:02:32.368630 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 01:02:32.369406 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. 
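The kubelet crash loop recorded above (and twice earlier) is the expected state of a node that has not yet been bootstrapped: /var/lib/kubelet/config.yaml is normally written by the join/provisioning tooling rather than shipped in the image. As a rough sketch of the file the kubelet is looking for -- every value below is an illustrative assumption, except cgroupDriver and staticPodPath, which match the SystemdCgroup=true setting in the containerd dump and the static pod path the kubelet logs later:

import pathlib

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10        # hypothetical cluster DNS service address
"""

# Writing the file is what stops the restart loop; normally kubeadm (or the
# cluster's own provisioner) does this as part of joining the node.
pathlib.Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
pathlib.Path("/var/lib/kubelet/config.yaml").write_text(KUBELET_CONFIG)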
Nov 1 01:02:32.370341 systemd-logind[1426]: Removed session 4. Nov 1 01:02:32.464756 systemd[1]: Started sshd@2-10.200.4.9:22-10.200.16.10:42708.service. Nov 1 01:02:33.063792 sshd[1712]: Accepted publickey for core from 10.200.16.10 port 42708 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 01:02:33.065578 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:33.070504 systemd[1]: Started session-5.scope. Nov 1 01:02:33.070983 systemd-logind[1426]: New session 5 of user core. Nov 1 01:02:33.492530 sshd[1712]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:33.495724 systemd[1]: sshd@2-10.200.4.9:22-10.200.16.10:42708.service: Deactivated successfully. Nov 1 01:02:33.496577 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 01:02:33.497169 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Nov 1 01:02:33.497940 systemd-logind[1426]: Removed session 5. Nov 1 01:02:33.590681 systemd[1]: Started sshd@3-10.200.4.9:22-10.200.16.10:42716.service. Nov 1 01:02:34.182229 sshd[1718]: Accepted publickey for core from 10.200.16.10 port 42716 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 01:02:34.183788 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:34.188355 systemd-logind[1426]: New session 6 of user core. Nov 1 01:02:34.188738 systemd[1]: Started session-6.scope. Nov 1 01:02:34.600679 sshd[1718]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:34.603932 systemd[1]: sshd@3-10.200.4.9:22-10.200.16.10:42716.service: Deactivated successfully. Nov 1 01:02:34.604765 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 01:02:34.605391 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. Nov 1 01:02:34.606144 systemd-logind[1426]: Removed session 6. Nov 1 01:02:34.698661 systemd[1]: Started sshd@4-10.200.4.9:22-10.200.16.10:42732.service. Nov 1 01:02:35.291638 sshd[1724]: Accepted publickey for core from 10.200.16.10 port 42732 ssh2: RSA SHA256:AlhY8Qb6fVlZV7QUYELheUuRM7INJ6rje8ez2X++JEk Nov 1 01:02:35.293321 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:35.299036 systemd[1]: Started session-7.scope. Nov 1 01:02:35.299631 systemd-logind[1426]: New session 7 of user core. Nov 1 01:02:35.803909 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 01:02:35.804211 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:02:35.817860 systemd[1]: Starting coreos-metadata.service... 
Nov 1 01:02:35.892197 coreos-metadata[1731]: Nov 01 01:02:35.892 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 1 01:02:35.894907 coreos-metadata[1731]: Nov 01 01:02:35.894 INFO Fetch successful Nov 1 01:02:35.895120 coreos-metadata[1731]: Nov 01 01:02:35.895 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 1 01:02:35.896986 coreos-metadata[1731]: Nov 01 01:02:35.896 INFO Fetch successful Nov 1 01:02:35.897366 coreos-metadata[1731]: Nov 01 01:02:35.897 INFO Fetching http://168.63.129.16/machine/0f3bdc7a-9d0b-4d39-915d-4f3a86a4d9c4/7bb169fa%2D48b3%2D4540%2D8180%2Dd8c9caa13aca.%5Fci%2D3510.3.8%2Dn%2D16445aab1e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 1 01:02:35.898981 coreos-metadata[1731]: Nov 01 01:02:35.898 INFO Fetch successful Nov 1 01:02:35.933498 coreos-metadata[1731]: Nov 01 01:02:35.933 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 1 01:02:35.943111 coreos-metadata[1731]: Nov 01 01:02:35.943 INFO Fetch successful Nov 1 01:02:35.952342 systemd[1]: Finished coreos-metadata.service. Nov 1 01:02:36.035279 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 1 01:02:36.497675 systemd[1]: Stopped kubelet.service. Nov 1 01:02:36.500471 systemd[1]: Starting kubelet.service... Nov 1 01:02:36.532522 systemd[1]: Reloading. Nov 1 01:02:36.639213 /usr/lib/systemd/system-generators/torcx-generator[1788]: time="2025-11-01T01:02:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:02:36.639285 /usr/lib/systemd/system-generators/torcx-generator[1788]: time="2025-11-01T01:02:36Z" level=info msg="torcx already run" Nov 1 01:02:36.734652 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:02:36.734681 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:02:36.756626 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:02:36.873355 systemd[1]: Started kubelet.service. Nov 1 01:02:36.884402 systemd[1]: Stopping kubelet.service... Nov 1 01:02:36.885349 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:02:36.885574 systemd[1]: Stopped kubelet.service. Nov 1 01:02:36.887572 systemd[1]: Starting kubelet.service... Nov 1 01:02:38.012324 systemd[1]: Started kubelet.service. Nov 1 01:02:38.066869 kubelet[1862]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:02:38.066869 kubelet[1862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
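The coreos-metadata fetches above go straight to the Azure instance metadata service; the same vmSize query can be reproduced with the standard library, with the caveat that IMDS only answers requests carrying a Metadata: true header (a general Azure requirement, not something visible in this log):

import urllib.request

# URL copied verbatim from the coreos-metadata entry above.
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())        # prints the VM size string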
Nov 1 01:02:38.067375 kubelet[1862]: I1101 01:02:38.066967 1862 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:02:38.577543 kubelet[1862]: I1101 01:02:38.577499 1862 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 01:02:38.577543 kubelet[1862]: I1101 01:02:38.577525 1862 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:02:38.579935 kubelet[1862]: I1101 01:02:38.579903 1862 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 01:02:38.579935 kubelet[1862]: I1101 01:02:38.579938 1862 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 01:02:38.580280 kubelet[1862]: I1101 01:02:38.580260 1862 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 01:02:38.583890 kubelet[1862]: I1101 01:02:38.583863 1862 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:02:38.595543 kubelet[1862]: E1101 01:02:38.595506 1862 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:02:38.595734 kubelet[1862]: I1101 01:02:38.595720 1862 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 01:02:38.600265 kubelet[1862]: I1101 01:02:38.600231 1862 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 1 01:02:38.602163 kubelet[1862]: I1101 01:02:38.602114 1862 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:02:38.602380 kubelet[1862]: I1101 01:02:38.602160 1862 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.4.9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 01:02:38.602561 kubelet[1862]: I1101 
01:02:38.602385 1862 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 01:02:38.602561 kubelet[1862]: I1101 01:02:38.602400 1862 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 01:02:38.602561 kubelet[1862]: I1101 01:02:38.602528 1862 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 01:02:38.610314 kubelet[1862]: I1101 01:02:38.610288 1862 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:02:38.611812 kubelet[1862]: I1101 01:02:38.611790 1862 kubelet.go:475] "Attempting to sync node with API server" Nov 1 01:02:38.611812 kubelet[1862]: I1101 01:02:38.611813 1862 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:02:38.612146 kubelet[1862]: I1101 01:02:38.612128 1862 kubelet.go:387] "Adding apiserver pod source" Nov 1 01:02:38.612263 kubelet[1862]: E1101 01:02:38.612212 1862 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:38.612333 kubelet[1862]: E1101 01:02:38.612278 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:38.612333 kubelet[1862]: I1101 01:02:38.612304 1862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:02:38.615303 kubelet[1862]: I1101 01:02:38.615285 1862 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:02:38.615923 kubelet[1862]: I1101 01:02:38.615903 1862 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 01:02:38.616048 kubelet[1862]: I1101 01:02:38.616038 1862 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 01:02:38.616151 kubelet[1862]: W1101 01:02:38.616140 1862 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 01:02:38.618549 kubelet[1862]: I1101 01:02:38.618535 1862 server.go:1262] "Started kubelet" Nov 1 01:02:38.619750 kubelet[1862]: E1101 01:02:38.619133 1862 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 01:02:38.619750 kubelet[1862]: E1101 01:02:38.619259 1862 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"10.200.4.9\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 01:02:38.619750 kubelet[1862]: I1101 01:02:38.619293 1862 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:02:38.629899 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Nov 1 01:02:38.630094 kubelet[1862]: I1101 01:02:38.630071 1862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:02:38.631214 kubelet[1862]: I1101 01:02:38.631178 1862 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:02:38.631361 kubelet[1862]: I1101 01:02:38.631345 1862 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 01:02:38.631726 kubelet[1862]: I1101 01:02:38.631706 1862 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:02:38.635795 kubelet[1862]: I1101 01:02:38.635775 1862 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:02:38.636882 kubelet[1862]: I1101 01:02:38.636860 1862 server.go:310] "Adding debug handlers to kubelet server" Nov 1 01:02:38.637944 kubelet[1862]: I1101 01:02:38.637915 1862 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 01:02:38.638943 kubelet[1862]: E1101 01:02:38.638145 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:38.638943 kubelet[1862]: I1101 01:02:38.638455 1862 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 01:02:38.640759 kubelet[1862]: I1101 01:02:38.640728 1862 factory.go:223] Registration of the systemd container factory successfully Nov 1 01:02:38.640845 kubelet[1862]: I1101 01:02:38.640833 1862 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:02:38.642464 kubelet[1862]: I1101 01:02:38.642450 1862 reconciler.go:29] "Reconciler: start to sync state" Nov 1 01:02:38.644280 kubelet[1862]: I1101 01:02:38.643923 1862 factory.go:223] Registration of the containerd container factory successfully Nov 1 01:02:38.644497 kubelet[1862]: E1101 01:02:38.644150 1862 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 01:02:38.646749 kubelet[1862]: E1101 01:02:38.644588 1862 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.4.9.1873bc566d45f94c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.4.9,UID:10.200.4.9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.4.9,},FirstTimestamp:2025-11-01 01:02:38.618507596 +0000 UTC m=+0.598895143,LastTimestamp:2025-11-01 01:02:38.618507596 +0000 UTC m=+0.598895143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.4.9,}" Nov 1 01:02:38.647063 kubelet[1862]: E1101 01:02:38.647042 1862 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:02:38.664672 kubelet[1862]: E1101 01:02:38.664638 1862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.4.9\" not found" node="10.200.4.9" Nov 1 01:02:38.668404 kubelet[1862]: I1101 01:02:38.668378 1862 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:02:38.668404 kubelet[1862]: I1101 01:02:38.668400 1862 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:02:38.668566 kubelet[1862]: I1101 01:02:38.668419 1862 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:02:38.679680 kubelet[1862]: I1101 01:02:38.679651 1862 policy_none.go:49] "None policy: Start" Nov 1 01:02:38.679680 kubelet[1862]: I1101 01:02:38.679675 1862 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 01:02:38.679833 kubelet[1862]: I1101 01:02:38.679688 1862 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 01:02:38.688524 kubelet[1862]: I1101 01:02:38.688502 1862 policy_none.go:47] "Start" Nov 1 01:02:38.692292 systemd[1]: Created slice kubepods.slice. Nov 1 01:02:38.697195 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 01:02:38.702402 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 01:02:38.708803 kubelet[1862]: I1101 01:02:38.708780 1862 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 01:02:38.710488 kubelet[1862]: I1101 01:02:38.710469 1862 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 01:02:38.710602 kubelet[1862]: I1101 01:02:38.710594 1862 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 01:02:38.710677 kubelet[1862]: I1101 01:02:38.710669 1862 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 01:02:38.710764 kubelet[1862]: E1101 01:02:38.710750 1862 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:02:38.712368 kubelet[1862]: E1101 01:02:38.712349 1862 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 01:02:38.712649 kubelet[1862]: I1101 01:02:38.712635 1862 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:02:38.712771 kubelet[1862]: I1101 01:02:38.712740 1862 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:02:38.714397 kubelet[1862]: I1101 01:02:38.714381 1862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:02:38.715707 kubelet[1862]: E1101 01:02:38.715688 1862 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 01:02:38.715846 kubelet[1862]: E1101 01:02:38.715833 1862 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.4.9\" not found" Nov 1 01:02:38.813668 kubelet[1862]: I1101 01:02:38.813634 1862 kubelet_node_status.go:75] "Attempting to register node" node="10.200.4.9" Nov 1 01:02:38.818950 kubelet[1862]: I1101 01:02:38.818918 1862 kubelet_node_status.go:78] "Successfully registered node" node="10.200.4.9" Nov 1 01:02:38.818950 kubelet[1862]: E1101 01:02:38.818952 1862 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"10.200.4.9\": node \"10.200.4.9\" not found" Nov 1 01:02:38.833084 kubelet[1862]: E1101 01:02:38.832984 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:38.933614 kubelet[1862]: E1101 01:02:38.933558 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.032253 sudo[1727]: pam_unix(sudo:session): session closed for user root Nov 1 01:02:39.034078 kubelet[1862]: E1101 01:02:39.034045 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.134522 kubelet[1862]: E1101 01:02:39.134385 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.161910 sshd[1724]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:39.165377 systemd[1]: sshd@4-10.200.4.9:22-10.200.16.10:42732.service: Deactivated successfully. Nov 1 01:02:39.166438 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 01:02:39.167308 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Nov 1 01:02:39.168334 systemd-logind[1426]: Removed session 7. 
Nov 1 01:02:39.235197 kubelet[1862]: E1101 01:02:39.235143 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.335895 kubelet[1862]: E1101 01:02:39.335826 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.436710 kubelet[1862]: E1101 01:02:39.436561 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.537706 kubelet[1862]: E1101 01:02:39.537649 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.582352 kubelet[1862]: I1101 01:02:39.582293 1862 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 1 01:02:39.582611 kubelet[1862]: I1101 01:02:39.582570 1862 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Nov 1 01:02:39.582793 kubelet[1862]: I1101 01:02:39.582664 1862 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Nov 1 01:02:39.612755 kubelet[1862]: E1101 01:02:39.612694 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:39.637825 kubelet[1862]: E1101 01:02:39.637774 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.737991 kubelet[1862]: E1101 01:02:39.737942 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.838620 kubelet[1862]: E1101 01:02:39.838567 1862 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"10.200.4.9\" not found" Nov 1 01:02:39.939314 kubelet[1862]: I1101 01:02:39.939285 1862 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Nov 1 01:02:39.939724 env[1433]: time="2025-11-01T01:02:39.939682425Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 01:02:39.940128 kubelet[1862]: I1101 01:02:39.939875 1862 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Nov 1 01:02:40.613151 kubelet[1862]: E1101 01:02:40.613092 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:40.613151 kubelet[1862]: I1101 01:02:40.613103 1862 apiserver.go:52] "Watching apiserver" Nov 1 01:02:40.629947 systemd[1]: Created slice kubepods-burstable-poda3ccde92_f282_4223_900a_33fd7fdb2f34.slice. Nov 1 01:02:40.639910 systemd[1]: Created slice kubepods-besteffort-pod5c4099c5_6c7f_43c8_9a51_77b17e002e9e.slice. 
Nov 1 01:02:40.641943 kubelet[1862]: I1101 01:02:40.641917 1862 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 01:02:40.656443 kubelet[1862]: I1101 01:02:40.656411 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-host-proc-sys-kernel\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656443 kubelet[1862]: I1101 01:02:40.656444 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3ccde92-f282-4223-900a-33fd7fdb2f34-hubble-tls\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656654 kubelet[1862]: I1101 01:02:40.656471 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlr9s\" (UniqueName: \"kubernetes.io/projected/5c4099c5-6c7f-43c8-9a51-77b17e002e9e-kube-api-access-tlr9s\") pod \"kube-proxy-tcgc6\" (UID: \"5c4099c5-6c7f-43c8-9a51-77b17e002e9e\") " pod="kube-system/kube-proxy-tcgc6" Nov 1 01:02:40.656654 kubelet[1862]: I1101 01:02:40.656494 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-config-path\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656654 kubelet[1862]: I1101 01:02:40.656512 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2nv\" (UniqueName: \"kubernetes.io/projected/a3ccde92-f282-4223-900a-33fd7fdb2f34-kube-api-access-qf2nv\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656654 kubelet[1862]: I1101 01:02:40.656530 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c4099c5-6c7f-43c8-9a51-77b17e002e9e-kube-proxy\") pod \"kube-proxy-tcgc6\" (UID: \"5c4099c5-6c7f-43c8-9a51-77b17e002e9e\") " pod="kube-system/kube-proxy-tcgc6" Nov 1 01:02:40.656654 kubelet[1862]: I1101 01:02:40.656550 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c4099c5-6c7f-43c8-9a51-77b17e002e9e-lib-modules\") pod \"kube-proxy-tcgc6\" (UID: \"5c4099c5-6c7f-43c8-9a51-77b17e002e9e\") " pod="kube-system/kube-proxy-tcgc6" Nov 1 01:02:40.656861 kubelet[1862]: I1101 01:02:40.656589 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-run\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656861 kubelet[1862]: I1101 01:02:40.656609 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3ccde92-f282-4223-900a-33fd7fdb2f34-clustermesh-secrets\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " 
pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656861 kubelet[1862]: I1101 01:02:40.656630 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-bpf-maps\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656861 kubelet[1862]: I1101 01:02:40.656649 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cni-path\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656861 kubelet[1862]: I1101 01:02:40.656674 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-xtables-lock\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.656861 kubelet[1862]: I1101 01:02:40.656695 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-host-proc-sys-net\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.657061 kubelet[1862]: I1101 01:02:40.656728 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c4099c5-6c7f-43c8-9a51-77b17e002e9e-xtables-lock\") pod \"kube-proxy-tcgc6\" (UID: \"5c4099c5-6c7f-43c8-9a51-77b17e002e9e\") " pod="kube-system/kube-proxy-tcgc6" Nov 1 01:02:40.657061 kubelet[1862]: I1101 01:02:40.656751 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-hostproc\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.657061 kubelet[1862]: I1101 01:02:40.656772 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-cgroup\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.657061 kubelet[1862]: I1101 01:02:40.656791 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-etc-cni-netd\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.657061 kubelet[1862]: I1101 01:02:40.656814 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-lib-modules\") pod \"cilium-cxwsj\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " pod="kube-system/cilium-cxwsj" Nov 1 01:02:40.757827 kubelet[1862]: I1101 01:02:40.757782 1862 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 01:02:40.943983 env[1433]: time="2025-11-01T01:02:40.943846412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxwsj,Uid:a3ccde92-f282-4223-900a-33fd7fdb2f34,Namespace:kube-system,Attempt:0,}" Nov 1 01:02:40.955847 env[1433]: time="2025-11-01T01:02:40.955808436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tcgc6,Uid:5c4099c5-6c7f-43c8-9a51-77b17e002e9e,Namespace:kube-system,Attempt:0,}" Nov 1 01:02:41.614224 kubelet[1862]: E1101 01:02:41.614168 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:42.095668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956007241.mount: Deactivated successfully. Nov 1 01:02:42.126850 env[1433]: time="2025-11-01T01:02:42.126785889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:42.131143 env[1433]: time="2025-11-01T01:02:42.131088696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:42.141519 env[1433]: time="2025-11-01T01:02:42.141472914Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:42.144109 env[1433]: time="2025-11-01T01:02:42.144063218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:42.146642 env[1433]: time="2025-11-01T01:02:42.146604423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:42.149403 env[1433]: time="2025-11-01T01:02:42.149369328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:42.152177 env[1433]: time="2025-11-01T01:02:42.152142132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:42.155054 env[1433]: time="2025-11-01T01:02:42.155019137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:42.216953 env[1433]: time="2025-11-01T01:02:42.216866344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:42.216953 env[1433]: time="2025-11-01T01:02:42.216928245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:42.217224 env[1433]: time="2025-11-01T01:02:42.217184445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:42.217594 env[1433]: time="2025-11-01T01:02:42.217526446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/568ff70fc625c7435e3e0a9adcb9a1f6af03aeb3deb1726eadb4bbad9fe06eb2 pid=1913 runtime=io.containerd.runc.v2 Nov 1 01:02:42.228564 env[1433]: time="2025-11-01T01:02:42.228492265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:42.228767 env[1433]: time="2025-11-01T01:02:42.228743365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:42.228857 env[1433]: time="2025-11-01T01:02:42.228837365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:42.229059 env[1433]: time="2025-11-01T01:02:42.229034965Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991 pid=1932 runtime=io.containerd.runc.v2 Nov 1 01:02:42.244322 systemd[1]: Started cri-containerd-568ff70fc625c7435e3e0a9adcb9a1f6af03aeb3deb1726eadb4bbad9fe06eb2.scope. Nov 1 01:02:42.254102 systemd[1]: Started cri-containerd-63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991.scope. Nov 1 01:02:42.287321 env[1433]: time="2025-11-01T01:02:42.287273366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tcgc6,Uid:5c4099c5-6c7f-43c8-9a51-77b17e002e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"568ff70fc625c7435e3e0a9adcb9a1f6af03aeb3deb1726eadb4bbad9fe06eb2\"" Nov 1 01:02:42.292144 env[1433]: time="2025-11-01T01:02:42.292096975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 01:02:42.299324 env[1433]: time="2025-11-01T01:02:42.299274487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cxwsj,Uid:a3ccde92-f282-4223-900a-33fd7fdb2f34,Namespace:kube-system,Attempt:0,} returns sandbox id \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\"" Nov 1 01:02:42.615144 kubelet[1862]: E1101 01:02:42.615092 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:43.207310 update_engine[1427]: I1101 01:02:43.207266 1427 update_attempter.cc:509] Updating boot flags... Nov 1 01:02:43.619028 kubelet[1862]: E1101 01:02:43.618731 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:43.893883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031583723.mount: Deactivated successfully. 
Nov 1 01:02:44.619200 kubelet[1862]: E1101 01:02:44.619144 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:45.619695 kubelet[1862]: E1101 01:02:45.619635 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:46.619904 kubelet[1862]: E1101 01:02:46.619832 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:47.620123 kubelet[1862]: E1101 01:02:47.620062 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:48.620202 kubelet[1862]: E1101 01:02:48.620165 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:49.621258 kubelet[1862]: E1101 01:02:49.621181 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:50.622067 kubelet[1862]: E1101 01:02:50.622014 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:51.622231 kubelet[1862]: E1101 01:02:51.622176 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:52.622967 kubelet[1862]: E1101 01:02:52.622907 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:53.623860 kubelet[1862]: E1101 01:02:53.623824 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:54.624328 kubelet[1862]: E1101 01:02:54.624271 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:55.108121 env[1433]: time="2025-11-01T01:02:55.108059277Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:55.122485 env[1433]: time="2025-11-01T01:02:55.122402888Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:55.126345 env[1433]: time="2025-11-01T01:02:55.126299191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:55.130647 env[1433]: time="2025-11-01T01:02:55.130602194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:02:55.131021 env[1433]: time="2025-11-01T01:02:55.130984095Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 01:02:55.132828 env[1433]: time="2025-11-01T01:02:55.132796596Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 01:02:55.141589 env[1433]: 
time="2025-11-01T01:02:55.141542902Z" level=info msg="CreateContainer within sandbox \"568ff70fc625c7435e3e0a9adcb9a1f6af03aeb3deb1726eadb4bbad9fe06eb2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 01:02:55.185065 env[1433]: time="2025-11-01T01:02:55.185006235Z" level=info msg="CreateContainer within sandbox \"568ff70fc625c7435e3e0a9adcb9a1f6af03aeb3deb1726eadb4bbad9fe06eb2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3b64f61d96314142d2d8b6029df3f71adfaf078d126b6250120150179b47cb1\"" Nov 1 01:02:55.185830 env[1433]: time="2025-11-01T01:02:55.185772336Z" level=info msg="StartContainer for \"e3b64f61d96314142d2d8b6029df3f71adfaf078d126b6250120150179b47cb1\"" Nov 1 01:02:55.216532 systemd[1]: Started cri-containerd-e3b64f61d96314142d2d8b6029df3f71adfaf078d126b6250120150179b47cb1.scope. Nov 1 01:02:55.257619 env[1433]: time="2025-11-01T01:02:55.255428188Z" level=info msg="StartContainer for \"e3b64f61d96314142d2d8b6029df3f71adfaf078d126b6250120150179b47cb1\" returns successfully" Nov 1 01:02:55.624998 kubelet[1862]: E1101 01:02:55.624949 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:56.162421 systemd[1]: run-containerd-runc-k8s.io-e3b64f61d96314142d2d8b6029df3f71adfaf078d126b6250120150179b47cb1-runc.ZoL9uH.mount: Deactivated successfully. Nov 1 01:02:56.625920 kubelet[1862]: E1101 01:02:56.625840 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:57.626476 kubelet[1862]: E1101 01:02:57.626418 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:58.612557 kubelet[1862]: E1101 01:02:58.612490 1862 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:58.627031 kubelet[1862]: E1101 01:02:58.626960 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:02:59.627689 kubelet[1862]: E1101 01:02:59.627642 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:00.628691 kubelet[1862]: E1101 01:03:00.628622 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:00.834284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238798923.mount: Deactivated successfully. 
Nov 1 01:03:01.629206 kubelet[1862]: E1101 01:03:01.629161 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:02.629366 kubelet[1862]: E1101 01:03:02.629292 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:03.514848 env[1433]: time="2025-11-01T01:03:03.514791698Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:03.523286 env[1433]: time="2025-11-01T01:03:03.523232665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:03.528981 env[1433]: time="2025-11-01T01:03:03.528936810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:03.529559 env[1433]: time="2025-11-01T01:03:03.529520415Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 01:03:03.538001 env[1433]: time="2025-11-01T01:03:03.537964182Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 01:03:03.584385 env[1433]: time="2025-11-01T01:03:03.584332950Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\"" Nov 1 01:03:03.585058 env[1433]: time="2025-11-01T01:03:03.585023955Z" level=info msg="StartContainer for \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\"" Nov 1 01:03:03.607863 systemd[1]: Started cri-containerd-07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533.scope. Nov 1 01:03:03.630278 kubelet[1862]: E1101 01:03:03.630215 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:03.648230 systemd[1]: cri-containerd-07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533.scope: Deactivated successfully. 
Nov 1 01:03:03.650281 env[1433]: time="2025-11-01T01:03:03.648634060Z" level=info msg="StartContainer for \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\" returns successfully" Nov 1 01:03:03.780155 kubelet[1862]: I1101 01:03:03.779899 1862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tcgc6" podStartSLOduration=12.93882728 podStartE2EDuration="25.779885102s" podCreationTimestamp="2025-11-01 01:02:38 +0000 UTC" firstStartedPulling="2025-11-01 01:02:42.291103173 +0000 UTC m=+4.271490720" lastFinishedPulling="2025-11-01 01:02:55.132160995 +0000 UTC m=+17.112548542" observedRunningTime="2025-11-01 01:02:55.754029979 +0000 UTC m=+17.734417626" watchObservedRunningTime="2025-11-01 01:03:03.779885102 +0000 UTC m=+25.760272649" Nov 1 01:03:04.559946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533-rootfs.mount: Deactivated successfully. Nov 1 01:03:04.630782 kubelet[1862]: E1101 01:03:04.630732 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:05.631046 kubelet[1862]: E1101 01:03:05.630986 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:06.632035 kubelet[1862]: E1101 01:03:06.631946 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:07.632549 kubelet[1862]: E1101 01:03:07.632491 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:07.955865 env[1433]: time="2025-11-01T01:03:07.955774311Z" level=info msg="shim disconnected" id=07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533 Nov 1 01:03:07.956399 env[1433]: time="2025-11-01T01:03:07.955854312Z" level=warning msg="cleaning up after shim disconnected" id=07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533 namespace=k8s.io Nov 1 01:03:07.956399 env[1433]: time="2025-11-01T01:03:07.955886212Z" level=info msg="cleaning up dead shim" Nov 1 01:03:07.964480 env[1433]: time="2025-11-01T01:03:07.964444073Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:03:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2305 runtime=io.containerd.runc.v2\n" Nov 1 01:03:08.633188 kubelet[1862]: E1101 01:03:08.633126 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:08.778713 env[1433]: time="2025-11-01T01:03:08.778658825Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 01:03:08.829743 env[1433]: time="2025-11-01T01:03:08.829674478Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\"" Nov 1 01:03:08.830967 env[1433]: time="2025-11-01T01:03:08.830317783Z" level=info msg="StartContainer for \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\"" Nov 1 01:03:08.854411 systemd[1]: Started cri-containerd-307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723.scope. 
Nov 1 01:03:08.889977 env[1433]: time="2025-11-01T01:03:08.889598493Z" level=info msg="StartContainer for \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\" returns successfully" Nov 1 01:03:08.896278 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 01:03:08.896797 systemd[1]: Stopped systemd-sysctl.service. Nov 1 01:03:08.896998 systemd[1]: Stopping systemd-sysctl.service... Nov 1 01:03:08.899526 systemd[1]: Starting systemd-sysctl.service... Nov 1 01:03:08.902206 systemd[1]: cri-containerd-307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723.scope: Deactivated successfully. Nov 1 01:03:08.912028 systemd[1]: Finished systemd-sysctl.service. Nov 1 01:03:08.943788 env[1433]: time="2025-11-01T01:03:08.943732569Z" level=info msg="shim disconnected" id=307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723 Nov 1 01:03:08.943788 env[1433]: time="2025-11-01T01:03:08.943786969Z" level=warning msg="cleaning up after shim disconnected" id=307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723 namespace=k8s.io Nov 1 01:03:08.944092 env[1433]: time="2025-11-01T01:03:08.943798969Z" level=info msg="cleaning up dead shim" Nov 1 01:03:08.951570 env[1433]: time="2025-11-01T01:03:08.951518323Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:03:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2370 runtime=io.containerd.runc.v2\n" Nov 1 01:03:09.634321 kubelet[1862]: E1101 01:03:09.634267 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:09.783117 env[1433]: time="2025-11-01T01:03:09.783066748Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 01:03:09.809281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723-rootfs.mount: Deactivated successfully. Nov 1 01:03:09.820366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457038719.mount: Deactivated successfully. Nov 1 01:03:09.837999 env[1433]: time="2025-11-01T01:03:09.837933418Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\"" Nov 1 01:03:09.838618 env[1433]: time="2025-11-01T01:03:09.838584923Z" level=info msg="StartContainer for \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\"" Nov 1 01:03:09.859690 systemd[1]: Started cri-containerd-b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb.scope. Nov 1 01:03:09.903058 systemd[1]: cri-containerd-b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb.scope: Deactivated successfully. 
Nov 1 01:03:09.904315 env[1433]: time="2025-11-01T01:03:09.904268166Z" level=info msg="StartContainer for \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\" returns successfully" Nov 1 01:03:09.944334 env[1433]: time="2025-11-01T01:03:09.944274136Z" level=info msg="shim disconnected" id=b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb Nov 1 01:03:09.944334 env[1433]: time="2025-11-01T01:03:09.944337536Z" level=warning msg="cleaning up after shim disconnected" id=b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb namespace=k8s.io Nov 1 01:03:09.944334 env[1433]: time="2025-11-01T01:03:09.944350436Z" level=info msg="cleaning up dead shim" Nov 1 01:03:09.952422 env[1433]: time="2025-11-01T01:03:09.952378590Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:03:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2429 runtime=io.containerd.runc.v2\n" Nov 1 01:03:10.634606 kubelet[1862]: E1101 01:03:10.634544 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:10.788684 env[1433]: time="2025-11-01T01:03:10.788636494Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 01:03:10.809405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb-rootfs.mount: Deactivated successfully. Nov 1 01:03:10.834371 env[1433]: time="2025-11-01T01:03:10.834322894Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\"" Nov 1 01:03:10.835100 env[1433]: time="2025-11-01T01:03:10.835029798Z" level=info msg="StartContainer for \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\"" Nov 1 01:03:10.862885 systemd[1]: Started cri-containerd-0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d.scope. Nov 1 01:03:10.897931 systemd[1]: cri-containerd-0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d.scope: Deactivated successfully. 
Nov 1 01:03:10.903780 env[1433]: time="2025-11-01T01:03:10.903737850Z" level=info msg="StartContainer for \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\" returns successfully" Nov 1 01:03:10.934143 env[1433]: time="2025-11-01T01:03:10.934090049Z" level=info msg="shim disconnected" id=0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d Nov 1 01:03:10.934416 env[1433]: time="2025-11-01T01:03:10.934151350Z" level=warning msg="cleaning up after shim disconnected" id=0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d namespace=k8s.io Nov 1 01:03:10.934416 env[1433]: time="2025-11-01T01:03:10.934166550Z" level=info msg="cleaning up dead shim" Nov 1 01:03:10.941637 env[1433]: time="2025-11-01T01:03:10.941595799Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:03:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2487 runtime=io.containerd.runc.v2\n" Nov 1 01:03:11.635500 kubelet[1862]: E1101 01:03:11.635460 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:11.793269 env[1433]: time="2025-11-01T01:03:11.793214259Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 01:03:11.809630 systemd[1]: run-containerd-runc-k8s.io-0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d-runc.lR8BPv.mount: Deactivated successfully. Nov 1 01:03:11.809789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d-rootfs.mount: Deactivated successfully. Nov 1 01:03:11.828311 env[1433]: time="2025-11-01T01:03:11.828258183Z" level=info msg="CreateContainer within sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\"" Nov 1 01:03:11.829053 env[1433]: time="2025-11-01T01:03:11.829017988Z" level=info msg="StartContainer for \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\"" Nov 1 01:03:11.853004 systemd[1]: Started cri-containerd-307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5.scope. Nov 1 01:03:11.896269 env[1433]: time="2025-11-01T01:03:11.896129818Z" level=info msg="StartContainer for \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\" returns successfully" Nov 1 01:03:12.044609 kubelet[1862]: I1101 01:03:12.043564 1862 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 01:03:12.536356 kernel: Initializing XFRM netlink socket Nov 1 01:03:12.636024 kubelet[1862]: E1101 01:03:12.635980 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:12.809770 systemd[1]: run-containerd-runc-k8s.io-307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5-runc.aYD5Rp.mount: Deactivated successfully. 
Nov 1 01:03:12.811428 kubelet[1862]: I1101 01:03:12.810377 1862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cxwsj" podStartSLOduration=13.580415499 podStartE2EDuration="34.810356533s" podCreationTimestamp="2025-11-01 01:02:38 +0000 UTC" firstStartedPulling="2025-11-01 01:02:42.30073119 +0000 UTC m=+4.281118837" lastFinishedPulling="2025-11-01 01:03:03.530672224 +0000 UTC m=+25.511059871" observedRunningTime="2025-11-01 01:03:12.810057631 +0000 UTC m=+34.790445178" watchObservedRunningTime="2025-11-01 01:03:12.810356533 +0000 UTC m=+34.790744080" Nov 1 01:03:13.636497 kubelet[1862]: E1101 01:03:13.636441 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:14.198030 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 01:03:14.198158 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 01:03:14.199440 systemd-networkd[1595]: cilium_host: Link UP Nov 1 01:03:14.199621 systemd-networkd[1595]: cilium_net: Link UP Nov 1 01:03:14.199802 systemd-networkd[1595]: cilium_net: Gained carrier Nov 1 01:03:14.200649 systemd-networkd[1595]: cilium_host: Gained carrier Nov 1 01:03:14.242331 systemd-networkd[1595]: cilium_net: Gained IPv6LL Nov 1 01:03:14.368420 systemd-networkd[1595]: cilium_vxlan: Link UP Nov 1 01:03:14.368430 systemd-networkd[1595]: cilium_vxlan: Gained carrier Nov 1 01:03:14.622321 kernel: NET: Registered PF_ALG protocol family Nov 1 01:03:14.637559 kubelet[1862]: E1101 01:03:14.637513 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:15.026400 systemd-networkd[1595]: cilium_host: Gained IPv6LL Nov 1 01:03:15.361959 systemd-networkd[1595]: lxc_health: Link UP Nov 1 01:03:15.377305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 01:03:15.378093 systemd-networkd[1595]: lxc_health: Gained carrier Nov 1 01:03:15.638697 kubelet[1862]: E1101 01:03:15.638533 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:15.666492 systemd-networkd[1595]: cilium_vxlan: Gained IPv6LL Nov 1 01:03:16.561516 systemd-networkd[1595]: lxc_health: Gained IPv6LL Nov 1 01:03:16.638772 kubelet[1862]: E1101 01:03:16.638717 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:17.639854 kubelet[1862]: E1101 01:03:17.639793 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:18.613364 kubelet[1862]: E1101 01:03:18.613300 1862 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:18.640168 kubelet[1862]: E1101 01:03:18.640118 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:18.717923 systemd[1]: Created slice kubepods-besteffort-podecb93e7d_deb6_4c48_8df7_586ceded066a.slice. 
Nov 1 01:03:18.827841 kubelet[1862]: I1101 01:03:18.827780 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjn7v\" (UniqueName: \"kubernetes.io/projected/ecb93e7d-deb6-4c48-8df7-586ceded066a-kube-api-access-bjn7v\") pod \"nginx-deployment-bb8f74bfb-qsjhl\" (UID: \"ecb93e7d-deb6-4c48-8df7-586ceded066a\") " pod="default/nginx-deployment-bb8f74bfb-qsjhl" Nov 1 01:03:19.033480 env[1433]: time="2025-11-01T01:03:19.032921470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-qsjhl,Uid:ecb93e7d-deb6-4c48-8df7-586ceded066a,Namespace:default,Attempt:0,}" Nov 1 01:03:19.172953 systemd-networkd[1595]: lxcafa394cd9f6b: Link UP Nov 1 01:03:19.182359 kernel: eth0: renamed from tmp78dbe Nov 1 01:03:19.194507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:03:19.194703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcafa394cd9f6b: link becomes ready Nov 1 01:03:19.195787 systemd-networkd[1595]: lxcafa394cd9f6b: Gained carrier Nov 1 01:03:19.641853 kubelet[1862]: E1101 01:03:19.641802 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:19.786809 env[1433]: time="2025-11-01T01:03:19.786735293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:03:19.786809 env[1433]: time="2025-11-01T01:03:19.786772693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:03:19.787074 env[1433]: time="2025-11-01T01:03:19.786786693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:03:19.787074 env[1433]: time="2025-11-01T01:03:19.786915694Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78dbe8bc5b8af616432ef1d5280592ff203da993cf3077b83e4ecdd01690720c pid=3013 runtime=io.containerd.runc.v2 Nov 1 01:03:19.809307 systemd[1]: Started cri-containerd-78dbe8bc5b8af616432ef1d5280592ff203da993cf3077b83e4ecdd01690720c.scope. Nov 1 01:03:19.849167 env[1433]: time="2025-11-01T01:03:19.849120517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-bb8f74bfb-qsjhl,Uid:ecb93e7d-deb6-4c48-8df7-586ceded066a,Namespace:default,Attempt:0,} returns sandbox id \"78dbe8bc5b8af616432ef1d5280592ff203da993cf3077b83e4ecdd01690720c\"" Nov 1 01:03:19.851005 env[1433]: time="2025-11-01T01:03:19.850969927Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Nov 1 01:03:20.337429 systemd-networkd[1595]: lxcafa394cd9f6b: Gained IPv6LL Nov 1 01:03:20.642672 kubelet[1862]: E1101 01:03:20.642559 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:21.643311 kubelet[1862]: E1101 01:03:21.643230 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:22.643590 kubelet[1862]: E1101 01:03:22.643539 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:22.843293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569006586.mount: Deactivated successfully. 
Nov 1 01:03:23.644362 kubelet[1862]: E1101 01:03:23.644316 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:24.439696 env[1433]: time="2025-11-01T01:03:24.439639489Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:24.446720 env[1433]: time="2025-11-01T01:03:24.446677621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:24.450635 env[1433]: time="2025-11-01T01:03:24.450598339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:24.455400 env[1433]: time="2025-11-01T01:03:24.455366161Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:0537df20ac7c5485a0f6b7bfb8e3fbbc8714fce070bab2a6344e5cadfba58d90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:24.456008 env[1433]: time="2025-11-01T01:03:24.455976464Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\"" Nov 1 01:03:24.463451 env[1433]: time="2025-11-01T01:03:24.463417398Z" level=info msg="CreateContainer within sandbox \"78dbe8bc5b8af616432ef1d5280592ff203da993cf3077b83e4ecdd01690720c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Nov 1 01:03:24.506491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2957234053.mount: Deactivated successfully. Nov 1 01:03:24.531939 env[1433]: time="2025-11-01T01:03:24.531862413Z" level=info msg="CreateContainer within sandbox \"78dbe8bc5b8af616432ef1d5280592ff203da993cf3077b83e4ecdd01690720c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4c75798354d73f2d1d5bc77dd690b415ecb835daa75c3b0c31a79652633d16ce\"" Nov 1 01:03:24.532791 env[1433]: time="2025-11-01T01:03:24.532753317Z" level=info msg="StartContainer for \"4c75798354d73f2d1d5bc77dd690b415ecb835daa75c3b0c31a79652633d16ce\"" Nov 1 01:03:24.557281 systemd[1]: Started cri-containerd-4c75798354d73f2d1d5bc77dd690b415ecb835daa75c3b0c31a79652633d16ce.scope. 
Nov 1 01:03:24.589108 env[1433]: time="2025-11-01T01:03:24.589070576Z" level=info msg="StartContainer for \"4c75798354d73f2d1d5bc77dd690b415ecb835daa75c3b0c31a79652633d16ce\" returns successfully" Nov 1 01:03:24.644892 kubelet[1862]: E1101 01:03:24.644832 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:24.862516 kubelet[1862]: I1101 01:03:24.862456 1862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-bb8f74bfb-qsjhl" podStartSLOduration=2.255535187 podStartE2EDuration="6.862438133s" podCreationTimestamp="2025-11-01 01:03:18 +0000 UTC" firstStartedPulling="2025-11-01 01:03:19.850398724 +0000 UTC m=+41.830786271" lastFinishedPulling="2025-11-01 01:03:24.45730157 +0000 UTC m=+46.437689217" observedRunningTime="2025-11-01 01:03:24.862423433 +0000 UTC m=+46.842810980" watchObservedRunningTime="2025-11-01 01:03:24.862438133 +0000 UTC m=+46.842825680" Nov 1 01:03:25.645510 kubelet[1862]: E1101 01:03:25.645446 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:26.646393 kubelet[1862]: E1101 01:03:26.646338 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:27.647071 kubelet[1862]: E1101 01:03:27.646999 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:28.648216 kubelet[1862]: E1101 01:03:28.648145 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:29.056274 systemd[1]: Created slice kubepods-besteffort-pod16db08ef_ad84_460a_8523_7ba54347d3c2.slice. Nov 1 01:03:29.096187 kubelet[1862]: I1101 01:03:29.096122 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crf4j\" (UniqueName: \"kubernetes.io/projected/16db08ef-ad84-460a-8523-7ba54347d3c2-kube-api-access-crf4j\") pod \"nfs-server-provisioner-0\" (UID: \"16db08ef-ad84-460a-8523-7ba54347d3c2\") " pod="default/nfs-server-provisioner-0" Nov 1 01:03:29.096512 kubelet[1862]: I1101 01:03:29.096471 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/16db08ef-ad84-460a-8523-7ba54347d3c2-data\") pod \"nfs-server-provisioner-0\" (UID: \"16db08ef-ad84-460a-8523-7ba54347d3c2\") " pod="default/nfs-server-provisioner-0" Nov 1 01:03:29.364790 env[1433]: time="2025-11-01T01:03:29.364446077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:16db08ef-ad84-460a-8523-7ba54347d3c2,Namespace:default,Attempt:0,}" Nov 1 01:03:29.433302 systemd-networkd[1595]: lxcb5a3b42eae1f: Link UP Nov 1 01:03:29.441310 kernel: eth0: renamed from tmp812b8 Nov 1 01:03:29.454835 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:03:29.454961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb5a3b42eae1f: link becomes ready Nov 1 01:03:29.459779 systemd-networkd[1595]: lxcb5a3b42eae1f: Gained carrier Nov 1 01:03:29.609219 env[1433]: time="2025-11-01T01:03:29.609131676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:03:29.609219 env[1433]: time="2025-11-01T01:03:29.609176576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:03:29.609219 env[1433]: time="2025-11-01T01:03:29.609196876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:03:29.609714 env[1433]: time="2025-11-01T01:03:29.609653678Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/812b8c194a60df86cdfb95e42b1236e5676d930bf6f65424b7894845a355664b pid=3137 runtime=io.containerd.runc.v2 Nov 1 01:03:29.633615 systemd[1]: Started cri-containerd-812b8c194a60df86cdfb95e42b1236e5676d930bf6f65424b7894845a355664b.scope. Nov 1 01:03:29.648723 kubelet[1862]: E1101 01:03:29.648665 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:29.674274 env[1433]: time="2025-11-01T01:03:29.674214041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:16db08ef-ad84-460a-8523-7ba54347d3c2,Namespace:default,Attempt:0,} returns sandbox id \"812b8c194a60df86cdfb95e42b1236e5676d930bf6f65424b7894845a355664b\"" Nov 1 01:03:29.675964 env[1433]: time="2025-11-01T01:03:29.675930348Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Nov 1 01:03:30.232051 systemd[1]: run-containerd-runc-k8s.io-812b8c194a60df86cdfb95e42b1236e5676d930bf6f65424b7894845a355664b-runc.ZkgtE5.mount: Deactivated successfully. Nov 1 01:03:30.649416 kubelet[1862]: E1101 01:03:30.649278 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:30.705511 systemd-networkd[1595]: lxcb5a3b42eae1f: Gained IPv6LL Nov 1 01:03:31.649525 kubelet[1862]: E1101 01:03:31.649463 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:32.650389 kubelet[1862]: E1101 01:03:32.650336 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:33.651319 kubelet[1862]: E1101 01:03:33.651255 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:34.652170 kubelet[1862]: E1101 01:03:34.652106 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:35.652652 kubelet[1862]: E1101 01:03:35.652597 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:36.652917 kubelet[1862]: E1101 01:03:36.652860 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:37.653605 kubelet[1862]: E1101 01:03:37.653549 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:38.612301 kubelet[1862]: E1101 01:03:38.612222 1862 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:38.653963 kubelet[1862]: E1101 01:03:38.653922 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Nov 1 01:03:39.655358 kubelet[1862]: E1101 01:03:39.655292 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:40.656319 kubelet[1862]: E1101 01:03:40.656263 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:41.656600 kubelet[1862]: E1101 01:03:41.656541 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:42.657304 kubelet[1862]: E1101 01:03:42.657227 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:43.658211 kubelet[1862]: E1101 01:03:43.658146 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:44.658911 kubelet[1862]: E1101 01:03:44.658846 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:45.659569 kubelet[1862]: E1101 01:03:45.659503 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:46.660361 kubelet[1862]: E1101 01:03:46.660303 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:47.661452 kubelet[1862]: E1101 01:03:47.661397 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:48.662022 kubelet[1862]: E1101 01:03:48.661959 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:49.662733 kubelet[1862]: E1101 01:03:49.662670 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:50.663072 kubelet[1862]: E1101 01:03:50.663025 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:51.663653 kubelet[1862]: E1101 01:03:51.663597 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:52.664454 kubelet[1862]: E1101 01:03:52.664402 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:53.665022 kubelet[1862]: E1101 01:03:53.664963 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:54.665181 kubelet[1862]: E1101 01:03:54.665115 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:55.665459 kubelet[1862]: E1101 01:03:55.665383 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:55.951800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926859318.mount: Deactivated successfully. 
Nov 1 01:03:56.665747 kubelet[1862]: E1101 01:03:56.665670 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:57.666848 kubelet[1862]: E1101 01:03:57.666786 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:58.612390 kubelet[1862]: E1101 01:03:58.612338 1862 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:58.667953 kubelet[1862]: E1101 01:03:58.667905 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:59.063504 env[1433]: time="2025-11-01T01:03:59.063454516Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:59.069973 env[1433]: time="2025-11-01T01:03:59.069932030Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:59.074697 env[1433]: time="2025-11-01T01:03:59.074655741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:59.080021 env[1433]: time="2025-11-01T01:03:59.079974952Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:03:59.080462 env[1433]: time="2025-11-01T01:03:59.080419653Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Nov 1 01:03:59.088279 env[1433]: time="2025-11-01T01:03:59.088229271Z" level=info msg="CreateContainer within sandbox \"812b8c194a60df86cdfb95e42b1236e5676d930bf6f65424b7894845a355664b\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Nov 1 01:03:59.112966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519048875.mount: Deactivated successfully. Nov 1 01:03:59.126986 env[1433]: time="2025-11-01T01:03:59.126930857Z" level=info msg="CreateContainer within sandbox \"812b8c194a60df86cdfb95e42b1236e5676d930bf6f65424b7894845a355664b\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bd514f28e4eb9dabf6bc7bf892150430aab4a99b606938fcdf9f8fe23556c091\"" Nov 1 01:03:59.127954 env[1433]: time="2025-11-01T01:03:59.127905859Z" level=info msg="StartContainer for \"bd514f28e4eb9dabf6bc7bf892150430aab4a99b606938fcdf9f8fe23556c091\"" Nov 1 01:03:59.157347 systemd[1]: Started cri-containerd-bd514f28e4eb9dabf6bc7bf892150430aab4a99b606938fcdf9f8fe23556c091.scope. 
Nov 1 01:03:59.191021 env[1433]: time="2025-11-01T01:03:59.190959700Z" level=info msg="StartContainer for \"bd514f28e4eb9dabf6bc7bf892150430aab4a99b606938fcdf9f8fe23556c091\" returns successfully" Nov 1 01:03:59.669044 kubelet[1862]: E1101 01:03:59.668979 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:03:59.937377 kubelet[1862]: I1101 01:03:59.937206 1862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.530568756 podStartE2EDuration="30.937188366s" podCreationTimestamp="2025-11-01 01:03:29 +0000 UTC" firstStartedPulling="2025-11-01 01:03:29.675521547 +0000 UTC m=+51.655909194" lastFinishedPulling="2025-11-01 01:03:59.082141257 +0000 UTC m=+81.062528804" observedRunningTime="2025-11-01 01:03:59.936713865 +0000 UTC m=+81.917101412" watchObservedRunningTime="2025-11-01 01:03:59.937188366 +0000 UTC m=+81.917576013" Nov 1 01:04:00.109942 systemd[1]: run-containerd-runc-k8s.io-bd514f28e4eb9dabf6bc7bf892150430aab4a99b606938fcdf9f8fe23556c091-runc.4xMxs1.mount: Deactivated successfully. Nov 1 01:04:00.669633 kubelet[1862]: E1101 01:04:00.669567 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:01.670423 kubelet[1862]: E1101 01:04:01.670354 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:02.670571 kubelet[1862]: E1101 01:04:02.670506 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:03.671546 kubelet[1862]: E1101 01:04:03.671485 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:04.442936 systemd[1]: Created slice kubepods-besteffort-poda78e3062_0988_4664_8936_3ed7178096ec.slice. Nov 1 01:04:04.519641 kubelet[1862]: I1101 01:04:04.519585 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlbm\" (UniqueName: \"kubernetes.io/projected/a78e3062-0988-4664-8936-3ed7178096ec-kube-api-access-2rlbm\") pod \"test-pod-1\" (UID: \"a78e3062-0988-4664-8936-3ed7178096ec\") " pod="default/test-pod-1" Nov 1 01:04:04.519641 kubelet[1862]: I1101 01:04:04.519646 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-334290b5-8223-41f5-adb7-37a6ecc51482\" (UniqueName: \"kubernetes.io/nfs/a78e3062-0988-4664-8936-3ed7178096ec-pvc-334290b5-8223-41f5-adb7-37a6ecc51482\") pod \"test-pod-1\" (UID: \"a78e3062-0988-4664-8936-3ed7178096ec\") " pod="default/test-pod-1" Nov 1 01:04:04.672343 kubelet[1862]: E1101 01:04:04.672291 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:04.892278 kernel: FS-Cache: Loaded Nov 1 01:04:04.992738 kernel: RPC: Registered named UNIX socket transport module. Nov 1 01:04:04.992891 kernel: RPC: Registered udp transport module. Nov 1 01:04:04.992920 kernel: RPC: Registered tcp transport module. Nov 1 01:04:04.998229 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
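The pod_startup_latency_tracker entry above is plain arithmetic over the timestamps it reports: the end-to-end duration is observedRunningTime minus podCreationTimestamp, and the SLO duration is that figure minus the image-pull window, so nearly all of the ~31 s startup of nfs-server-provisioner-0 was spent pulling the image:

    pull window  = 81.062528804 - 51.655909194 = 29.406619610 s
    E2E duration = 30.937188366 s   (01:03:59.937 - 01:03:29)
    SLO duration = 30.937188366 - 29.406619610 = 1.530568756 s

which matches the logged podStartSLOduration=1.530568756.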
Nov 1 01:04:05.216307 kernel: FS-Cache: Netfs 'nfs' registered for caching Nov 1 01:04:05.456337 kernel: NFS: Registering the id_resolver key type Nov 1 01:04:05.456482 kernel: Key type id_resolver registered Nov 1 01:04:05.456519 kernel: Key type id_legacy registered Nov 1 01:04:05.673484 kubelet[1862]: E1101 01:04:05.673378 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:05.679064 nfsidmap[3260]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.8-n-16445aab1e' Nov 1 01:04:05.692906 nfsidmap[3261]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.8-n-16445aab1e' Nov 1 01:04:05.956977 env[1433]: time="2025-11-01T01:04:05.956930291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a78e3062-0988-4664-8936-3ed7178096ec,Namespace:default,Attempt:0,}" Nov 1 01:04:06.039911 systemd-networkd[1595]: lxc6cec0640bab3: Link UP Nov 1 01:04:06.047276 kernel: eth0: renamed from tmp9e7ca Nov 1 01:04:06.057153 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:04:06.057300 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6cec0640bab3: link becomes ready Nov 1 01:04:06.057350 systemd-networkd[1595]: lxc6cec0640bab3: Gained carrier Nov 1 01:04:06.239051 env[1433]: time="2025-11-01T01:04:06.238698356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:04:06.239921 env[1433]: time="2025-11-01T01:04:06.238740956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:04:06.239921 env[1433]: time="2025-11-01T01:04:06.238754856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:04:06.239921 env[1433]: time="2025-11-01T01:04:06.238896756Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e7ca5cf434fc7c9469af6e07ed0985e4485a5c35cd33a608d6c4c8487f39873 pid=3289 runtime=io.containerd.runc.v2 Nov 1 01:04:06.252856 systemd[1]: Started cri-containerd-9e7ca5cf434fc7c9469af6e07ed0985e4485a5c35cd33a608d6c4c8487f39873.scope. 
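The nfsidmap warnings above come from mounting the NFS volume for test-pod-1: the server presents owners as root@nfs-server-provisioner.default.svc.cluster.local, the node's NFSv4 idmapping domain is '3.8-n-16445aab1e', and names that do not map into the local domain are typically squashed to the anonymous user. That appears harmless here, since the test pod only needs the mount to succeed; if matching ownership mattered, the usual knob is the Domain key in /etc/idmapd.conf on the client, roughly as follows (the value is an illustrative assumption, not taken from this system):

    [General]
    Domain = nfs-server-provisioner.default.svc.cluster.local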
Nov 1 01:04:06.293271 env[1433]: time="2025-11-01T01:04:06.293218965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a78e3062-0988-4664-8936-3ed7178096ec,Namespace:default,Attempt:0,} returns sandbox id \"9e7ca5cf434fc7c9469af6e07ed0985e4485a5c35cd33a608d6c4c8487f39873\"" Nov 1 01:04:06.295028 env[1433]: time="2025-11-01T01:04:06.294998268Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Nov 1 01:04:06.629121 env[1433]: time="2025-11-01T01:04:06.628961437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:04:06.637052 env[1433]: time="2025-11-01T01:04:06.636999153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:04:06.642337 env[1433]: time="2025-11-01T01:04:06.642298063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:04:06.646476 env[1433]: time="2025-11-01T01:04:06.646436772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:0537df20ac7c5485a0f6b7bfb8e3fbbc8714fce070bab2a6344e5cadfba58d90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:04:06.647088 env[1433]: time="2025-11-01T01:04:06.647055073Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:8d14817f00613fe76ef7459f977ad93e7b71a3948346b7ac4d50e35f3dd518e9\"" Nov 1 01:04:06.659686 env[1433]: time="2025-11-01T01:04:06.659646098Z" level=info msg="CreateContainer within sandbox \"9e7ca5cf434fc7c9469af6e07ed0985e4485a5c35cd33a608d6c4c8487f39873\" for container &ContainerMetadata{Name:test,Attempt:0,}" Nov 1 01:04:06.674552 kubelet[1862]: E1101 01:04:06.674513 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:06.715449 env[1433]: time="2025-11-01T01:04:06.715399810Z" level=info msg="CreateContainer within sandbox \"9e7ca5cf434fc7c9469af6e07ed0985e4485a5c35cd33a608d6c4c8487f39873\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"576950e2f9a941539ae9cc1fe2f1c8012c44c92fd7e76580b7aee7d5488b3251\"" Nov 1 01:04:06.715838 env[1433]: time="2025-11-01T01:04:06.715809510Z" level=info msg="StartContainer for \"576950e2f9a941539ae9cc1fe2f1c8012c44c92fd7e76580b7aee7d5488b3251\"" Nov 1 01:04:06.742337 systemd[1]: Started cri-containerd-576950e2f9a941539ae9cc1fe2f1c8012c44c92fd7e76580b7aee7d5488b3251.scope. 
Nov 1 01:04:06.772698 env[1433]: time="2025-11-01T01:04:06.772642424Z" level=info msg="StartContainer for \"576950e2f9a941539ae9cc1fe2f1c8012c44c92fd7e76580b7aee7d5488b3251\" returns successfully" Nov 1 01:04:06.950746 kubelet[1862]: I1101 01:04:06.950600 1862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=36.596540572 podStartE2EDuration="36.95058618s" podCreationTimestamp="2025-11-01 01:03:30 +0000 UTC" firstStartedPulling="2025-11-01 01:04:06.294315067 +0000 UTC m=+88.274702614" lastFinishedPulling="2025-11-01 01:04:06.648360675 +0000 UTC m=+88.628748222" observedRunningTime="2025-11-01 01:04:06.95031158 +0000 UTC m=+88.930699127" watchObservedRunningTime="2025-11-01 01:04:06.95058618 +0000 UTC m=+88.930973727" Nov 1 01:04:07.675347 kubelet[1862]: E1101 01:04:07.675281 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:08.081417 systemd-networkd[1595]: lxc6cec0640bab3: Gained IPv6LL Nov 1 01:04:08.676029 kubelet[1862]: E1101 01:04:08.675952 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:09.676279 kubelet[1862]: E1101 01:04:09.676209 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:10.676508 kubelet[1862]: E1101 01:04:10.676441 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:11.016982 systemd[1]: run-containerd-runc-k8s.io-307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5-runc.c61BxK.mount: Deactivated successfully. Nov 1 01:04:11.033216 env[1433]: time="2025-11-01T01:04:11.033147264Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:04:11.037692 env[1433]: time="2025-11-01T01:04:11.037654473Z" level=info msg="StopContainer for \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\" with timeout 2 (s)" Nov 1 01:04:11.037934 env[1433]: time="2025-11-01T01:04:11.037902873Z" level=info msg="Stop container \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\" with signal terminated" Nov 1 01:04:11.045629 systemd-networkd[1595]: lxc_health: Link DOWN Nov 1 01:04:11.045639 systemd-networkd[1595]: lxc_health: Lost carrier Nov 1 01:04:11.069659 systemd[1]: cri-containerd-307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5.scope: Deactivated successfully. Nov 1 01:04:11.069956 systemd[1]: cri-containerd-307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5.scope: Consumed 6.388s CPU time. Nov 1 01:04:11.090818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5-rootfs.mount: Deactivated successfully. 
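The failed-to-reload-cni message above marks the start of the Cilium teardown: once /etc/cni/net.d/05-cilium.conf is removed there is no network config left for containerd's CRI plugin, so the cilium-agent container is stopped with SIGTERM and a 2 s timeout, and the kubelet's "Container runtime network not ready" condition shows up a little further down. For orientation, a Cilium CNI conf of this generation is normally a single-plugin file roughly like the following (illustrative contents, not recovered from the node):

    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "type": "cilium-cni"
    }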
Nov 1 01:04:11.677514 kubelet[1862]: E1101 01:04:11.677449 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:12.677624 kubelet[1862]: E1101 01:04:12.677573 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:13.045348 env[1433]: time="2025-11-01T01:04:13.045231089Z" level=info msg="Kill container \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\"" Nov 1 01:04:13.065938 env[1433]: time="2025-11-01T01:04:13.065878827Z" level=info msg="shim disconnected" id=307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5 Nov 1 01:04:13.065938 env[1433]: time="2025-11-01T01:04:13.065940927Z" level=warning msg="cleaning up after shim disconnected" id=307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5 namespace=k8s.io Nov 1 01:04:13.066192 env[1433]: time="2025-11-01T01:04:13.065952827Z" level=info msg="cleaning up dead shim" Nov 1 01:04:13.074321 env[1433]: time="2025-11-01T01:04:13.074268442Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3418 runtime=io.containerd.runc.v2\n" Nov 1 01:04:13.086835 env[1433]: time="2025-11-01T01:04:13.086785065Z" level=info msg="StopContainer for \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\" returns successfully" Nov 1 01:04:13.087513 env[1433]: time="2025-11-01T01:04:13.087480566Z" level=info msg="StopPodSandbox for \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\"" Nov 1 01:04:13.087636 env[1433]: time="2025-11-01T01:04:13.087552666Z" level=info msg="Container to stop \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 01:04:13.087636 env[1433]: time="2025-11-01T01:04:13.087573866Z" level=info msg="Container to stop \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 01:04:13.087636 env[1433]: time="2025-11-01T01:04:13.087589966Z" level=info msg="Container to stop \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 01:04:13.087636 env[1433]: time="2025-11-01T01:04:13.087604466Z" level=info msg="Container to stop \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 01:04:13.087636 env[1433]: time="2025-11-01T01:04:13.087617266Z" level=info msg="Container to stop \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 01:04:13.090161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991-shm.mount: Deactivated successfully. Nov 1 01:04:13.097726 systemd[1]: cri-containerd-63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991.scope: Deactivated successfully. Nov 1 01:04:13.116775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991-rootfs.mount: Deactivated successfully. 
Nov 1 01:04:13.126795 env[1433]: time="2025-11-01T01:04:13.126739437Z" level=info msg="shim disconnected" id=63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991 Nov 1 01:04:13.126949 env[1433]: time="2025-11-01T01:04:13.126799538Z" level=warning msg="cleaning up after shim disconnected" id=63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991 namespace=k8s.io Nov 1 01:04:13.126949 env[1433]: time="2025-11-01T01:04:13.126814438Z" level=info msg="cleaning up dead shim" Nov 1 01:04:13.134994 env[1433]: time="2025-11-01T01:04:13.134952852Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3449 runtime=io.containerd.runc.v2\n" Nov 1 01:04:13.135398 env[1433]: time="2025-11-01T01:04:13.135366253Z" level=info msg="TearDown network for sandbox \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" successfully" Nov 1 01:04:13.135398 env[1433]: time="2025-11-01T01:04:13.135395153Z" level=info msg="StopPodSandbox for \"63424a60b4ee14e0fd2becec0f536879cf6728116dc06d8cd63c21dbf1140991\" returns successfully" Nov 1 01:04:13.272497 kubelet[1862]: I1101 01:04:13.272445 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-lib-modules\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.272739 kubelet[1862]: I1101 01:04:13.272569 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.272739 kubelet[1862]: I1101 01:04:13.272730 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3ccde92-f282-4223-900a-33fd7fdb2f34-clustermesh-secrets\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.272954 kubelet[1862]: I1101 01:04:13.272932 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-hostproc\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.273088 kubelet[1862]: I1101 01:04:13.273069 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-bpf-maps\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.273208 kubelet[1862]: I1101 01:04:13.273183 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cni-path\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.273366 kubelet[1862]: I1101 01:04:13.273346 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-host-proc-sys-net\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.273479 kubelet[1862]: I1101 01:04:13.273462 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-etc-cni-netd\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.273601 kubelet[1862]: I1101 01:04:13.273584 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3ccde92-f282-4223-900a-33fd7fdb2f34-hubble-tls\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.273714 kubelet[1862]: I1101 01:04:13.273685 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cni-path" (OuterVolumeSpecName: "cni-path") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.273789 kubelet[1862]: I1101 01:04:13.273750 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-hostproc" (OuterVolumeSpecName: "hostproc") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.273789 kubelet[1862]: I1101 01:04:13.273778 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.273908 kubelet[1862]: I1101 01:04:13.273832 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.273908 kubelet[1862]: I1101 01:04:13.273864 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.274052 kubelet[1862]: I1101 01:04:13.273704 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-config-path\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.274176 kubelet[1862]: I1101 01:04:13.274155 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-xtables-lock\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.274312 kubelet[1862]: I1101 01:04:13.274294 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-cgroup\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.277301 kubelet[1862]: I1101 01:04:13.277277 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-host-proc-sys-kernel\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.277492 kubelet[1862]: I1101 01:04:13.277477 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf2nv\" (UniqueName: \"kubernetes.io/projected/a3ccde92-f282-4223-900a-33fd7fdb2f34-kube-api-access-qf2nv\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.280133 systemd[1]: var-lib-kubelet-pods-a3ccde92\x2df282\x2d4223\x2d900a\x2d33fd7fdb2f34-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 1 01:04:13.281824 kubelet[1862]: I1101 01:04:13.281800 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-run\") pod \"a3ccde92-f282-4223-900a-33fd7fdb2f34\" (UID: \"a3ccde92-f282-4223-900a-33fd7fdb2f34\") " Nov 1 01:04:13.281916 kubelet[1862]: I1101 01:04:13.281864 1862 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-lib-modules\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.281916 kubelet[1862]: I1101 01:04:13.281879 1862 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-hostproc\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.281916 kubelet[1862]: I1101 01:04:13.281890 1862 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-bpf-maps\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.281916 kubelet[1862]: I1101 01:04:13.281900 1862 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cni-path\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.281916 kubelet[1862]: I1101 01:04:13.281910 1862 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-host-proc-sys-net\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.282122 kubelet[1862]: I1101 01:04:13.281924 1862 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-etc-cni-netd\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.282122 kubelet[1862]: I1101 01:04:13.274433 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.282122 kubelet[1862]: I1101 01:04:13.277182 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:04:13.282122 kubelet[1862]: I1101 01:04:13.277221 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.282122 kubelet[1862]: I1101 01:04:13.277443 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.282532 kubelet[1862]: I1101 01:04:13.281961 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:13.282532 kubelet[1862]: I1101 01:04:13.282078 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ccde92-f282-4223-900a-33fd7fdb2f34-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:04:13.287285 systemd[1]: var-lib-kubelet-pods-a3ccde92\x2df282\x2d4223\x2d900a\x2d33fd7fdb2f34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqf2nv.mount: Deactivated successfully. Nov 1 01:04:13.290102 systemd[1]: var-lib-kubelet-pods-a3ccde92\x2df282\x2d4223\x2d900a\x2d33fd7fdb2f34-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 01:04:13.291284 kubelet[1862]: I1101 01:04:13.290965 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ccde92-f282-4223-900a-33fd7fdb2f34-kube-api-access-qf2nv" (OuterVolumeSpecName: "kube-api-access-qf2nv") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "kube-api-access-qf2nv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:04:13.291284 kubelet[1862]: I1101 01:04:13.291081 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ccde92-f282-4223-900a-33fd7fdb2f34-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a3ccde92-f282-4223-900a-33fd7fdb2f34" (UID: "a3ccde92-f282-4223-900a-33fd7fdb2f34"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:04:13.382871 kubelet[1862]: I1101 01:04:13.382688 1862 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-cgroup\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.382871 kubelet[1862]: I1101 01:04:13.382755 1862 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-host-proc-sys-kernel\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.382871 kubelet[1862]: I1101 01:04:13.382771 1862 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qf2nv\" (UniqueName: \"kubernetes.io/projected/a3ccde92-f282-4223-900a-33fd7fdb2f34-kube-api-access-qf2nv\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.382871 kubelet[1862]: I1101 01:04:13.382790 1862 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-run\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.382871 kubelet[1862]: I1101 01:04:13.382804 1862 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3ccde92-f282-4223-900a-33fd7fdb2f34-clustermesh-secrets\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.382871 kubelet[1862]: I1101 01:04:13.382818 1862 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3ccde92-f282-4223-900a-33fd7fdb2f34-hubble-tls\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.382871 kubelet[1862]: I1101 01:04:13.382830 1862 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3ccde92-f282-4223-900a-33fd7fdb2f34-cilium-config-path\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.382871 kubelet[1862]: I1101 01:04:13.382843 1862 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3ccde92-f282-4223-900a-33fd7fdb2f34-xtables-lock\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:13.678065 kubelet[1862]: E1101 01:04:13.677914 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:13.733954 kubelet[1862]: E1101 01:04:13.733895 1862 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 01:04:13.964009 kubelet[1862]: I1101 01:04:13.963855 1862 scope.go:117] "RemoveContainer" containerID="307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5" Nov 1 01:04:13.966607 env[1433]: time="2025-11-01T01:04:13.966553563Z" level=info msg="RemoveContainer for \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\"" Nov 1 01:04:13.969094 systemd[1]: Removed slice kubepods-burstable-poda3ccde92_f282_4223_900a_33fd7fdb2f34.slice. Nov 1 01:04:13.969213 systemd[1]: kubepods-burstable-poda3ccde92_f282_4223_900a_33fd7fdb2f34.slice: Consumed 6.497s CPU time. 
Nov 1 01:04:13.975985 env[1433]: time="2025-11-01T01:04:13.975948780Z" level=info msg="RemoveContainer for \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\" returns successfully" Nov 1 01:04:13.976201 kubelet[1862]: I1101 01:04:13.976177 1862 scope.go:117] "RemoveContainer" containerID="0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d" Nov 1 01:04:13.977178 env[1433]: time="2025-11-01T01:04:13.977147182Z" level=info msg="RemoveContainer for \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\"" Nov 1 01:04:13.984135 env[1433]: time="2025-11-01T01:04:13.984099594Z" level=info msg="RemoveContainer for \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\" returns successfully" Nov 1 01:04:13.984299 kubelet[1862]: I1101 01:04:13.984277 1862 scope.go:117] "RemoveContainer" containerID="b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb" Nov 1 01:04:13.985306 env[1433]: time="2025-11-01T01:04:13.985261897Z" level=info msg="RemoveContainer for \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\"" Nov 1 01:04:13.994503 env[1433]: time="2025-11-01T01:04:13.994466013Z" level=info msg="RemoveContainer for \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\" returns successfully" Nov 1 01:04:13.994737 kubelet[1862]: I1101 01:04:13.994651 1862 scope.go:117] "RemoveContainer" containerID="307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723" Nov 1 01:04:13.995671 env[1433]: time="2025-11-01T01:04:13.995643815Z" level=info msg="RemoveContainer for \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\"" Nov 1 01:04:14.006948 env[1433]: time="2025-11-01T01:04:14.006912036Z" level=info msg="RemoveContainer for \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\" returns successfully" Nov 1 01:04:14.007143 kubelet[1862]: I1101 01:04:14.007121 1862 scope.go:117] "RemoveContainer" containerID="07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533" Nov 1 01:04:14.008322 env[1433]: time="2025-11-01T01:04:14.008287838Z" level=info msg="RemoveContainer for \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\"" Nov 1 01:04:14.017579 env[1433]: time="2025-11-01T01:04:14.017543255Z" level=info msg="RemoveContainer for \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\" returns successfully" Nov 1 01:04:14.017742 kubelet[1862]: I1101 01:04:14.017720 1862 scope.go:117] "RemoveContainer" containerID="307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5" Nov 1 01:04:14.018000 env[1433]: time="2025-11-01T01:04:14.017928555Z" level=error msg="ContainerStatus for \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\": not found" Nov 1 01:04:14.018151 kubelet[1862]: E1101 01:04:14.018125 1862 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\": not found" containerID="307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5" Nov 1 01:04:14.018234 kubelet[1862]: I1101 01:04:14.018173 1862 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5"} err="failed to get container 
status \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5\": not found" Nov 1 01:04:14.018234 kubelet[1862]: I1101 01:04:14.018219 1862 scope.go:117] "RemoveContainer" containerID="0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d" Nov 1 01:04:14.018548 env[1433]: time="2025-11-01T01:04:14.018496356Z" level=error msg="ContainerStatus for \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\": not found" Nov 1 01:04:14.018674 kubelet[1862]: E1101 01:04:14.018652 1862 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\": not found" containerID="0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d" Nov 1 01:04:14.018752 kubelet[1862]: I1101 01:04:14.018699 1862 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d"} err="failed to get container status \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f5945601608320d6e56d266de48bc1da50f3e90b7efadfd18b45ba9e5e2491d\": not found" Nov 1 01:04:14.018752 kubelet[1862]: I1101 01:04:14.018721 1862 scope.go:117] "RemoveContainer" containerID="b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb" Nov 1 01:04:14.018966 env[1433]: time="2025-11-01T01:04:14.018917757Z" level=error msg="ContainerStatus for \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\": not found" Nov 1 01:04:14.019090 kubelet[1862]: E1101 01:04:14.019065 1862 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\": not found" containerID="b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb" Nov 1 01:04:14.019155 kubelet[1862]: I1101 01:04:14.019099 1862 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb"} err="failed to get container status \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6624a6247a589764a2db05644fb0125a32f5c79b81d629d69fce455bd4f70eb\": not found" Nov 1 01:04:14.019155 kubelet[1862]: I1101 01:04:14.019123 1862 scope.go:117] "RemoveContainer" containerID="307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723" Nov 1 01:04:14.019377 env[1433]: time="2025-11-01T01:04:14.019326958Z" level=error msg="ContainerStatus for \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\": not found" Nov 1 
01:04:14.019489 kubelet[1862]: E1101 01:04:14.019466 1862 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\": not found" containerID="307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723" Nov 1 01:04:14.019563 kubelet[1862]: I1101 01:04:14.019492 1862 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723"} err="failed to get container status \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\": rpc error: code = NotFound desc = an error occurred when try to find container \"307031d680e368f68d7fba4ffed1bfcb98a95be0f2228794bc4b06f91cbf4723\": not found" Nov 1 01:04:14.019563 kubelet[1862]: I1101 01:04:14.019510 1862 scope.go:117] "RemoveContainer" containerID="07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533" Nov 1 01:04:14.019739 env[1433]: time="2025-11-01T01:04:14.019693159Z" level=error msg="ContainerStatus for \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\": not found" Nov 1 01:04:14.019848 kubelet[1862]: E1101 01:04:14.019826 1862 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\": not found" containerID="07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533" Nov 1 01:04:14.019922 kubelet[1862]: I1101 01:04:14.019850 1862 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533"} err="failed to get container status \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\": rpc error: code = NotFound desc = an error occurred when try to find container \"07de2529e2ab99fbb8a350ff7f4e995cbb0fd571617d9e05346eea89ff471533\": not found" Nov 1 01:04:14.551294 systemd[1]: Created slice kubepods-besteffort-podde1f0f1d_b65d_4c09_bb5b_bc2de520a167.slice. Nov 1 01:04:14.567634 systemd[1]: Created slice kubepods-burstable-pod816edf8f_32d7_4b61_94e1_337c89fae564.slice. 
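The ContainerStatus "not found" errors above are the expected tail of container removal: the kubelet deletes the five Cilium containers one by one, its pod_container_deletor then re-queries each ID, and the runtime correctly answers NotFound for IDs it has already forgotten. Querying one of those IDs by hand would fail the same way (illustrative only):

    crictl inspect 307c5b9284f5d0637d2ec6fe543b888fdc3a33ee958f50da0b36417d85ec70c5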
Nov 1 01:04:14.682985 kubelet[1862]: E1101 01:04:14.682944 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:14.689932 kubelet[1862]: I1101 01:04:14.689896 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-xtables-lock\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690067 kubelet[1862]: I1101 01:04:14.689935 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-host-proc-sys-net\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690067 kubelet[1862]: I1101 01:04:14.689960 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-cgroup\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690067 kubelet[1862]: I1101 01:04:14.689978 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cni-path\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690067 kubelet[1862]: I1101 01:04:14.689999 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-etc-cni-netd\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690067 kubelet[1862]: I1101 01:04:14.690018 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-config-path\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690067 kubelet[1862]: I1101 01:04:14.690038 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-bpf-maps\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690359 kubelet[1862]: I1101 01:04:14.690058 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-hostproc\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690359 kubelet[1862]: I1101 01:04:14.690078 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-lib-modules\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690359 kubelet[1862]: I1101 
01:04:14.690098 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-host-proc-sys-kernel\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690359 kubelet[1862]: I1101 01:04:14.690122 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkd4t\" (UniqueName: \"kubernetes.io/projected/de1f0f1d-b65d-4c09-bb5b-bc2de520a167-kube-api-access-zkd4t\") pod \"cilium-operator-6f9c7c5859-mpz48\" (UID: \"de1f0f1d-b65d-4c09-bb5b-bc2de520a167\") " pod="kube-system/cilium-operator-6f9c7c5859-mpz48" Nov 1 01:04:14.690359 kubelet[1862]: I1101 01:04:14.690146 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de1f0f1d-b65d-4c09-bb5b-bc2de520a167-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-mpz48\" (UID: \"de1f0f1d-b65d-4c09-bb5b-bc2de520a167\") " pod="kube-system/cilium-operator-6f9c7c5859-mpz48" Nov 1 01:04:14.690533 kubelet[1862]: I1101 01:04:14.690169 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-ipsec-secrets\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690533 kubelet[1862]: I1101 01:04:14.690190 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/816edf8f-32d7-4b61-94e1-337c89fae564-hubble-tls\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690533 kubelet[1862]: I1101 01:04:14.690211 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6ht6\" (UniqueName: \"kubernetes.io/projected/816edf8f-32d7-4b61-94e1-337c89fae564-kube-api-access-z6ht6\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690533 kubelet[1862]: I1101 01:04:14.690235 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-run\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.690533 kubelet[1862]: I1101 01:04:14.690268 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/816edf8f-32d7-4b61-94e1-337c89fae564-clustermesh-secrets\") pod \"cilium-t4kl7\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " pod="kube-system/cilium-t4kl7" Nov 1 01:04:14.714544 kubelet[1862]: I1101 01:04:14.714491 1862 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ccde92-f282-4223-900a-33fd7fdb2f34" path="/var/lib/kubelet/pods/a3ccde92-f282-4223-900a-33fd7fdb2f34/volumes" Nov 1 01:04:14.861971 env[1433]: time="2025-11-01T01:04:14.861832768Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-mpz48,Uid:de1f0f1d-b65d-4c09-bb5b-bc2de520a167,Namespace:kube-system,Attempt:0,}" Nov 1 01:04:14.892471 env[1433]: time="2025-11-01T01:04:14.892419323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t4kl7,Uid:816edf8f-32d7-4b61-94e1-337c89fae564,Namespace:kube-system,Attempt:0,}" Nov 1 01:04:14.923093 env[1433]: time="2025-11-01T01:04:14.923021678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:04:14.923386 env[1433]: time="2025-11-01T01:04:14.923308479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:04:14.923386 env[1433]: time="2025-11-01T01:04:14.923326979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:04:14.923866 env[1433]: time="2025-11-01T01:04:14.923796279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1c61b1b603041e1ec72d8b83813ec2a782ba28c26a9251f8295d30ac4aeefd9 pid=3477 runtime=io.containerd.runc.v2 Nov 1 01:04:14.940658 systemd[1]: Started cri-containerd-a1c61b1b603041e1ec72d8b83813ec2a782ba28c26a9251f8295d30ac4aeefd9.scope. Nov 1 01:04:14.950866 env[1433]: time="2025-11-01T01:04:14.950382827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:04:14.950866 env[1433]: time="2025-11-01T01:04:14.950429127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:04:14.950866 env[1433]: time="2025-11-01T01:04:14.950444127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:04:14.950866 env[1433]: time="2025-11-01T01:04:14.950756928Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3 pid=3503 runtime=io.containerd.runc.v2 Nov 1 01:04:14.973615 systemd[1]: Started cri-containerd-6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3.scope. 
Nov 1 01:04:15.009121 env[1433]: time="2025-11-01T01:04:15.009074432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t4kl7,Uid:816edf8f-32d7-4b61-94e1-337c89fae564,Namespace:kube-system,Attempt:0,} returns sandbox id \"6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3\"" Nov 1 01:04:15.016576 env[1433]: time="2025-11-01T01:04:15.016008644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-mpz48,Uid:de1f0f1d-b65d-4c09-bb5b-bc2de520a167,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1c61b1b603041e1ec72d8b83813ec2a782ba28c26a9251f8295d30ac4aeefd9\"" Nov 1 01:04:15.017675 env[1433]: time="2025-11-01T01:04:15.017643447Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 01:04:15.018401 env[1433]: time="2025-11-01T01:04:15.018375749Z" level=info msg="CreateContainer within sandbox \"6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 01:04:15.059663 env[1433]: time="2025-11-01T01:04:15.059604722Z" level=info msg="CreateContainer within sandbox \"6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c\"" Nov 1 01:04:15.060332 env[1433]: time="2025-11-01T01:04:15.060295223Z" level=info msg="StartContainer for \"091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c\"" Nov 1 01:04:15.077127 systemd[1]: Started cri-containerd-091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c.scope. Nov 1 01:04:15.088902 systemd[1]: cri-containerd-091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c.scope: Deactivated successfully. 
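Note that the mount-cgroup init container's scope above is deactivated almost immediately after being started; the entries that follow show why, with runc failing on "write /proc/self/attr/keycreate: invalid argument". That write is runc applying an SELinux keyring-creation label to the new process, and EINVAL there generally indicates the requested label is not usable under the kernel's loaded policy, so the container never gets past init and the kubelet records a RunContainerError for cilium-t4kl7. A first check on such a node is usually the SELinux state (illustrative, not run here):

    getenforce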
Nov 1 01:04:15.114198 env[1433]: time="2025-11-01T01:04:15.113149716Z" level=info msg="shim disconnected" id=091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c Nov 1 01:04:15.114198 env[1433]: time="2025-11-01T01:04:15.113203116Z" level=warning msg="cleaning up after shim disconnected" id=091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c namespace=k8s.io Nov 1 01:04:15.114198 env[1433]: time="2025-11-01T01:04:15.113214716Z" level=info msg="cleaning up dead shim" Nov 1 01:04:15.121977 env[1433]: time="2025-11-01T01:04:15.121926832Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3576 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T01:04:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 01:04:15.122333 env[1433]: time="2025-11-01T01:04:15.122199832Z" level=error msg="copy shim log" error="read /proc/self/fd/81: file already closed" Nov 1 01:04:15.122555 env[1433]: time="2025-11-01T01:04:15.122503733Z" level=error msg="Failed to pipe stdout of container \"091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c\"" error="reading from a closed fifo" Nov 1 01:04:15.122715 env[1433]: time="2025-11-01T01:04:15.122677133Z" level=error msg="Failed to pipe stderr of container \"091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c\"" error="reading from a closed fifo" Nov 1 01:04:15.127730 env[1433]: time="2025-11-01T01:04:15.127671242Z" level=error msg="StartContainer for \"091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Nov 1 01:04:15.127988 kubelet[1862]: E1101 01:04:15.127939 1862 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c" Nov 1 01:04:15.128100 kubelet[1862]: E1101 01:04:15.128068 1862 kuberuntime_manager.go:1449] "Unhandled Error" err="init container mount-cgroup start failed in pod cilium-t4kl7_kube-system(816edf8f-32d7-4b61-94e1-337c89fae564): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" logger="UnhandledError" Nov 1 01:04:15.128153 kubelet[1862]: E1101 01:04:15.128117 1862 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t4kl7" podUID="816edf8f-32d7-4b61-94e1-337c89fae564" Nov 1 01:04:15.684103 
kubelet[1862]: E1101 01:04:15.684020 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:15.973198 env[1433]: time="2025-11-01T01:04:15.973142539Z" level=info msg="StopPodSandbox for \"6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3\"" Nov 1 01:04:15.976932 env[1433]: time="2025-11-01T01:04:15.973227239Z" level=info msg="Container to stop \"091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 01:04:15.976651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3-shm.mount: Deactivated successfully. Nov 1 01:04:15.985374 systemd[1]: cri-containerd-6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3.scope: Deactivated successfully. Nov 1 01:04:16.008580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3-rootfs.mount: Deactivated successfully. Nov 1 01:04:16.021428 env[1433]: time="2025-11-01T01:04:16.021376124Z" level=info msg="shim disconnected" id=6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3 Nov 1 01:04:16.021694 env[1433]: time="2025-11-01T01:04:16.021669824Z" level=warning msg="cleaning up after shim disconnected" id=6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3 namespace=k8s.io Nov 1 01:04:16.021784 env[1433]: time="2025-11-01T01:04:16.021767124Z" level=info msg="cleaning up dead shim" Nov 1 01:04:16.029870 env[1433]: time="2025-11-01T01:04:16.029832338Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3607 runtime=io.containerd.runc.v2\n" Nov 1 01:04:16.030187 env[1433]: time="2025-11-01T01:04:16.030152739Z" level=info msg="TearDown network for sandbox \"6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3\" successfully" Nov 1 01:04:16.030298 env[1433]: time="2025-11-01T01:04:16.030184839Z" level=info msg="StopPodSandbox for \"6658167865403cbba3c6567b1e52562584f4076d87095cb685249ef4a36ae2d3\" returns successfully" Nov 1 01:04:16.100169 kubelet[1862]: I1101 01:04:16.100121 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-host-proc-sys-kernel\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100169 kubelet[1862]: I1101 01:04:16.100173 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/816edf8f-32d7-4b61-94e1-337c89fae564-clustermesh-secrets\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100443 kubelet[1862]: I1101 01:04:16.100201 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-etc-cni-netd\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100443 kubelet[1862]: I1101 01:04:16.100221 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-hostproc\") pod 
\"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100443 kubelet[1862]: I1101 01:04:16.100274 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6ht6\" (UniqueName: \"kubernetes.io/projected/816edf8f-32d7-4b61-94e1-337c89fae564-kube-api-access-z6ht6\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100443 kubelet[1862]: I1101 01:04:16.100295 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-bpf-maps\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100443 kubelet[1862]: I1101 01:04:16.100327 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-run\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100443 kubelet[1862]: I1101 01:04:16.100350 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-xtables-lock\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100691 kubelet[1862]: I1101 01:04:16.100370 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-host-proc-sys-net\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100691 kubelet[1862]: I1101 01:04:16.100388 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cni-path\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100691 kubelet[1862]: I1101 01:04:16.100415 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-lib-modules\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100691 kubelet[1862]: I1101 01:04:16.100439 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-ipsec-secrets\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100691 kubelet[1862]: I1101 01:04:16.100460 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/816edf8f-32d7-4b61-94e1-337c89fae564-hubble-tls\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100691 kubelet[1862]: I1101 01:04:16.100480 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-cgroup\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: 
\"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.100957 kubelet[1862]: I1101 01:04:16.100508 1862 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-config-path\") pod \"816edf8f-32d7-4b61-94e1-337c89fae564\" (UID: \"816edf8f-32d7-4b61-94e1-337c89fae564\") " Nov 1 01:04:16.101505 kubelet[1862]: I1101 01:04:16.101050 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.101505 kubelet[1862]: I1101 01:04:16.101120 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.102127 kubelet[1862]: I1101 01:04:16.102104 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.102275 kubelet[1862]: I1101 01:04:16.102257 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-hostproc" (OuterVolumeSpecName: "hostproc") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.103315 kubelet[1862]: I1101 01:04:16.103288 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:04:16.103407 kubelet[1862]: I1101 01:04:16.103335 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.103407 kubelet[1862]: I1101 01:04:16.103358 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.103407 kubelet[1862]: I1101 01:04:16.103377 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cni-path" (OuterVolumeSpecName: "cni-path") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.103407 kubelet[1862]: I1101 01:04:16.103396 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.105032 kubelet[1862]: I1101 01:04:16.105009 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.105202 kubelet[1862]: I1101 01:04:16.105184 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 01:04:16.109340 systemd[1]: var-lib-kubelet-pods-816edf8f\x2d32d7\x2d4b61\x2d94e1\x2d337c89fae564-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 01:04:16.110953 kubelet[1862]: I1101 01:04:16.110921 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:04:16.113842 systemd[1]: var-lib-kubelet-pods-816edf8f\x2d32d7\x2d4b61\x2d94e1\x2d337c89fae564-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 01:04:16.116393 kubelet[1862]: I1101 01:04:16.116362 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/816edf8f-32d7-4b61-94e1-337c89fae564-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:04:16.118873 kubelet[1862]: I1101 01:04:16.118846 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816edf8f-32d7-4b61-94e1-337c89fae564-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:04:16.118958 kubelet[1862]: I1101 01:04:16.118878 1862 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816edf8f-32d7-4b61-94e1-337c89fae564-kube-api-access-z6ht6" (OuterVolumeSpecName: "kube-api-access-z6ht6") pod "816edf8f-32d7-4b61-94e1-337c89fae564" (UID: "816edf8f-32d7-4b61-94e1-337c89fae564"). InnerVolumeSpecName "kube-api-access-z6ht6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:04:16.201452 kubelet[1862]: I1101 01:04:16.201385 1862 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-config-path\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201452 kubelet[1862]: I1101 01:04:16.201434 1862 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-host-proc-sys-kernel\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201452 kubelet[1862]: I1101 01:04:16.201450 1862 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/816edf8f-32d7-4b61-94e1-337c89fae564-clustermesh-secrets\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201452 kubelet[1862]: I1101 01:04:16.201462 1862 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-etc-cni-netd\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201800 kubelet[1862]: I1101 01:04:16.201475 1862 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-hostproc\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201800 kubelet[1862]: I1101 01:04:16.201488 1862 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6ht6\" (UniqueName: \"kubernetes.io/projected/816edf8f-32d7-4b61-94e1-337c89fae564-kube-api-access-z6ht6\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201800 kubelet[1862]: I1101 01:04:16.201499 1862 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-bpf-maps\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201800 kubelet[1862]: I1101 01:04:16.201515 1862 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-run\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201800 kubelet[1862]: I1101 01:04:16.201528 1862 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-xtables-lock\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201800 kubelet[1862]: I1101 01:04:16.201541 1862 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-host-proc-sys-net\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201800 kubelet[1862]: I1101 01:04:16.201552 1862 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cni-path\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.201800 kubelet[1862]: I1101 01:04:16.201564 1862 
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-lib-modules\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.202053 kubelet[1862]: I1101 01:04:16.201578 1862 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-ipsec-secrets\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.202053 kubelet[1862]: I1101 01:04:16.201590 1862 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/816edf8f-32d7-4b61-94e1-337c89fae564-hubble-tls\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.202053 kubelet[1862]: I1101 01:04:16.201602 1862 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/816edf8f-32d7-4b61-94e1-337c89fae564-cilium-cgroup\") on node \"10.200.4.9\" DevicePath \"\"" Nov 1 01:04:16.684880 kubelet[1862]: E1101 01:04:16.684835 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:16.717678 systemd[1]: Removed slice kubepods-burstable-pod816edf8f_32d7_4b61_94e1_337c89fae564.slice. Nov 1 01:04:16.800989 systemd[1]: var-lib-kubelet-pods-816edf8f\x2d32d7\x2d4b61\x2d94e1\x2d337c89fae564-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6ht6.mount: Deactivated successfully. Nov 1 01:04:16.801106 systemd[1]: var-lib-kubelet-pods-816edf8f\x2d32d7\x2d4b61\x2d94e1\x2d337c89fae564-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 01:04:16.981946 kubelet[1862]: I1101 01:04:16.981900 1862 scope.go:117] "RemoveContainer" containerID="091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c" Nov 1 01:04:16.988571 env[1433]: time="2025-11-01T01:04:16.988523914Z" level=info msg="RemoveContainer for \"091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c\"" Nov 1 01:04:17.002211 env[1433]: time="2025-11-01T01:04:17.002164038Z" level=info msg="RemoveContainer for \"091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c\" returns successfully" Nov 1 01:04:17.043974 systemd[1]: Created slice kubepods-burstable-pod9d23e41e_a08f_45e1_93fa_7b6d4b6dfdda.slice. 
Nov 1 01:04:17.105787 kubelet[1862]: I1101 01:04:17.105731 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d5kn\" (UniqueName: \"kubernetes.io/projected/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-kube-api-access-8d5kn\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106057 kubelet[1862]: I1101 01:04:17.106033 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-cilium-run\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106183 kubelet[1862]: I1101 01:04:17.106168 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-cilium-config-path\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106315 kubelet[1862]: I1101 01:04:17.106296 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-xtables-lock\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106420 kubelet[1862]: I1101 01:04:17.106406 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-clustermesh-secrets\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106521 kubelet[1862]: I1101 01:04:17.106507 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-cilium-ipsec-secrets\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106633 kubelet[1862]: I1101 01:04:17.106607 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-host-proc-sys-kernel\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106734 kubelet[1862]: I1101 01:04:17.106722 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-bpf-maps\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106840 kubelet[1862]: I1101 01:04:17.106828 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-cni-path\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.106943 kubelet[1862]: I1101 01:04:17.106930 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-host-proc-sys-net\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.107045 kubelet[1862]: I1101 01:04:17.107033 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-etc-cni-netd\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.107125 kubelet[1862]: I1101 01:04:17.107113 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-hostproc\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.107214 kubelet[1862]: I1101 01:04:17.107200 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-cilium-cgroup\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.107310 kubelet[1862]: I1101 01:04:17.107295 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-lib-modules\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.107415 kubelet[1862]: I1101 01:04:17.107388 1862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda-hubble-tls\") pod \"cilium-xqm2t\" (UID: \"9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda\") " pod="kube-system/cilium-xqm2t" Nov 1 01:04:17.359981 env[1433]: time="2025-11-01T01:04:17.359857156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqm2t,Uid:9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda,Namespace:kube-system,Attempt:0,}" Nov 1 01:04:17.401264 env[1433]: time="2025-11-01T01:04:17.401186828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:04:17.401484 env[1433]: time="2025-11-01T01:04:17.401223728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:04:17.401484 env[1433]: time="2025-11-01T01:04:17.401237328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:04:17.401484 env[1433]: time="2025-11-01T01:04:17.401404428Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb pid=3636 runtime=io.containerd.runc.v2 Nov 1 01:04:17.415128 systemd[1]: Started cri-containerd-edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb.scope. 
Nov 1 01:04:17.444376 env[1433]: time="2025-11-01T01:04:17.444328402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqm2t,Uid:9d23e41e-a08f-45e1-93fa-7b6d4b6dfdda,Namespace:kube-system,Attempt:0,} returns sandbox id \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\"" Nov 1 01:04:17.452049 env[1433]: time="2025-11-01T01:04:17.452015715Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 01:04:17.471872 env[1433]: time="2025-11-01T01:04:17.471821950Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:04:17.488522 env[1433]: time="2025-11-01T01:04:17.488475978Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:04:17.497591 env[1433]: time="2025-11-01T01:04:17.497547394Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58\"" Nov 1 01:04:17.498237 env[1433]: time="2025-11-01T01:04:17.498118095Z" level=info msg="StartContainer for \"cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58\"" Nov 1 01:04:17.498953 env[1433]: time="2025-11-01T01:04:17.498921396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:04:17.499391 env[1433]: time="2025-11-01T01:04:17.499360597Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 01:04:17.507553 env[1433]: time="2025-11-01T01:04:17.507506811Z" level=info msg="CreateContainer within sandbox \"a1c61b1b603041e1ec72d8b83813ec2a782ba28c26a9251f8295d30ac4aeefd9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 01:04:17.520720 systemd[1]: Started cri-containerd-cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58.scope. Nov 1 01:04:17.551989 env[1433]: time="2025-11-01T01:04:17.551945388Z" level=info msg="StartContainer for \"cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58\" returns successfully" Nov 1 01:04:17.559590 systemd[1]: cri-containerd-cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58.scope: Deactivated successfully. 
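The CreateContainer/StartContainer pairs above, followed almost immediately by "scope: Deactivated successfully" once each short-lived init container exits, are driven by containerd's CRI plugin. The same create/start/wait lifecycle can be illustrated with the plain containerd Go client; this is a minimal sketch under the k8s.io namespace seen in these entries, not the CRI plugin's actual code path, and the image reference and container ID are placeholders:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Same namespace the log entries above report for the kubelet's containers.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Placeholder image; the CRI plugin would already have the pod's image unpacked.
        image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // Roughly what a "CreateContainer within sandbox ..." entry corresponds to.
        container, err := client.NewContainer(ctx, "example-init",
            containerd.WithNewSnapshot("example-init-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // "StartContainer" creates a task (backed by a runc v2 shim) and starts it.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }

        // When the process exits, its cgroup scope is deactivated, as in the log above.
        status := <-exitCh
        code, _, _ := status.Result()
        log.Printf("exited with status %d", code)
    }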
Nov 1 01:04:17.560563 env[1433]: time="2025-11-01T01:04:17.560514403Z" level=info msg="CreateContainer within sandbox \"a1c61b1b603041e1ec72d8b83813ec2a782ba28c26a9251f8295d30ac4aeefd9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"63b6bf68c8c08a450b85dfdb1ee6abb2f5eb3682233535bdc189faf97cc97c6d\"" Nov 1 01:04:17.561544 env[1433]: time="2025-11-01T01:04:17.561510004Z" level=info msg="StartContainer for \"63b6bf68c8c08a450b85dfdb1ee6abb2f5eb3682233535bdc189faf97cc97c6d\"" Nov 1 01:04:17.586658 systemd[1]: Started cri-containerd-63b6bf68c8c08a450b85dfdb1ee6abb2f5eb3682233535bdc189faf97cc97c6d.scope. Nov 1 01:04:18.061429 kubelet[1862]: E1101 01:04:17.685279 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:18.073924 env[1433]: time="2025-11-01T01:04:18.073872688Z" level=info msg="shim disconnected" id=cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58 Nov 1 01:04:18.074519 env[1433]: time="2025-11-01T01:04:18.074492389Z" level=warning msg="cleaning up after shim disconnected" id=cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58 namespace=k8s.io Nov 1 01:04:18.074652 env[1433]: time="2025-11-01T01:04:18.074632989Z" level=info msg="cleaning up dead shim" Nov 1 01:04:18.086269 env[1433]: time="2025-11-01T01:04:18.084048605Z" level=info msg="StartContainer for \"63b6bf68c8c08a450b85dfdb1ee6abb2f5eb3682233535bdc189faf97cc97c6d\" returns successfully" Nov 1 01:04:18.091248 env[1433]: time="2025-11-01T01:04:18.091029417Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3754 runtime=io.containerd.runc.v2\n" Nov 1 01:04:18.218515 kubelet[1862]: W1101 01:04:18.218471 1862 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod816edf8f_32d7_4b61_94e1_337c89fae564.slice/cri-containerd-091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c.scope WatchSource:0}: container "091c40f2e5954f002898d4f208944a9bdaade41799fcb94e1112a7527c43e83c" in namespace "k8s.io": not found Nov 1 01:04:18.612791 kubelet[1862]: E1101 01:04:18.612740 1862 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:18.685748 kubelet[1862]: E1101 01:04:18.685690 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:18.713756 kubelet[1862]: I1101 01:04:18.713705 1862 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="816edf8f-32d7-4b61-94e1-337c89fae564" path="/var/lib/kubelet/pods/816edf8f-32d7-4b61-94e1-337c89fae564/volumes" Nov 1 01:04:18.734946 kubelet[1862]: E1101 01:04:18.734902 1862 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 01:04:19.004860 env[1433]: time="2025-11-01T01:04:19.004809777Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 01:04:19.012674 kubelet[1862]: I1101 01:04:19.012596 1862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-mpz48" podStartSLOduration=2.528974837 podStartE2EDuration="5.01256589s" 
podCreationTimestamp="2025-11-01 01:04:14 +0000 UTC" firstStartedPulling="2025-11-01 01:04:15.017324747 +0000 UTC m=+96.997712294" lastFinishedPulling="2025-11-01 01:04:17.5009158 +0000 UTC m=+99.481303347" observedRunningTime="2025-11-01 01:04:19.01254909 +0000 UTC m=+100.992936637" watchObservedRunningTime="2025-11-01 01:04:19.01256589 +0000 UTC m=+100.992953437" Nov 1 01:04:19.037955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518052428.mount: Deactivated successfully. Nov 1 01:04:19.048893 env[1433]: time="2025-11-01T01:04:19.048842951Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289\"" Nov 1 01:04:19.049694 env[1433]: time="2025-11-01T01:04:19.049660452Z" level=info msg="StartContainer for \"6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289\"" Nov 1 01:04:19.077978 systemd[1]: Started cri-containerd-6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289.scope. Nov 1 01:04:19.105820 env[1433]: time="2025-11-01T01:04:19.105769247Z" level=info msg="StartContainer for \"6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289\" returns successfully" Nov 1 01:04:19.107942 systemd[1]: cri-containerd-6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289.scope: Deactivated successfully. Nov 1 01:04:19.142605 env[1433]: time="2025-11-01T01:04:19.142551409Z" level=info msg="shim disconnected" id=6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289 Nov 1 01:04:19.142605 env[1433]: time="2025-11-01T01:04:19.142600009Z" level=warning msg="cleaning up after shim disconnected" id=6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289 namespace=k8s.io Nov 1 01:04:19.142605 env[1433]: time="2025-11-01T01:04:19.142611109Z" level=info msg="cleaning up dead shim" Nov 1 01:04:19.150816 env[1433]: time="2025-11-01T01:04:19.150770723Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3822 runtime=io.containerd.runc.v2\n" Nov 1 01:04:19.686102 kubelet[1862]: E1101 01:04:19.686040 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:20.005209 env[1433]: time="2025-11-01T01:04:20.005156364Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 01:04:20.029803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289-rootfs.mount: Deactivated successfully. Nov 1 01:04:20.044744 env[1433]: time="2025-11-01T01:04:20.044690430Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea\"" Nov 1 01:04:20.045298 env[1433]: time="2025-11-01T01:04:20.045266431Z" level=info msg="StartContainer for \"4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea\"" Nov 1 01:04:20.074842 systemd[1]: run-containerd-runc-k8s.io-4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea-runc.ayGLni.mount: Deactivated successfully. 
Nov 1 01:04:20.079540 systemd[1]: Started cri-containerd-4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea.scope. Nov 1 01:04:20.110010 systemd[1]: cri-containerd-4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea.scope: Deactivated successfully. Nov 1 01:04:20.112790 env[1433]: time="2025-11-01T01:04:20.112750244Z" level=info msg="StartContainer for \"4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea\" returns successfully" Nov 1 01:04:20.150433 env[1433]: time="2025-11-01T01:04:20.150374006Z" level=info msg="shim disconnected" id=4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea Nov 1 01:04:20.150433 env[1433]: time="2025-11-01T01:04:20.150430506Z" level=warning msg="cleaning up after shim disconnected" id=4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea namespace=k8s.io Nov 1 01:04:20.150718 env[1433]: time="2025-11-01T01:04:20.150442606Z" level=info msg="cleaning up dead shim" Nov 1 01:04:20.157797 env[1433]: time="2025-11-01T01:04:20.157746519Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3881 runtime=io.containerd.runc.v2\n" Nov 1 01:04:20.686960 kubelet[1862]: E1101 01:04:20.686897 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:21.009099 env[1433]: time="2025-11-01T01:04:21.009045738Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 01:04:21.029849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea-rootfs.mount: Deactivated successfully. Nov 1 01:04:21.037311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866079408.mount: Deactivated successfully. Nov 1 01:04:21.050808 env[1433]: time="2025-11-01T01:04:21.050759607Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1\"" Nov 1 01:04:21.051457 env[1433]: time="2025-11-01T01:04:21.051423608Z" level=info msg="StartContainer for \"71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1\"" Nov 1 01:04:21.072651 systemd[1]: Started cri-containerd-71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1.scope. Nov 1 01:04:21.101171 systemd[1]: cri-containerd-71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1.scope: Deactivated successfully. 
Nov 1 01:04:21.105103 env[1433]: time="2025-11-01T01:04:21.105049296Z" level=info msg="StartContainer for \"71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1\" returns successfully" Nov 1 01:04:21.136905 env[1433]: time="2025-11-01T01:04:21.136816849Z" level=info msg="shim disconnected" id=71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1 Nov 1 01:04:21.137403 env[1433]: time="2025-11-01T01:04:21.136906049Z" level=warning msg="cleaning up after shim disconnected" id=71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1 namespace=k8s.io Nov 1 01:04:21.137403 env[1433]: time="2025-11-01T01:04:21.136941949Z" level=info msg="cleaning up dead shim" Nov 1 01:04:21.142084 kubelet[1862]: I1101 01:04:21.140780 1862 setters.go:543] "Node became not ready" node="10.200.4.9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T01:04:21Z","lastTransitionTime":"2025-11-01T01:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 01:04:21.146173 env[1433]: time="2025-11-01T01:04:21.146135564Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:04:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3938 runtime=io.containerd.runc.v2\n" Nov 1 01:04:21.329141 kubelet[1862]: W1101 01:04:21.328437 1862 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d23e41e_a08f_45e1_93fa_7b6d4b6dfdda.slice/cri-containerd-cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58.scope WatchSource:0}: task cd6e81e371810182e817fbbf997a71b3ad2e90343c57c7c111067d852ed1bf58 not found Nov 1 01:04:21.687170 kubelet[1862]: E1101 01:04:21.687020 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:22.014322 env[1433]: time="2025-11-01T01:04:22.014267896Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 01:04:22.067756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781553645.mount: Deactivated successfully. Nov 1 01:04:22.078003 env[1433]: time="2025-11-01T01:04:22.077948999Z" level=info msg="CreateContainer within sandbox \"edb8e45551b9cc2e708f5c9a028253d9416729cdaca55eb410f8de912f4f0dbb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d65cde6ca5e410d9a5834778056d6263dec66160751861ce2b1dd929c8e64189\"" Nov 1 01:04:22.078599 env[1433]: time="2025-11-01T01:04:22.078566800Z" level=info msg="StartContainer for \"d65cde6ca5e410d9a5834778056d6263dec66160751861ce2b1dd929c8e64189\"" Nov 1 01:04:22.096769 systemd[1]: Started cri-containerd-d65cde6ca5e410d9a5834778056d6263dec66160751861ce2b1dd929c8e64189.scope. 
Nov 1 01:04:22.131261 env[1433]: time="2025-11-01T01:04:22.131191886Z" level=info msg="StartContainer for \"d65cde6ca5e410d9a5834778056d6263dec66160751861ce2b1dd929c8e64189\" returns successfully" Nov 1 01:04:22.474273 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 01:04:22.687725 kubelet[1862]: E1101 01:04:22.687675 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:23.032137 kubelet[1862]: I1101 01:04:23.032079 1862 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xqm2t" podStartSLOduration=6.032062855 podStartE2EDuration="6.032062855s" podCreationTimestamp="2025-11-01 01:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:04:23.031955955 +0000 UTC m=+105.012343502" watchObservedRunningTime="2025-11-01 01:04:23.032062855 +0000 UTC m=+105.012450402" Nov 1 01:04:23.688854 kubelet[1862]: E1101 01:04:23.688805 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:24.441191 kubelet[1862]: W1101 01:04:24.441134 1862 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d23e41e_a08f_45e1_93fa_7b6d4b6dfdda.slice/cri-containerd-6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289.scope WatchSource:0}: task 6f41a04fb967155a03306a3f0f67265710914fdabc09787c21d1ca2c9a63b289 not found Nov 1 01:04:24.689987 kubelet[1862]: E1101 01:04:24.689949 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:25.100234 systemd[1]: run-containerd-runc-k8s.io-d65cde6ca5e410d9a5834778056d6263dec66160751861ce2b1dd929c8e64189-runc.GMfe4z.mount: Deactivated successfully. 
Nov 1 01:04:25.243884 systemd-networkd[1595]: lxc_health: Link UP Nov 1 01:04:25.252586 systemd-networkd[1595]: lxc_health: Gained carrier Nov 1 01:04:25.253275 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 01:04:25.690643 kubelet[1862]: E1101 01:04:25.690585 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:26.691473 kubelet[1862]: E1101 01:04:26.691420 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:27.153446 systemd-networkd[1595]: lxc_health: Gained IPv6LL Nov 1 01:04:27.556714 kubelet[1862]: W1101 01:04:27.555743 1862 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d23e41e_a08f_45e1_93fa_7b6d4b6dfdda.slice/cri-containerd-4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea.scope WatchSource:0}: task 4245586a947d7d9b7c724decedb86032831df3436eb894e1530ea11ef60c70ea not found Nov 1 01:04:27.693061 kubelet[1862]: E1101 01:04:27.692972 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:28.693886 kubelet[1862]: E1101 01:04:28.693832 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:29.694333 kubelet[1862]: E1101 01:04:29.694269 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:30.664200 kubelet[1862]: W1101 01:04:30.664144 1862 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d23e41e_a08f_45e1_93fa_7b6d4b6dfdda.slice/cri-containerd-71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1.scope WatchSource:0}: task 71bcb963507d6b2d24d80ef08e919e2de6cc434006a463d7a0ce357339423ee1 not found Nov 1 01:04:30.694925 kubelet[1862]: E1101 01:04:30.694869 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:31.695653 kubelet[1862]: E1101 01:04:31.695600 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:32.696375 kubelet[1862]: E1101 01:04:32.696310 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:33.696896 kubelet[1862]: E1101 01:04:33.696833 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Nov 1 01:04:34.697320 kubelet[1862]: E1101 01:04:34.697237 1862 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"