Dec 13 01:52:11.001110 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 01:52:11.001139 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:52:11.001153 kernel: BIOS-provided physical RAM map:
Dec 13 01:52:11.001163 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:52:11.001173 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 01:52:11.001183 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 01:52:11.001197 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 01:52:11.001208 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 01:52:11.001218 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 01:52:11.001228 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 01:52:11.001239 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 01:52:11.001249 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 01:52:11.001260 kernel: NX (Execute Disable) protection: active
Dec 13 01:52:11.001270 kernel: efi: EFI v2.70 by Microsoft
Dec 13 01:52:11.001286 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Dec 13 01:52:11.001297 kernel: random: crng init done
Dec 13 01:52:11.001308 kernel: SMBIOS 3.1.0 present.
Dec 13 01:52:11.001320 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 01:52:11.001331 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 01:52:11.001343 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 01:52:11.001354 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Dec 13 01:52:11.001365 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 01:52:11.001378 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 01:52:11.001389 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 01:52:11.001401 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 01:52:11.001412 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 01:52:11.001424 kernel: tsc: Detected 2593.906 MHz processor
Dec 13 01:52:11.001435 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:52:11.001447 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:52:11.001458 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 01:52:11.001470 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:52:11.001481 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 01:52:11.001495 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 01:52:11.001506 kernel: Using GB pages for direct mapping
Dec 13 01:52:11.001517 kernel: Secure boot disabled
Dec 13 01:52:11.001529 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:52:11.001540 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 01:52:11.001551 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001563 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001574 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 01:52:11.001593 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 01:52:11.001605 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001617 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001629 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001641 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001654 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001668 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001681 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:52:11.001693 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 01:52:11.001717 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 01:52:11.001729 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 01:52:11.001742 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 01:52:11.001754 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 01:52:11.001766 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 01:52:11.001781 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 01:52:11.001793 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 01:52:11.001805 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 01:52:11.001818 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 01:52:11.001830 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:52:11.001842 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:52:11.001854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 01:52:11.001866 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 01:52:11.001879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 01:52:11.001894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 01:52:11.001906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 01:52:11.001918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 01:52:11.001931 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 01:52:11.001943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 01:52:11.001955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 01:52:11.001967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 01:52:11.001980 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 01:52:11.001992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 01:52:11.002007 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 01:52:11.002019 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 01:52:11.002031 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 01:52:11.002043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 01:52:11.002056 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 01:52:11.002068 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 01:52:11.002080 kernel: Zone ranges:
Dec 13 01:52:11.002092 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:52:11.002104 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:52:11.002119 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:52:11.002131 kernel: Movable zone start for each node
Dec 13 01:52:11.002143 kernel: Early memory node ranges
Dec 13 01:52:11.002155 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:52:11.002167 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 01:52:11.002180 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 01:52:11.002192 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:52:11.002204 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 01:52:11.002216 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:52:11.002230 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:52:11.002243 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 01:52:11.002255 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 01:52:11.002267 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 01:52:11.002279 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:52:11.002291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:52:11.002303 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:52:11.002316 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 01:52:11.002328 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:52:11.002342 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 01:52:11.002354 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 01:52:11.002367 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:52:11.002379 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:52:11.002391 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 01:52:11.002404 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 01:52:11.002416 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:52:11.002427 kernel: Hyper-V: PV spinlocks enabled
Dec 13 01:52:11.002440 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:52:11.002454 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 01:52:11.002466 kernel: Policy zone: Normal
Dec 13 01:52:11.002480 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:52:11.002493 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:52:11.002505 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:52:11.002517 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:52:11.002529 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:52:11.002542 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 308056K reserved, 0K cma-reserved)
Dec 13 01:52:11.002556 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:52:11.002569 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 01:52:11.002590 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 01:52:11.002605 kernel: rcu: Hierarchical RCU implementation.
Dec 13 01:52:11.002619 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:52:11.002632 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:52:11.002645 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:52:11.002657 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:52:11.002670 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:52:11.002683 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:52:11.002696 kernel: Using NULL legacy PIC
Dec 13 01:52:11.002718 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 01:52:11.002731 kernel: Console: colour dummy device 80x25
Dec 13 01:52:11.002744 kernel: printk: console [tty1] enabled
Dec 13 01:52:11.002757 kernel: printk: console [ttyS0] enabled
Dec 13 01:52:11.002770 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 01:52:11.002785 kernel: ACPI: Core revision 20210730
Dec 13 01:52:11.002798 kernel: Failed to register legacy timer interrupt
Dec 13 01:52:11.002811 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:52:11.002824 kernel: Hyper-V: Using IPI hypercalls
Dec 13 01:52:11.002837 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Dec 13 01:52:11.002849 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:52:11.002862 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:52:11.002876 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:52:11.002888 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:52:11.002901 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:52:11.002916 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:52:11.002929 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:52:11.002942 kernel: RETBleed: Vulnerable
Dec 13 01:52:11.002955 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:52:11.002967 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:52:11.002980 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:52:11.002993 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:52:11.003005 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:52:11.003027 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:52:11.003041 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:52:11.003056 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:52:11.003069 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:52:11.003082 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:52:11.003094 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:52:11.003107 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 01:52:11.003121 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 01:52:11.003132 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 01:52:11.003145 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 01:52:11.003158 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:52:11.003171 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:52:11.003184 kernel: LSM: Security Framework initializing
Dec 13 01:52:11.003197 kernel: SELinux: Initializing.
Dec 13 01:52:11.003213 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:52:11.003226 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:52:11.003240 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 01:52:11.003254 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:52:11.003267 kernel: signal: max sigframe size: 3632
Dec 13 01:52:11.003281 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:52:11.003295 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:52:11.003308 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:52:11.003322 kernel: x86: Booting SMP configuration:
Dec 13 01:52:11.003335 kernel: .... node #0, CPUs: #1
Dec 13 01:52:11.003351 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 01:52:11.003366 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:52:11.003379 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:52:11.003392 kernel: smpboot: Max logical packages: 1
Dec 13 01:52:11.003406 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Dec 13 01:52:11.003420 kernel: devtmpfs: initialized
Dec 13 01:52:11.003433 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:52:11.003446 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 01:52:11.003462 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:52:11.003476 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:52:11.003489 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:52:11.003502 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:52:11.003516 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:52:11.003530 kernel: audit: type=2000 audit(1734054730.023:1): state=initialized audit_enabled=0 res=1
Dec 13 01:52:11.003543 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:52:11.003556 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:52:11.003570 kernel: cpuidle: using governor menu
Dec 13 01:52:11.003585 kernel: ACPI: bus type PCI registered
Dec 13 01:52:11.003599 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:52:11.003612 kernel: dca service started, version 1.12.1
Dec 13 01:52:11.003626 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:52:11.003640 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:52:11.003653 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:52:11.003666 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:52:11.003680 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:52:11.003693 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:52:11.003716 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:52:11.003727 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 01:52:11.003738 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 01:52:11.003751 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 01:52:11.003762 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:52:11.003774 kernel: ACPI: Interpreter enabled
Dec 13 01:52:11.003787 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:52:11.003800 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:52:11.003813 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:52:11.003830 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 01:52:11.003843 kernel: iommu: Default domain type: Translated
Dec 13 01:52:11.003857 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:52:11.003870 kernel: vgaarb: loaded
Dec 13 01:52:11.003883 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:52:11.003897 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:52:11.003911 kernel: PTP clock support registered
Dec 13 01:52:11.003924 kernel: Registered efivars operations
Dec 13 01:52:11.003938 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:52:11.003951 kernel: PCI: System does not support PCI
Dec 13 01:52:11.003967 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 01:52:11.003980 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:52:11.003994 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:52:11.004007 kernel: pnp: PnP ACPI init
Dec 13 01:52:11.004021 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 01:52:11.004034 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:52:11.004047 kernel: NET: Registered PF_INET protocol family
Dec 13 01:52:11.004061 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:52:11.004077 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:52:11.004091 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:52:11.004105 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:52:11.004118 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 01:52:11.004132 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:52:11.004146 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:52:11.004159 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:52:11.004173 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:52:11.004186 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:52:11.004202 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:52:11.004215 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:52:11.004229 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 01:52:11.004242 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:52:11.004256 kernel: Initialise system trusted keyrings
Dec 13 01:52:11.004269 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 01:52:11.004282 kernel: Key type asymmetric registered
Dec 13 01:52:11.004295 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:52:11.004308 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 01:52:11.004324 kernel: io scheduler mq-deadline registered
Dec 13 01:52:11.004337 kernel: io scheduler kyber registered
Dec 13 01:52:11.004350 kernel: io scheduler bfq registered
Dec 13 01:52:11.004364 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:52:11.004377 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:52:11.004391 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:52:11.004404 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:52:11.004418 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 01:52:11.004567 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 01:52:11.004681 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T01:52:10 UTC (1734054730)
Dec 13 01:52:11.004798 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 01:52:11.004815 kernel: fail to initialize ptp_kvm
Dec 13 01:52:11.004829 kernel: intel_pstate: CPU model not supported
Dec 13 01:52:11.004843 kernel: efifb: probing for efifb
Dec 13 01:52:11.004857 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 01:52:11.004871 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 01:52:11.004885 kernel: efifb: scrolling: redraw
Dec 13 01:52:11.004901 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:52:11.004915 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:52:11.004929 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:52:11.004943 kernel: pstore: Registered efi as persistent store backend
Dec 13 01:52:11.004957 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:52:11.004971 kernel: Segment Routing with IPv6
Dec 13 01:52:11.004984 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:52:11.004998 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:52:11.005011 kernel: Key type dns_resolver registered
Dec 13 01:52:11.005026 kernel: IPI shorthand broadcast: enabled
Dec 13 01:52:11.005040 kernel: sched_clock: Marking stable (694904100, 19597700)->(902781000, -188279200)
Dec 13 01:52:11.005054 kernel: registered taskstats version 1
Dec 13 01:52:11.005068 kernel: Loading compiled-in X.509 certificates
Dec 13 01:52:11.005081 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 01:52:11.005094 kernel: Key type .fscrypt registered
Dec 13 01:52:11.005108 kernel: Key type fscrypt-provisioning registered
Dec 13 01:52:11.005121 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:52:11.005137 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:52:11.005151 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:52:11.005164 kernel: ima: No architecture policies found
Dec 13 01:52:11.005178 kernel: clk: Disabling unused clocks
Dec 13 01:52:11.005191 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 01:52:11.005205 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 01:52:11.005218 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 01:52:11.005230 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 01:52:11.005244 kernel: Run /init as init process
Dec 13 01:52:11.005257 kernel: with arguments:
Dec 13 01:52:11.005273 kernel: /init
Dec 13 01:52:11.005286 kernel: with environment:
Dec 13 01:52:11.005299 kernel: HOME=/
Dec 13 01:52:11.005313 kernel: TERM=linux
Dec 13 01:52:11.005326 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:52:11.005342 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:52:11.005358 systemd[1]: Detected virtualization microsoft.
Dec 13 01:52:11.005375 systemd[1]: Detected architecture x86-64.
Dec 13 01:52:11.005389 systemd[1]: Running in initrd.
Dec 13 01:52:11.005403 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:52:11.005416 systemd[1]: Hostname set to .
Dec 13 01:52:11.005432 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:52:11.005446 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:52:11.005460 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:52:11.005474 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:52:11.005488 systemd[1]: Reached target paths.target.
Dec 13 01:52:11.005505 systemd[1]: Reached target slices.target.
Dec 13 01:52:11.005519 systemd[1]: Reached target swap.target.
Dec 13 01:52:11.005533 systemd[1]: Reached target timers.target.
Dec 13 01:52:11.005548 systemd[1]: Listening on iscsid.socket.
Dec 13 01:52:11.005562 systemd[1]: Listening on iscsiuio.socket.
Dec 13 01:52:11.005577 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 01:52:11.005591 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 01:52:11.005608 systemd[1]: Listening on systemd-journald.socket.
Dec 13 01:52:11.005623 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:52:11.005637 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:52:11.005651 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:52:11.005665 systemd[1]: Reached target sockets.target.
Dec 13 01:52:11.005680 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:52:11.005694 systemd[1]: Finished network-cleanup.service.
Dec 13 01:52:11.005722 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:52:11.005733 systemd[1]: Starting systemd-journald.service...
Dec 13 01:52:11.005748 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:52:11.005760 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:52:11.005771 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 01:52:11.005784 systemd-journald[183]: Journal started
Dec 13 01:52:11.005834 systemd-journald[183]: Runtime Journal (/run/log/journal/82f7a28cc28e40d38f8ac42d4d639db8) is 8.0M, max 159.0M, 151.0M free.
Dec 13 01:52:11.008837 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 01:52:11.020746 systemd[1]: Started systemd-journald.service.
Dec 13 01:52:11.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:11.041180 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:52:11.043280 kernel: audit: type=1130 audit(1734054731.030:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:11.046741 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:52:11.050632 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 01:52:11.073725 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:52:11.073761 kernel: audit: type=1130 audit(1734054731.045:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:11.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:11.073809 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 01:52:11.080985 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:52:11.084202 kernel: Bridge firewalling registered
Dec 13 01:52:11.083239 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 01:52:11.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:11.108718 kernel: audit: type=1130 audit(1734054731.050:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:11.108751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:52:11.114649 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 01:52:11.119280 systemd[1]: Starting dracut-cmdline.service...
Dec 13 01:52:11.129591 systemd-resolved[185]: Positive Trust Anchors:
Dec 13 01:52:11.129609 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:52:11.129653 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:52:11.164600 kernel: audit: type=1130 audit(1734054731.062:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:11.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:11.164675 dracut-cmdline[200]: dracut-dracut-053
Dec 13 01:52:11.164675 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:52:11.148453 systemd-resolved[185]: Defaulting to hostname 'linux'.
Dec 13 01:52:11.149490 systemd[1]: Started systemd-resolved.service. Dec 13 01:52:11.204647 kernel: audit: type=1130 audit(1734054731.112:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.204672 kernel: audit: type=1130 audit(1734054731.118:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.204686 kernel: audit: type=1130 audit(1734054731.151:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.151468 systemd[1]: Reached target nss-lookup.target. Dec 13 01:52:11.218757 kernel: SCSI subsystem initialized Dec 13 01:52:11.244322 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 01:52:11.244388 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:52:11.245671 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 01:52:11.253350 systemd-modules-load[184]: Inserted module 'dm_multipath' Dec 13 01:52:11.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.256589 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:52:11.274892 kernel: audit: type=1130 audit(1734054731.258:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.259652 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:52:11.277645 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:52:11.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.292716 kernel: audit: type=1130 audit(1734054731.281:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.292740 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:52:11.314726 kernel: iscsi: registered transport (tcp) Dec 13 01:52:11.340599 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:52:11.340645 kernel: QLogic iSCSI HBA Driver Dec 13 01:52:11.369165 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:52:11.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:11.374015 systemd[1]: Starting dracut-pre-udev.service... Dec 13 01:52:11.422725 kernel: raid6: avx512x4 gen() 18688 MB/s Dec 13 01:52:11.444720 kernel: raid6: avx512x4 xor() 8392 MB/s Dec 13 01:52:11.463724 kernel: raid6: avx512x2 gen() 18789 MB/s Dec 13 01:52:11.483721 kernel: raid6: avx512x2 xor() 30006 MB/s Dec 13 01:52:11.502724 kernel: raid6: avx512x1 gen() 18663 MB/s Dec 13 01:52:11.522715 kernel: raid6: avx512x1 xor() 27010 MB/s Dec 13 01:52:11.542718 kernel: raid6: avx2x4 gen() 18708 MB/s Dec 13 01:52:11.562717 kernel: raid6: avx2x4 xor() 7878 MB/s Dec 13 01:52:11.581714 kernel: raid6: avx2x2 gen() 18670 MB/s Dec 13 01:52:11.601719 kernel: raid6: avx2x2 xor() 22322 MB/s Dec 13 01:52:11.620714 kernel: raid6: avx2x1 gen() 14232 MB/s Dec 13 01:52:11.640715 kernel: raid6: avx2x1 xor() 19518 MB/s Dec 13 01:52:11.660716 kernel: raid6: sse2x4 gen() 11771 MB/s Dec 13 01:52:11.680717 kernel: raid6: sse2x4 xor() 7200 MB/s Dec 13 01:52:11.700715 kernel: raid6: sse2x2 gen() 13020 MB/s Dec 13 01:52:11.720716 kernel: raid6: sse2x2 xor() 7521 MB/s Dec 13 01:52:11.740715 kernel: raid6: sse2x1 gen() 11692 MB/s Dec 13 01:52:11.763305 kernel: raid6: sse2x1 xor() 5935 MB/s Dec 13 01:52:11.763332 kernel: raid6: using algorithm avx512x2 gen() 18789 MB/s Dec 13 01:52:11.763345 kernel: raid6: .... xor() 30006 MB/s, rmw enabled Dec 13 01:52:11.766482 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:52:11.785724 kernel: xor: automatically using best checksumming function avx Dec 13 01:52:11.879736 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:52:11.887883 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:52:11.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:11.891000 audit: BPF prog-id=7 op=LOAD Dec 13 01:52:11.891000 audit: BPF prog-id=8 op=LOAD Dec 13 01:52:11.891988 systemd[1]: Starting systemd-udevd.service... Dec 13 01:52:11.905898 systemd-udevd[383]: Using default interface naming scheme 'v252'. Dec 13 01:52:11.912289 systemd[1]: Started systemd-udevd.service. Dec 13 01:52:11.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.920666 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:52:11.932722 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation Dec 13 01:52:11.957378 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:52:11.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:11.960074 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:52:11.994923 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:52:11.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:12.040814 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:52:12.065992 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 01:52:12.073721 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:52:12.096128 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 01:52:12.096189 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:52:12.111723 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:52:12.115720 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 01:52:12.119721 kernel: AES CTR mode by8 optimization enabled Dec 13 01:52:12.130810 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 01:52:12.130847 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:52:12.141335 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:52:12.141371 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:52:12.146069 kernel: scsi host1: storvsc_host_t Dec 13 01:52:12.149568 kernel: scsi host0: storvsc_host_t Dec 13 01:52:12.155738 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:52:12.160766 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:52:12.184371 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:52:12.204478 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:52:12.204498 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:52:12.212935 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:52:12.213105 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:52:12.213261 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:52:12.213411 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:52:12.213563 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:52:12.213731 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:52:12.213751 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:52:12.267434 kernel: hv_netvsc 6045bde1-952a-6045-bde1-952a6045bde1 eth0: VF slot 1 added Dec 13 01:52:12.276721 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:52:12.282720 kernel: hv_pci 65973a2a-70fc-42a2-bb2f-f869515c9d2f: PCI VMBus probing: Using version 0x10004 Dec 13 01:52:12.352893 kernel: hv_pci 65973a2a-70fc-42a2-bb2f-f869515c9d2f: PCI host bridge to bus 70fc:00 Dec 13 01:52:12.353011 kernel: pci_bus 70fc:00: root bus resource [mem 
0xfe0000000-0xfe00fffff window] Dec 13 01:52:12.353122 kernel: pci_bus 70fc:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:52:12.353212 kernel: pci 70fc:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 01:52:12.353320 kernel: pci 70fc:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 01:52:12.353423 kernel: pci 70fc:00:02.0: enabling Extended Tags Dec 13 01:52:12.353518 kernel: pci 70fc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 70fc:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 01:52:12.353612 kernel: pci_bus 70fc:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:52:12.353714 kernel: pci 70fc:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 01:52:12.445723 kernel: mlx5_core 70fc:00:02.0: firmware version: 14.30.5000 Dec 13 01:52:12.698304 kernel: mlx5_core 70fc:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 01:52:12.698486 kernel: mlx5_core 70fc:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 01:52:12.698644 kernel: mlx5_core 70fc:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Dec 13 01:52:12.698799 kernel: hv_netvsc 6045bde1-952a-6045-bde1-952a6045bde1 eth0: VF registering: eth1 Dec 13 01:52:12.698895 kernel: mlx5_core 70fc:00:02.0 eth1: joined to eth0 Dec 13 01:52:12.672823 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:52:12.707722 kernel: mlx5_core 70fc:00:02.0 enP28924s1: renamed from eth1 Dec 13 01:52:12.725726 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (420) Dec 13 01:52:12.738253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:52:12.875098 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:52:12.888753 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Dec 13 01:52:12.891209 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:52:12.897465 systemd[1]: Starting disk-uuid.service... Dec 13 01:52:12.913722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:52:12.919718 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:52:13.930279 disk-uuid[547]: The operation has completed successfully. Dec 13 01:52:13.932546 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:52:13.999645 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:52:14.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:14.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:13.999754 systemd[1]: Finished disk-uuid.service. Dec 13 01:52:14.014457 systemd[1]: Starting verity-setup.service... Dec 13 01:52:14.053727 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:52:14.350902 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:52:14.354673 systemd[1]: Finished verity-setup.service. Dec 13 01:52:14.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:14.358748 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:52:14.430739 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 01:52:14.430360 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:52:14.433575 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:52:14.434383 systemd[1]: Starting ignition-setup.service... 
Dec 13 01:52:14.445506 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:52:14.459416 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:52:14.459445 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:52:14.459461 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:52:14.517003 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:52:14.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:14.521000 audit: BPF prog-id=9 op=LOAD Dec 13 01:52:14.522922 systemd[1]: Starting systemd-networkd.service... Dec 13 01:52:14.544524 systemd-networkd[814]: lo: Link UP Dec 13 01:52:14.544535 systemd-networkd[814]: lo: Gained carrier Dec 13 01:52:14.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:14.545417 systemd-networkd[814]: Enumeration completed Dec 13 01:52:14.545497 systemd[1]: Started systemd-networkd.service. Dec 13 01:52:14.549124 systemd[1]: Reached target network.target. Dec 13 01:52:14.554283 systemd-networkd[814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:52:14.555033 systemd[1]: Starting iscsiuio.service... Dec 13 01:52:14.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:14.568935 systemd[1]: Started iscsiuio.service. Dec 13 01:52:14.573402 systemd[1]: Starting iscsid.service... Dec 13 01:52:14.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:52:14.582414 iscsid[823]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:52:14.582414 iscsid[823]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 01:52:14.582414 iscsid[823]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 01:52:14.582414 iscsid[823]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:52:14.582414 iscsid[823]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 01:52:14.582414 iscsid[823]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:52:14.582414 iscsid[823]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:52:14.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:14.578489 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:52:14.579826 systemd[1]: Started iscsid.service. Dec 13 01:52:14.585315 systemd[1]: Starting dracut-initqueue.service... Dec 13 01:52:14.599098 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:52:14.602504 systemd[1]: Reached target remote-fs-pre.target. Dec 13 01:52:14.633655 kernel: mlx5_core 70fc:00:02.0 enP28924s1: Link up Dec 13 01:52:14.606451 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:52:14.608326 systemd[1]: Reached target remote-fs.target. 
Dec 13 01:52:14.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:14.615263 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:52:14.638285 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:52:14.667539 kernel: hv_netvsc 6045bde1-952a-6045-bde1-952a6045bde1 eth0: Data path switched to VF: enP28924s1 Dec 13 01:52:14.667789 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:52:14.668137 systemd-networkd[814]: enP28924s1: Link UP Dec 13 01:52:14.668379 systemd-networkd[814]: eth0: Link UP Dec 13 01:52:14.668849 systemd-networkd[814]: eth0: Gained carrier Dec 13 01:52:14.675146 systemd-networkd[814]: enP28924s1: Gained carrier Dec 13 01:52:14.727879 systemd-networkd[814]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 01:52:14.823732 systemd[1]: Finished ignition-setup.service. Dec 13 01:52:14.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:14.828470 systemd[1]: Starting ignition-fetch-offline.service... 
Dec 13 01:52:16.260943 systemd-networkd[814]: eth0: Gained IPv6LL Dec 13 01:52:18.741963 ignition[838]: Ignition 2.14.0 Dec 13 01:52:18.741979 ignition[838]: Stage: fetch-offline Dec 13 01:52:18.742078 ignition[838]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:52:18.742146 ignition[838]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:52:18.863539 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:52:18.863754 ignition[838]: parsed url from cmdline: "" Dec 13 01:52:18.863758 ignition[838]: no config URL provided Dec 13 01:52:18.863765 ignition[838]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:52:18.866609 ignition[838]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:52:18.866619 ignition[838]: failed to fetch config: resource requires networking Dec 13 01:52:18.892091 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 01:52:18.892113 kernel: audit: type=1130 audit(1734054738.874:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:18.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:18.871124 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:52:18.867895 ignition[838]: Ignition finished successfully Dec 13 01:52:18.875988 systemd[1]: Starting ignition-fetch.service... 
Dec 13 01:52:18.884282 ignition[844]: Ignition 2.14.0 Dec 13 01:52:18.884289 ignition[844]: Stage: fetch Dec 13 01:52:18.884395 ignition[844]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:52:18.884422 ignition[844]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:52:18.888219 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:52:18.894267 ignition[844]: parsed url from cmdline: "" Dec 13 01:52:18.894277 ignition[844]: no config URL provided Dec 13 01:52:18.894287 ignition[844]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:52:18.894302 ignition[844]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:52:18.894346 ignition[844]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:52:18.995522 ignition[844]: GET result: OK Dec 13 01:52:18.995608 ignition[844]: config has been read from IMDS userdata Dec 13 01:52:18.995630 ignition[844]: parsing config with SHA512: 9cfac43373bcc29848f80202edf189346dc6228f7d36846e6831249069038fec12ea363696eeaea47a918c2ba5cc4210d5cc006dab9df96b9de58b6778a6d092 Dec 13 01:52:19.002027 unknown[844]: fetched base config from "system" Dec 13 01:52:19.003742 unknown[844]: fetched base config from "system" Dec 13 01:52:19.004145 ignition[844]: fetch: fetch complete Dec 13 01:52:19.003749 unknown[844]: fetched user config from "azure" Dec 13 01:52:19.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:19.004150 ignition[844]: fetch: fetch passed Dec 13 01:52:19.026435 kernel: audit: type=1130 audit(1734054739.008:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:19.006972 systemd[1]: Finished ignition-fetch.service. Dec 13 01:52:19.004198 ignition[844]: Ignition finished successfully Dec 13 01:52:19.010482 systemd[1]: Starting ignition-kargs.service... Dec 13 01:52:19.035767 ignition[850]: Ignition 2.14.0 Dec 13 01:52:19.035777 ignition[850]: Stage: kargs Dec 13 01:52:19.035912 ignition[850]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:52:19.035943 ignition[850]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:52:19.044552 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:52:19.048795 ignition[850]: kargs: kargs passed Dec 13 01:52:19.048848 ignition[850]: Ignition finished successfully Dec 13 01:52:19.054092 systemd[1]: Finished ignition-kargs.service. Dec 13 01:52:19.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:19.066017 systemd[1]: Starting ignition-disks.service... Dec 13 01:52:19.071942 kernel: audit: type=1130 audit(1734054739.055:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:19.074518 ignition[856]: Ignition 2.14.0 Dec 13 01:52:19.074528 ignition[856]: Stage: disks Dec 13 01:52:19.074654 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:52:19.074678 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:52:19.077168 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:52:19.082498 ignition[856]: disks: disks passed Dec 13 01:52:19.082548 ignition[856]: Ignition finished successfully Dec 13 01:52:19.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:19.083909 systemd[1]: Finished ignition-disks.service. Dec 13 01:52:19.103534 kernel: audit: type=1130 audit(1734054739.086:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:19.086739 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:52:19.100216 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:52:19.103507 systemd[1]: Reached target local-fs.target. Dec 13 01:52:19.105217 systemd[1]: Reached target sysinit.target. Dec 13 01:52:19.106872 systemd[1]: Reached target basic.target. Dec 13 01:52:19.111342 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:52:19.175236 systemd-fsck[864]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 01:52:19.181041 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:52:19.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:19.188992 systemd[1]: Mounting sysroot.mount... Dec 13 01:52:19.199905 kernel: audit: type=1130 audit(1734054739.187:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:19.233723 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:52:19.233990 systemd[1]: Mounted sysroot.mount. Dec 13 01:52:19.235235 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:52:19.275923 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:52:19.279180 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 01:52:19.281230 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:52:19.281269 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:52:19.285632 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:52:19.332820 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:52:19.341595 systemd[1]: Starting initrd-setup-root.service... Dec 13 01:52:19.354397 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (875) Dec 13 01:52:19.354443 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:52:19.354455 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:52:19.354465 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:52:19.366936 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 01:52:19.371480 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:52:19.391100 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:52:19.417943 initrd-setup-root[914]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:52:19.424729 initrd-setup-root[922]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:52:19.916160 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:52:19.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:19.919260 systemd[1]: Starting ignition-mount.service... Dec 13 01:52:19.938838 kernel: audit: type=1130 audit(1734054739.917:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:19.934798 systemd[1]: Starting sysroot-boot.service... Dec 13 01:52:19.944355 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 01:52:19.946588 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 01:52:19.967836 ignition[941]: INFO : Ignition 2.14.0 Dec 13 01:52:19.970230 ignition[941]: INFO : Stage: mount Dec 13 01:52:19.972093 ignition[941]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:52:19.972093 ignition[941]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:52:19.993060 kernel: audit: type=1130 audit(1734054739.975:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:19.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:19.974336 systemd[1]: Finished sysroot-boot.service. Dec 13 01:52:20.005996 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:52:20.008889 ignition[941]: INFO : mount: mount passed Dec 13 01:52:20.008889 ignition[941]: INFO : Ignition finished successfully Dec 13 01:52:20.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:20.010808 systemd[1]: Finished ignition-mount.service. Dec 13 01:52:20.024695 kernel: audit: type=1130 audit(1734054740.011:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:20.999799 coreos-metadata[874]: Dec 13 01:52:20.999 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:52:21.019017 coreos-metadata[874]: Dec 13 01:52:21.018 INFO Fetch successful Dec 13 01:52:21.053692 coreos-metadata[874]: Dec 13 01:52:21.053 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:52:21.070125 coreos-metadata[874]: Dec 13 01:52:21.070 INFO Fetch successful Dec 13 01:52:21.085495 coreos-metadata[874]: Dec 13 01:52:21.085 INFO wrote hostname ci-3510.3.6-a-36ffbd9cb7 to /sysroot/etc/hostname Dec 13 01:52:21.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:21.087295 systemd[1]: Finished flatcar-metadata-hostname.service. 
Dec 13 01:52:21.105849 kernel: audit: type=1130 audit(1734054741.090:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:21.092514 systemd[1]: Starting ignition-files.service...
Dec 13 01:52:21.108990 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 01:52:21.121726 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (953)
Dec 13 01:52:21.130054 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:52:21.130083 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:52:21.130096 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 01:52:21.137804 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 01:52:21.150920 ignition[972]: INFO : Ignition 2.14.0
Dec 13 01:52:21.150920 ignition[972]: INFO : Stage: files
Dec 13 01:52:21.155896 ignition[972]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:52:21.155896 ignition[972]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:52:21.168509 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:52:21.187799 ignition[972]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:52:21.190777 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:52:21.190777 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:52:21.249612 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:52:21.253838 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:52:21.266595 unknown[972]: wrote ssh authorized keys file for user: core
Dec 13 01:52:21.269077 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:52:21.293613 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:52:21.298043 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:52:21.298043 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:52:21.298043 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:52:21.298043 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:52:21.298043 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:52:21.298043 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 01:52:21.298043 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 01:52:21.334305 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (977)
Dec 13 01:52:21.316368 systemd[1]: mnt-oem3394474204.mount: Deactivated successfully.
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3394474204"
Dec 13 01:52:21.336480 ignition[972]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3394474204": device or resource busy
Dec 13 01:52:21.336480 ignition[972]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3394474204", trying btrfs: device or resource busy
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3394474204"
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3394474204"
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem3394474204"
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem3394474204"
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem407371288"
Dec 13 01:52:21.336480 ignition[972]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem407371288": device or resource busy
Dec 13 01:52:21.336480 ignition[972]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem407371288", trying btrfs: device or resource busy
Dec 13 01:52:21.336480 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem407371288"
Dec 13 01:52:21.334294 systemd[1]: mnt-oem407371288.mount: Deactivated successfully.
Dec 13 01:52:21.402634 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem407371288"
Dec 13 01:52:21.402634 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem407371288"
Dec 13 01:52:21.402634 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem407371288"
Dec 13 01:52:21.402634 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 01:52:21.402634 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:52:21.402634 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:52:21.965194 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK
Dec 13 01:52:22.368925 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:52:22.368925 ignition[972]: INFO : files: op(f): [started] processing unit "waagent.service"
Dec 13 01:52:22.368925 ignition[972]: INFO : files: op(f): [finished] processing unit "waagent.service"
Dec 13 01:52:22.368925 ignition[972]: INFO : files: op(10): [started] processing unit "nvidia.service"
Dec 13 01:52:22.368925 ignition[972]: INFO : files: op(10): [finished] processing unit "nvidia.service"
Dec 13 01:52:22.368925 ignition[972]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service"
Dec 13 01:52:22.406994 kernel: audit: type=1130 audit(1734054742.378:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.407126 ignition[972]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service"
Dec 13 01:52:22.407126 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service"
Dec 13 01:52:22.407126 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service"
Dec 13 01:52:22.407126 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:52:22.407126 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:52:22.407126 ignition[972]: INFO : files: files passed
Dec 13 01:52:22.407126 ignition[972]: INFO : Ignition finished successfully
Dec 13 01:52:22.374897 systemd[1]: Finished ignition-files.service.
Dec 13 01:52:22.380667 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 01:52:22.395390 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 01:52:22.396213 systemd[1]: Starting ignition-quench.service...
Dec 13 01:52:22.402404 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:52:22.402494 systemd[1]: Finished ignition-quench.service.
Dec 13 01:52:22.442698 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:52:22.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.443261 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 01:52:22.449899 systemd[1]: Reached target ignition-complete.target.
Dec 13 01:52:22.454571 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 01:52:22.468351 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:52:22.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.468461 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 01:52:22.472340 systemd[1]: Reached target initrd-fs.target.
Dec 13 01:52:22.475744 systemd[1]: Reached target initrd.target.
Dec 13 01:52:22.477733 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 01:52:22.478473 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 01:52:22.491411 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 01:52:22.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.493856 systemd[1]: Starting initrd-cleanup.service...
Dec 13 01:52:22.505256 systemd[1]: Stopped target nss-lookup.target.
Dec 13 01:52:22.508760 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 01:52:22.512728 systemd[1]: Stopped target timers.target.
Dec 13 01:52:22.515986 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:52:22.518138 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 01:52:22.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.521958 systemd[1]: Stopped target initrd.target.
Dec 13 01:52:22.525168 systemd[1]: Stopped target basic.target.
Dec 13 01:52:22.528804 systemd[1]: Stopped target ignition-complete.target.
Dec 13 01:52:22.532538 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 01:52:22.536272 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 01:52:22.540578 systemd[1]: Stopped target remote-fs.target.
Dec 13 01:52:22.544113 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 01:52:22.547656 systemd[1]: Stopped target sysinit.target.
Dec 13 01:52:22.552551 systemd[1]: Stopped target local-fs.target.
Dec 13 01:52:22.555800 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 01:52:22.559397 systemd[1]: Stopped target swap.target.
Dec 13 01:52:22.562504 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:52:22.562633 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 01:52:22.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.570415 systemd[1]: Stopped target cryptsetup.target.
Dec 13 01:52:22.573950 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:52:22.576183 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 01:52:22.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.579647 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:52:22.582235 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 01:52:22.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.586427 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:52:22.588464 systemd[1]: Stopped ignition-files.service.
Dec 13 01:52:22.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.592015 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:52:22.594418 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 01:52:22.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.599188 systemd[1]: Stopping ignition-mount.service...
Dec 13 01:52:22.602221 systemd[1]: Stopping iscsiuio.service...
Dec 13 01:52:22.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.617723 ignition[1010]: INFO : Ignition 2.14.0
Dec 13 01:52:22.617723 ignition[1010]: INFO : Stage: umount
Dec 13 01:52:22.617723 ignition[1010]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:52:22.617723 ignition[1010]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:52:22.603779 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:52:22.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.636248 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:52:22.636248 ignition[1010]: INFO : umount: umount passed
Dec 13 01:52:22.636248 ignition[1010]: INFO : Ignition finished successfully
Dec 13 01:52:22.603938 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 01:52:22.607240 systemd[1]: Stopping sysroot-boot.service...
Dec 13 01:52:22.609059 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:52:22.610826 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 01:52:22.614432 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:52:22.614579 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 01:52:22.638573 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 01:52:22.638664 systemd[1]: Stopped iscsiuio.service.
Dec 13 01:52:22.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.660131 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:52:22.660248 systemd[1]: Stopped ignition-mount.service.
Dec 13 01:52:22.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.667856 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:52:22.667955 systemd[1]: Stopped ignition-disks.service.
Dec 13 01:52:22.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.673962 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:52:22.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.674053 systemd[1]: Stopped ignition-kargs.service.
Dec 13 01:52:22.675684 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:52:22.675739 systemd[1]: Stopped ignition-fetch.service.
Dec 13 01:52:22.678298 systemd[1]: Stopped target network.target.
Dec 13 01:52:22.687792 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:52:22.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.687849 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 01:52:22.692333 systemd[1]: Stopped target paths.target.
Dec 13 01:52:22.697138 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:52:22.700749 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 01:52:22.702959 systemd[1]: Stopped target slices.target.
Dec 13 01:52:22.706250 systemd[1]: Stopped target sockets.target.
Dec 13 01:52:22.707864 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:52:22.707894 systemd[1]: Closed iscsid.socket.
Dec 13 01:52:22.709366 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:52:22.709410 systemd[1]: Closed iscsiuio.socket.
Dec 13 01:52:22.712460 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:52:22.712512 systemd[1]: Stopped ignition-setup.service.
Dec 13 01:52:22.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.725457 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:52:22.726986 systemd[1]: Stopping systemd-resolved.service...
Dec 13 01:52:22.731257 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:52:22.732013 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:52:22.732098 systemd[1]: Finished initrd-cleanup.service.
Dec 13 01:52:22.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.740657 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:52:22.740759 systemd-networkd[814]: eth0: DHCPv6 lease lost
Dec 13 01:52:22.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.740778 systemd[1]: Stopped systemd-resolved.service.
Dec 13 01:52:22.749667 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:52:22.751711 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:52:22.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.760594 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:52:22.762801 systemd[1]: Stopped sysroot-boot.service.
Dec 13 01:52:22.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.766000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 01:52:22.766000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 01:52:22.766335 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:52:22.766378 systemd[1]: Closed systemd-networkd.socket.
Dec 13 01:52:22.771522 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:52:22.771575 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 01:52:22.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.777831 systemd[1]: Stopping network-cleanup.service...
Dec 13 01:52:22.781418 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:52:22.781482 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 01:52:22.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.787265 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:52:22.787316 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 01:52:22.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.792610 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:52:22.792661 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 01:52:22.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.798527 systemd[1]: Stopping systemd-udevd.service...
Dec 13 01:52:22.802953 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 01:52:22.806088 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:52:22.808095 systemd[1]: Stopped systemd-udevd.service.
Dec 13 01:52:22.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.812422 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:52:22.812496 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 01:52:22.814639 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:52:22.816779 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 01:52:22.824127 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:52:22.824181 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 01:52:22.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.829754 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:52:22.829807 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 01:52:22.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.834900 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:52:22.834948 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 01:52:22.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.841369 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 01:52:22.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.843483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:52:22.843543 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 01:52:22.847757 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:52:22.863964 kernel: hv_netvsc 6045bde1-952a-6045-bde1-952a6045bde1 eth0: Data path switched from VF: enP28924s1
Dec 13 01:52:22.847838 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 01:52:22.881804 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:52:22.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:22.881891 systemd[1]: Stopped network-cleanup.service.
Dec 13 01:52:22.886843 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 01:52:22.893068 systemd[1]: Starting initrd-switch-root.service...
Dec 13 01:52:22.904147 systemd[1]: Switching root.
Dec 13 01:52:22.933177 iscsid[823]: iscsid shutting down.
Dec 13 01:52:22.934664 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
Dec 13 01:52:22.934757 systemd-journald[183]: Journal stopped
Dec 13 01:52:39.633424 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 01:52:39.633471 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 01:52:39.633491 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 01:52:39.633507 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:52:39.633521 kernel: SELinux: policy capability open_perms=1
Dec 13 01:52:39.633536 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:52:39.633554 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:52:39.633573 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:52:39.633588 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:52:39.633602 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:52:39.633618 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:52:39.633634 kernel: kauditd_printk_skb: 41 callbacks suppressed
Dec 13 01:52:39.633649 kernel: audit: type=1403 audit(1734054745.849:80): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:52:39.633668 systemd[1]: Successfully loaded SELinux policy in 310.182ms.
Dec 13 01:52:39.633692 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 41.408ms.
Dec 13 01:52:39.633720 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:52:39.633744 systemd[1]: Detected virtualization microsoft.
Dec 13 01:52:39.633761 systemd[1]: Detected architecture x86-64.
Dec 13 01:52:39.633780 systemd[1]: Detected first boot.
Dec 13 01:52:39.633801 systemd[1]: Hostname set to .
Dec 13 01:52:39.633819 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:52:39.633836 kernel: audit: type=1400 audit(1734054746.672:81): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:52:39.633855 kernel: audit: type=1400 audit(1734054746.687:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:52:39.633873 kernel: audit: type=1400 audit(1734054746.687:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:52:39.633888 kernel: audit: type=1334 audit(1734054746.698:84): prog-id=10 op=LOAD
Dec 13 01:52:39.633908 kernel: audit: type=1334 audit(1734054746.698:85): prog-id=10 op=UNLOAD
Dec 13 01:52:39.633924 kernel: audit: type=1334 audit(1734054746.708:86): prog-id=11 op=LOAD
Dec 13 01:52:39.633939 kernel: audit: type=1334 audit(1734054746.708:87): prog-id=11 op=UNLOAD
Dec 13 01:52:39.633955 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 01:52:39.633972 kernel: audit: type=1400 audit(1734054748.735:88): avc: denied { associate } for pid=1043 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:52:39.633990 kernel: audit: type=1300 audit(1734054748.735:88): arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1026 pid=1043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:52:39.634007 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:52:39.634028 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:52:39.634046 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:52:39.634066 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:52:39.634083 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 01:52:39.634100 kernel: audit: type=1334 audit(1734054759.059:90): prog-id=12 op=LOAD
Dec 13 01:52:39.634115 kernel: audit: type=1334 audit(1734054759.059:91): prog-id=3 op=UNLOAD
Dec 13 01:52:39.634131 kernel: audit: type=1334 audit(1734054759.063:92): prog-id=13 op=LOAD
Dec 13 01:52:39.634151 kernel: audit: type=1334 audit(1734054759.067:93): prog-id=14 op=LOAD
Dec 13 01:52:39.634170 kernel: audit: type=1334 audit(1734054759.067:94): prog-id=4 op=UNLOAD
Dec 13 01:52:39.634190 kernel: audit: type=1334 audit(1734054759.067:95): prog-id=5 op=UNLOAD
Dec 13 01:52:39.634207 kernel: audit: type=1334 audit(1734054759.072:96): prog-id=15 op=LOAD
Dec 13 01:52:39.634224 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 01:52:39.634242 kernel: audit: type=1334 audit(1734054759.072:97): prog-id=12 op=UNLOAD
Dec 13 01:52:39.634259 kernel: audit: type=1334 audit(1734054759.076:98): prog-id=16 op=LOAD
Dec 13 01:52:39.634276 systemd[1]: Stopped iscsid.service.
Dec 13 01:52:39.634295 kernel: audit: type=1334 audit(1734054759.080:99): prog-id=17 op=LOAD
Dec 13 01:52:39.634315 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:52:39.634334 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 01:52:39.634352 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:52:39.634370 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 01:52:39.634388 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 01:52:39.634407 systemd[1]: Created slice system-getty.slice.
Dec 13 01:52:39.634425 systemd[1]: Created slice system-modprobe.slice.
Dec 13 01:52:39.634442 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 01:52:39.634463 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 01:52:39.634482 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 01:52:39.634501 systemd[1]: Created slice user.slice.
Dec 13 01:52:39.634519 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:52:39.634537 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 01:52:39.634555 systemd[1]: Set up automount boot.automount.
Dec 13 01:52:39.634574 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 01:52:39.634592 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 01:52:39.634610 systemd[1]: Stopped target initrd-fs.target.
Dec 13 01:52:39.634631 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 01:52:39.634650 systemd[1]: Reached target integritysetup.target.
Dec 13 01:52:39.634668 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 01:52:39.634686 systemd[1]: Reached target remote-fs.target.
Dec 13 01:52:39.634722 systemd[1]: Reached target slices.target.
Dec 13 01:52:39.634743 systemd[1]: Reached target swap.target.
Dec 13 01:52:39.634760 systemd[1]: Reached target torcx.target.
Dec 13 01:52:39.634779 systemd[1]: Reached target veritysetup.target.
Dec 13 01:52:39.634800 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 01:52:39.634819 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 01:52:39.634837 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:52:39.634856 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:52:39.634878 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:52:39.634898 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 01:52:39.634917 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 01:52:39.634936 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 01:52:39.634954 systemd[1]: Mounting media.mount...
Dec 13 01:52:39.634974 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:52:39.634992 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 01:52:39.635013 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 01:52:39.635032 systemd[1]: Mounting tmp.mount...
Dec 13 01:52:39.635054 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 01:52:39.635073 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:52:39.635091 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:52:39.635109 systemd[1]: Starting modprobe@configfs.service...
Dec 13 01:52:39.635128 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:52:39.635147 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:52:39.635166 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:52:39.635183 systemd[1]: Starting modprobe@fuse.service...
Dec 13 01:52:39.635202 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:52:39.635224 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:52:39.635242 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:52:39.635261 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 01:52:39.635280 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:52:39.635300 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:52:39.635317 systemd[1]: Stopped systemd-journald.service.
Dec 13 01:52:39.635336 systemd[1]: Starting systemd-journald.service...
Dec 13 01:52:39.635355 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:52:39.635373 systemd[1]: Starting systemd-network-generator.service...
Dec 13 01:52:39.635394 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 01:52:39.635414 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 01:52:39.635433 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:52:39.635452 systemd[1]: Stopped verity-setup.service.
Dec 13 01:52:39.635471 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:52:39.635490 kernel: fuse: init (API version 7.34)
Dec 13 01:52:39.635507 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 01:52:39.635525 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 01:52:39.635544 systemd[1]: Mounted media.mount.
Dec 13 01:52:39.635566 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 01:52:39.635585 kernel: loop: module loaded
Dec 13 01:52:39.635602 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 01:52:39.635621 systemd[1]: Mounted tmp.mount.
Dec 13 01:52:39.635641 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 01:52:39.635668 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:52:39.635691 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:52:39.635726 systemd[1]: Finished modprobe@configfs.service.
Dec 13 01:52:39.635746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:52:39.635766 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:52:39.635785 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:52:39.635808 systemd[1]: Finished modprobe@drm.service.
Dec 13 01:52:39.635826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:52:39.635845 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:52:39.635864 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:52:39.635884 systemd[1]: Finished modprobe@fuse.service.
Dec 13 01:52:39.635902 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:52:39.635925 systemd-journald[1140]: Journal started
Dec 13 01:52:39.635998 systemd-journald[1140]: Runtime Journal (/run/log/journal/8c8d0e190c3249ffa8ad4f98cb1b3cc3) is 8.0M, max 159.0M, 151.0M free.
Dec 13 01:52:25.849000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:52:26.672000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:52:26.687000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:52:26.687000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:52:26.698000 audit: BPF prog-id=10 op=LOAD
Dec 13 01:52:26.698000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 01:52:26.708000 audit: BPF prog-id=11 op=LOAD
Dec 13 01:52:26.708000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 01:52:28.735000 audit[1043]: AVC avc: denied { associate } for pid=1043 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:52:28.735000 audit[1043]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1026 pid=1043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:52:28.735000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:52:28.742000 audit[1043]: AVC avc: denied { associate } for pid=1043 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 01:52:28.742000 audit[1043]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1026 pid=1043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:52:28.742000 audit: CWD cwd="/"
Dec 13 01:52:28.742000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:28.742000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:28.742000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:52:39.059000 audit: BPF prog-id=12 op=LOAD
Dec 13 01:52:39.059000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 01:52:39.063000 audit: BPF prog-id=13 op=LOAD
Dec 13 01:52:39.067000 audit: BPF prog-id=14 op=LOAD
Dec 13 01:52:39.067000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 01:52:39.067000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 01:52:39.072000 audit: BPF prog-id=15 op=LOAD
Dec 13 01:52:39.072000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 01:52:39.076000 audit: BPF prog-id=16 op=LOAD
Dec 13 01:52:39.080000 audit: BPF prog-id=17 op=LOAD
Dec 13 01:52:39.080000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 01:52:39.080000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 01:52:39.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.116000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 01:52:39.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.468000 audit: BPF prog-id=18 op=LOAD
Dec 13 01:52:39.469000 audit: BPF prog-id=19 op=LOAD
Dec 13 01:52:39.469000 audit: BPF prog-id=20 op=LOAD
Dec 13 01:52:39.469000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 01:52:39.469000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 01:52:39.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.625000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 01:52:39.625000 audit[1140]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc98d36b80 a2=4000 a3=7ffc98d36c1c items=0 ppid=1 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:52:39.625000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 01:52:39.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.058355 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:52:28.678255 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:52:39.082362 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:52:28.678619 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 01:52:28.678640 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 01:52:28.678679 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 01:52:28.678690 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 01:52:28.678763 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 01:52:28.678778 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 01:52:28.679014 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 01:52:28.679059 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 01:52:28.679073 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 01:52:28.716640 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 01:52:28.716746 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 01:52:28.716782 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 01:52:28.716800 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 01:52:28.716829 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 01:52:28.716852 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:28Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 01:52:37.909038 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:37Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:52:37.909292 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:37Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:52:37.909398 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:37Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:52:37.909569 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:37Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:52:37.909615 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:37Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 01:52:37.909667 /usr/lib/systemd/system-generators/torcx-generator[1043]: time="2024-12-13T01:52:37Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 01:52:39.646721 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:52:39.646759 systemd[1]: Started systemd-journald.service.
Dec 13 01:52:39.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.649287 systemd[1]: Finished systemd-modules-load.service.
Dec 13 01:52:39.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.651812 systemd[1]: Finished systemd-network-generator.service.
Dec 13 01:52:39.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.654351 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 01:52:39.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.657186 systemd[1]: Reached target network-pre.target.
Dec 13 01:52:39.660904 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 01:52:39.668122 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 01:52:39.673258 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:52:39.689487 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 01:52:39.692525 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 01:52:39.694563 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:52:39.695629 systemd[1]: Starting systemd-random-seed.service...
Dec 13 01:52:39.697396 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:52:39.698502 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:52:39.701441 systemd[1]: Starting systemd-sysusers.service...
Dec 13 01:52:39.707563 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 01:52:39.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.711124 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 01:52:39.714877 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 01:52:39.718308 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 01:52:39.729884 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:52:39.743335 systemd[1]: Finished systemd-random-seed.service.
Dec 13 01:52:39.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.745449 systemd[1]: Reached target first-boot-complete.target.
Dec 13 01:52:39.758343 systemd-journald[1140]: Time spent on flushing to /var/log/journal/8c8d0e190c3249ffa8ad4f98cb1b3cc3 is 24.008ms for 1145 entries.
Dec 13 01:52:39.758343 systemd-journald[1140]: System Journal (/var/log/journal/8c8d0e190c3249ffa8ad4f98cb1b3cc3) is 8.0M, max 2.6G, 2.6G free.
Dec 13 01:52:39.823941 systemd-journald[1140]: Received client request to flush runtime journal.
Dec 13 01:52:39.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:39.769789 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:52:39.824980 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 01:52:39.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:40.391876 systemd[1]: Finished systemd-sysusers.service.
Dec 13 01:52:40.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:41.044158 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 01:52:41.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:41.046000 audit: BPF prog-id=21 op=LOAD
Dec 13 01:52:41.046000 audit: BPF prog-id=22 op=LOAD
Dec 13 01:52:41.046000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 01:52:41.046000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 01:52:41.048371 systemd[1]: Starting systemd-udevd.service...
Dec 13 01:52:41.066282 systemd-udevd[1170]: Using default interface naming scheme 'v252'.
Dec 13 01:52:41.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:41.333000 audit: BPF prog-id=23 op=LOAD
Dec 13 01:52:41.330693 systemd[1]: Started systemd-udevd.service.
Dec 13 01:52:41.335441 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:52:41.370856 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 01:52:41.439734 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:52:41.446765 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 01:52:41.454261 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:52:41.454332 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:52:41.427000 audit[1172]: AVC avc: denied { confidentiality } for pid=1172 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:52:41.458731 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 01:52:41.458782 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:52:41.467531 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:52:41.467592 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:52:41.857763 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 01:52:41.858356 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 01:52:41.865894 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 01:52:41.872209 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:52:41.877362 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:52:41.427000 audit[1172]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e136c1dcf0 a1=f884 a2=7f8ef6bc0bc5 a3=5 items=12 ppid=1170 pid=1172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:52:41.427000 audit: CWD cwd="/"
Dec 13 01:52:41.427000 audit: PATH item=0 name=(null) inode=237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=1 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=2 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=3 name=(null) inode=15596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=4 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=5 name=(null) inode=15597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=6 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=7 name=(null) inode=15598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=8 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=9 name=(null) inode=15599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=10 name=(null) inode=15595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PATH item=11 name=(null) inode=15600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:52:41.427000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 01:52:41.896000 audit: BPF prog-id=24 op=LOAD
Dec 13 01:52:41.896000 audit: BPF prog-id=25 op=LOAD
Dec 13 01:52:41.896000 audit: BPF prog-id=26 op=LOAD
Dec 13 01:52:41.899199 systemd[1]: Starting systemd-userdbd.service...
Dec 13 01:52:41.967553 systemd[1]: Started systemd-userdbd.service.
Dec 13 01:52:41.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:52:42.119419 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Dec 13 01:52:42.133370 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1184)
Dec 13 01:52:42.170003 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 01:52:42.243723 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 01:52:42.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:42.247305 systemd[1]: Starting lvm2-activation-early.service... Dec 13 01:52:42.335172 systemd-networkd[1181]: lo: Link UP Dec 13 01:52:42.335183 systemd-networkd[1181]: lo: Gained carrier Dec 13 01:52:42.335901 systemd-networkd[1181]: Enumeration completed Dec 13 01:52:42.336036 systemd[1]: Started systemd-networkd.service. Dec 13 01:52:42.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:42.340041 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:52:42.388707 systemd-networkd[1181]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:52:42.443373 kernel: mlx5_core 70fc:00:02.0 enP28924s1: Link up Dec 13 01:52:42.464365 kernel: hv_netvsc 6045bde1-952a-6045-bde1-952a6045bde1 eth0: Data path switched to VF: enP28924s1 Dec 13 01:52:42.465732 systemd-networkd[1181]: enP28924s1: Link UP Dec 13 01:52:42.466036 systemd-networkd[1181]: eth0: Link UP Dec 13 01:52:42.466113 systemd-networkd[1181]: eth0: Gained carrier Dec 13 01:52:42.472594 systemd-networkd[1181]: enP28924s1: Gained carrier Dec 13 01:52:42.501504 systemd-networkd[1181]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 01:52:42.604845 lvm[1248]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:52:42.637455 systemd[1]: Finished lvm2-activation-early.service. 
Dec 13 01:52:42.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:42.640332 systemd[1]: Reached target cryptsetup.target. Dec 13 01:52:42.643784 systemd[1]: Starting lvm2-activation.service... Dec 13 01:52:42.649994 lvm[1250]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:52:42.676397 systemd[1]: Finished lvm2-activation.service. Dec 13 01:52:42.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:42.679109 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:52:42.680940 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:52:42.680972 systemd[1]: Reached target local-fs.target. Dec 13 01:52:42.682633 systemd[1]: Reached target machines.target. Dec 13 01:52:42.685539 systemd[1]: Starting ldconfig.service... Dec 13 01:52:42.687678 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:52:42.687777 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:52:42.688907 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:52:42.691849 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:52:42.695368 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 01:52:42.698698 systemd[1]: Starting systemd-sysext.service... 
Dec 13 01:52:42.720378 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1252 (bootctl) Dec 13 01:52:42.722152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:52:42.761797 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:52:42.762416 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 01:52:42.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:42.778259 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:52:42.784331 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:52:42.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:42.851554 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:52:42.851815 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 01:52:42.893378 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:52:42.938369 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:52:42.956367 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:52:42.965506 (sd-sysext)[1264]: Using extensions 'kubernetes'. Dec 13 01:52:42.965940 (sd-sysext)[1264]: Merged extensions into '/usr'. Dec 13 01:52:42.981489 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:52:42.983028 systemd[1]: Mounting usr-share-oem.mount... Dec 13 01:52:42.985279 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:52:42.988455 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 01:52:42.990487 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:52:42.992656 systemd[1]: Starting modprobe@loop.service... Dec 13 01:52:42.993976 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:52:42.994123 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:52:42.994290 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:52:42.998445 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:52:43.000786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:52:43.000938 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:52:43.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.003399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:52:43.003556 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:52:43.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:43.006151 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:52:43.006292 systemd[1]: Finished modprobe@loop.service. Dec 13 01:52:43.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.009533 systemd[1]: Finished systemd-sysext.service. Dec 13 01:52:43.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.013818 systemd[1]: Starting ensure-sysext.service... Dec 13 01:52:43.015903 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:52:43.015971 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:52:43.017254 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:52:43.024033 systemd[1]: Reloading. 
Dec 13 01:52:43.088059 /usr/lib/systemd/system-generators/torcx-generator[1290]: time="2024-12-13T01:52:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:52:43.090420 /usr/lib/systemd/system-generators/torcx-generator[1290]: time="2024-12-13T01:52:43Z" level=info msg="torcx already run" Dec 13 01:52:43.188433 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 01:52:43.189276 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:52:43.189296 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:52:43.205949 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 01:52:43.270000 audit: BPF prog-id=27 op=LOAD Dec 13 01:52:43.270000 audit: BPF prog-id=28 op=LOAD Dec 13 01:52:43.270000 audit: BPF prog-id=21 op=UNLOAD Dec 13 01:52:43.270000 audit: BPF prog-id=22 op=UNLOAD Dec 13 01:52:43.271000 audit: BPF prog-id=29 op=LOAD Dec 13 01:52:43.271000 audit: BPF prog-id=18 op=UNLOAD Dec 13 01:52:43.271000 audit: BPF prog-id=30 op=LOAD Dec 13 01:52:43.271000 audit: BPF prog-id=31 op=LOAD Dec 13 01:52:43.271000 audit: BPF prog-id=19 op=UNLOAD Dec 13 01:52:43.271000 audit: BPF prog-id=20 op=UNLOAD Dec 13 01:52:43.272000 audit: BPF prog-id=32 op=LOAD Dec 13 01:52:43.272000 audit: BPF prog-id=23 op=UNLOAD Dec 13 01:52:43.274000 audit: BPF prog-id=33 op=LOAD Dec 13 01:52:43.274000 audit: BPF prog-id=24 op=UNLOAD Dec 13 01:52:43.274000 audit: BPF prog-id=34 op=LOAD Dec 13 01:52:43.274000 audit: BPF prog-id=35 op=LOAD Dec 13 01:52:43.274000 audit: BPF prog-id=25 op=UNLOAD Dec 13 01:52:43.274000 audit: BPF prog-id=26 op=UNLOAD Dec 13 01:52:43.289195 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:52:43.289520 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:52:43.290933 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:52:43.294008 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:52:43.297274 systemd[1]: Starting modprobe@loop.service... Dec 13 01:52:43.299215 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:52:43.299453 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:52:43.299603 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 01:52:43.300653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:52:43.300812 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:52:43.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.303228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:52:43.303385 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:52:43.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.305808 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:52:43.305946 systemd[1]: Finished modprobe@loop.service. Dec 13 01:52:43.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:43.309721 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:52:43.310020 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:52:43.311329 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:52:43.314794 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:52:43.317881 systemd[1]: Starting modprobe@loop.service... Dec 13 01:52:43.320009 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:52:43.320196 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:52:43.320336 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:52:43.321432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:52:43.321577 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:52:43.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.324035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:52:43.324173 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:52:43.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:52:43.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.326616 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:52:43.326751 systemd[1]: Finished modprobe@loop.service. Dec 13 01:52:43.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.331305 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:52:43.331639 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:52:43.332913 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:52:43.336111 systemd[1]: Starting modprobe@drm.service... Dec 13 01:52:43.339038 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:52:43.342065 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:52:43.342332 systemd[1]: Starting modprobe@loop.service... Dec 13 01:52:43.346399 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:52:43.346589 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 01:52:43.346787 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:52:43.347932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:52:43.348082 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:52:43.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.350481 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:52:43.350626 systemd[1]: Finished modprobe@drm.service. Dec 13 01:52:43.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.353020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:52:43.353161 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:52:43.354252 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:52:43.354372 systemd[1]: Finished modprobe@loop.service. Dec 13 01:52:43.355099 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:52:43.355195 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 01:52:43.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.357362 systemd[1]: Finished ensure-sysext.service. Dec 13 01:52:43.448840 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:52:43.463658 systemd-fsck[1260]: fsck.fat 4.2 (2021-01-31) Dec 13 01:52:43.463658 systemd-fsck[1260]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 01:52:43.465717 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:52:43.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:43.470744 systemd[1]: Mounting boot.mount... 
Dec 13 01:52:43.483244 systemd[1]: Mounted boot.mount. Dec 13 01:52:43.496861 systemd[1]: Finished systemd-boot-update.service. Dec 13 01:52:43.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.231539 systemd-networkd[1181]: eth0: Gained IPv6LL Dec 13 01:52:44.237306 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:52:44.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.246610 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:52:44.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.250752 systemd[1]: Starting audit-rules.service... Dec 13 01:52:44.254062 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:52:44.257452 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:52:44.259000 audit: BPF prog-id=36 op=LOAD Dec 13 01:52:44.261722 systemd[1]: Starting systemd-resolved.service... Dec 13 01:52:44.264000 audit: BPF prog-id=37 op=LOAD Dec 13 01:52:44.267321 systemd[1]: Starting systemd-timesyncd.service... Dec 13 01:52:44.271264 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:52:44.302000 audit[1373]: SYSTEM_BOOT pid=1373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.308221 systemd[1]: Finished systemd-update-utmp.service. 
Dec 13 01:52:44.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.337935 systemd[1]: Finished clean-ca-certificates.service. Dec 13 01:52:44.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.340559 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:52:44.369756 systemd[1]: Started systemd-timesyncd.service. Dec 13 01:52:44.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.372374 systemd[1]: Reached target time-set.target. Dec 13 01:52:44.398416 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:52:44.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.449850 systemd-resolved[1371]: Positive Trust Anchors: Dec 13 01:52:44.449867 systemd-resolved[1371]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:52:44.449905 systemd-resolved[1371]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:52:44.552430 systemd-resolved[1371]: Using system hostname 'ci-3510.3.6-a-36ffbd9cb7'. Dec 13 01:52:44.554681 systemd[1]: Started systemd-resolved.service. Dec 13 01:52:44.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.556845 systemd[1]: Reached target network.target. Dec 13 01:52:44.559209 kernel: kauditd_printk_skb: 133 callbacks suppressed Dec 13 01:52:44.559269 kernel: audit: type=1130 audit(1734054764.555:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:52:44.570941 systemd[1]: Reached target network-online.target. Dec 13 01:52:44.572715 systemd[1]: Reached target nss-lookup.target. Dec 13 01:52:44.617291 systemd-timesyncd[1372]: Contacted time server 193.1.12.167:123 (0.flatcar.pool.ntp.org). Dec 13 01:52:44.617398 systemd-timesyncd[1372]: Initial clock synchronization to Fri 2024-12-13 01:52:44.617280 UTC. 
Dec 13 01:52:44.647000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:52:44.649631 augenrules[1389]: No rules Dec 13 01:52:44.650745 systemd[1]: Finished audit-rules.service. Dec 13 01:52:44.657365 kernel: audit: type=1305 audit(1734054764.647:217): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:52:44.657440 kernel: audit: type=1300 audit(1734054764.647:217): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc8b7cd60 a2=420 a3=0 items=0 ppid=1368 pid=1389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:52:44.657465 kernel: audit: type=1327 audit(1734054764.647:217): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:52:44.647000 audit[1389]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc8b7cd60 a2=420 a3=0 items=0 ppid=1368 pid=1389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:52:44.647000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:52:50.860068 ldconfig[1251]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:52:50.871641 systemd[1]: Finished ldconfig.service. Dec 13 01:52:50.877220 systemd[1]: Starting systemd-update-done.service... Dec 13 01:52:50.903221 systemd[1]: Finished systemd-update-done.service. Dec 13 01:52:50.906125 systemd[1]: Reached target sysinit.target. Dec 13 01:52:50.908393 systemd[1]: Started motdgen.path. 
Dec 13 01:52:50.910382 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:52:50.913628 systemd[1]: Started logrotate.timer. Dec 13 01:52:50.915405 systemd[1]: Started mdadm.timer. Dec 13 01:52:50.916799 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 01:52:50.918596 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:52:50.918622 systemd[1]: Reached target paths.target. Dec 13 01:52:50.920176 systemd[1]: Reached target timers.target. Dec 13 01:52:50.922281 systemd[1]: Listening on dbus.socket. Dec 13 01:52:50.924844 systemd[1]: Starting docker.socket... Dec 13 01:52:50.929203 systemd[1]: Listening on sshd.socket. Dec 13 01:52:50.931290 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:52:50.931754 systemd[1]: Listening on docker.socket. Dec 13 01:52:50.933606 systemd[1]: Reached target sockets.target. Dec 13 01:52:50.935595 systemd[1]: Reached target basic.target. Dec 13 01:52:50.937412 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:52:50.937447 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:52:50.938448 systemd[1]: Starting containerd.service... Dec 13 01:52:50.941005 systemd[1]: Starting dbus.service... Dec 13 01:52:50.943483 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:52:50.946492 systemd[1]: Starting extend-filesystems.service... Dec 13 01:52:50.948430 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:52:50.964036 systemd[1]: Starting kubelet.service... 
Dec 13 01:52:50.966923 systemd[1]: Starting motdgen.service... Dec 13 01:52:50.969874 systemd[1]: Started nvidia.service. Dec 13 01:52:50.973448 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:52:50.976541 systemd[1]: Starting sshd-keygen.service... Dec 13 01:52:50.981169 systemd[1]: Starting systemd-logind.service... Dec 13 01:52:50.983727 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:52:50.983823 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:52:50.984357 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:52:50.985172 systemd[1]: Starting update-engine.service... Dec 13 01:52:50.991326 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:52:50.998425 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:52:50.998680 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:52:51.009254 jq[1399]: false Dec 13 01:52:51.012795 jq[1414]: true Dec 13 01:52:51.017977 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:52:51.018267 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:52:51.032535 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:52:51.032727 systemd[1]: Finished motdgen.service. 
Dec 13 01:52:51.045370 jq[1419]: true Dec 13 01:52:51.056322 extend-filesystems[1400]: Found loop1 Dec 13 01:52:51.056322 extend-filesystems[1400]: Found sda Dec 13 01:52:51.056322 extend-filesystems[1400]: Found sda1 Dec 13 01:52:51.056322 extend-filesystems[1400]: Found sda2 Dec 13 01:52:51.056322 extend-filesystems[1400]: Found sda3 Dec 13 01:52:51.056322 extend-filesystems[1400]: Found usr Dec 13 01:52:51.056322 extend-filesystems[1400]: Found sda4 Dec 13 01:52:51.056322 extend-filesystems[1400]: Found sda6 Dec 13 01:52:51.056322 extend-filesystems[1400]: Found sda7 Dec 13 01:52:51.056322 extend-filesystems[1400]: Found sda9 Dec 13 01:52:51.056322 extend-filesystems[1400]: Checking size of /dev/sda9 Dec 13 01:52:51.096050 env[1422]: time="2024-12-13T01:52:51.095784707Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:52:51.127668 env[1422]: time="2024-12-13T01:52:51.127580732Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:52:51.127912 env[1422]: time="2024-12-13T01:52:51.127891833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:52:51.129949 env[1422]: time="2024-12-13T01:52:51.129920441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:52:51.130031 env[1422]: time="2024-12-13T01:52:51.130020241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:52:51.130330 env[1422]: time="2024-12-13T01:52:51.130303342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:52:51.130460 env[1422]: time="2024-12-13T01:52:51.130442943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:52:51.130540 env[1422]: time="2024-12-13T01:52:51.130524243Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:52:51.130610 env[1422]: time="2024-12-13T01:52:51.130597144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:52:51.130757 env[1422]: time="2024-12-13T01:52:51.130741244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:52:51.131081 env[1422]: time="2024-12-13T01:52:51.131060045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:52:51.131371 env[1422]: time="2024-12-13T01:52:51.131326646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:52:51.131471 env[1422]: time="2024-12-13T01:52:51.131454847Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:52:51.131605 env[1422]: time="2024-12-13T01:52:51.131585047Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:52:51.131693 env[1422]: time="2024-12-13T01:52:51.131676548Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:52:51.148956 extend-filesystems[1400]: Old size kept for /dev/sda9 Dec 13 01:52:51.148956 extend-filesystems[1400]: Found sr0 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155383641Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155419941Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155435241Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155475341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155490441Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155504241Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155516941Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155534141Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155558041Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155575241Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155589441Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:52:51.156097 env[1422]: time="2024-12-13T01:52:51.155603341Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:52:51.153241 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:52:51.156581 env[1422]: time="2024-12-13T01:52:51.156492145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:52:51.153494 systemd[1]: Finished extend-filesystems.service. Dec 13 01:52:51.156689 env[1422]: time="2024-12-13T01:52:51.156611445Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157017347Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157077847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157099847Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157171848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157194248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157213148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157244548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157263348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157281748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157309348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157327448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.157511 env[1422]: time="2024-12-13T01:52:51.157368248Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:52:51.158331 env[1422]: time="2024-12-13T01:52:51.158100351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.158331 env[1422]: time="2024-12-13T01:52:51.158129751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.158331 env[1422]: time="2024-12-13T01:52:51.158151751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 01:52:51.158331 env[1422]: time="2024-12-13T01:52:51.158180652Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:52:51.158331 env[1422]: time="2024-12-13T01:52:51.158203152Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:52:51.158331 env[1422]: time="2024-12-13T01:52:51.158219752Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:52:51.158331 env[1422]: time="2024-12-13T01:52:51.158258952Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:52:51.158331 env[1422]: time="2024-12-13T01:52:51.158303152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:52:51.159161 env[1422]: time="2024-12-13T01:52:51.159082655Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.159296456Z" level=info msg="Connect containerd service" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.165819581Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.171718105Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.172055406Z" level=info msg="Start subscribing containerd event" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.172112806Z" level=info msg="Start recovering state" Dec 13 01:52:51.199851 env[1422]: 
time="2024-12-13T01:52:51.172182806Z" level=info msg="Start event monitor" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.172207606Z" level=info msg="Start snapshots syncer" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.172220407Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.172229907Z" level=info msg="Start streaming server" Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.175537820Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.175592020Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:52:51.199851 env[1422]: time="2024-12-13T01:52:51.177126726Z" level=info msg="containerd successfully booted in 0.084270s" Dec 13 01:52:51.173759 systemd-logind[1410]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:52:51.173965 systemd-logind[1410]: New seat seat0. Dec 13 01:52:51.175708 systemd[1]: Started containerd.service. Dec 13 01:52:51.305896 dbus-daemon[1398]: [system] SELinux support is enabled Dec 13 01:52:51.306076 systemd[1]: Started dbus.service. Dec 13 01:52:51.310499 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:52:51.310528 systemd[1]: Reached target system-config.target. Dec 13 01:52:51.312848 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:52:51.312867 systemd[1]: Reached target user-config.target. Dec 13 01:52:51.328056 systemd[1]: Started systemd-logind.service. 
Dec 13 01:52:51.328299 dbus-daemon[1398]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:52:51.340208 bash[1449]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:52:51.342704 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:52:51.407785 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 01:52:52.055126 update_engine[1412]: I1213 01:52:52.054737 1412 main.cc:92] Flatcar Update Engine starting Dec 13 01:52:52.125534 systemd[1]: Started update-engine.service. Dec 13 01:52:52.128078 update_engine[1412]: I1213 01:52:52.127022 1412 update_check_scheduler.cc:74] Next update check in 10m33s Dec 13 01:52:52.131199 systemd[1]: Started locksmithd.service. Dec 13 01:52:52.338606 systemd[1]: Started kubelet.service. Dec 13 01:52:53.261382 kubelet[1504]: E1213 01:52:53.261289 1504 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:52:53.263520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:52:53.263695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:52:53.263990 systemd[1]: kubelet.service: Consumed 1.176s CPU time. Dec 13 01:52:53.579884 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:52:53.730592 sshd_keygen[1418]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:52:53.750587 systemd[1]: Finished sshd-keygen.service. Dec 13 01:52:53.754120 systemd[1]: Starting issuegen.service... Dec 13 01:52:53.757635 systemd[1]: Started waagent.service. Dec 13 01:52:53.764014 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:52:53.764150 systemd[1]: Finished issuegen.service. 
Dec 13 01:52:53.767719 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:52:53.775234 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:52:53.778925 systemd[1]: Started getty@tty1.service. Dec 13 01:52:53.782222 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:52:53.784271 systemd[1]: Reached target getty.target. Dec 13 01:52:53.785939 systemd[1]: Reached target multi-user.target. Dec 13 01:52:53.789210 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 01:52:53.799473 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:52:53.799646 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 01:52:53.802059 systemd[1]: Startup finished in 718ms (firmware) + 29.701s (loader) + 856ms (kernel) + 14.539s (initrd) + 28.166s (userspace) = 1min 13.982s. Dec 13 01:52:54.226762 login[1527]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:52:54.228158 login[1528]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 01:52:54.251365 systemd[1]: Created slice user-500.slice. Dec 13 01:52:54.252833 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 01:52:54.257557 systemd-logind[1410]: New session 2 of user core. Dec 13 01:52:54.263382 systemd-logind[1410]: New session 1 of user core. Dec 13 01:52:54.267244 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 01:52:54.268927 systemd[1]: Starting user@500.service... Dec 13 01:52:54.299638 (systemd)[1531]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:52:54.486422 systemd[1531]: Queued start job for default target default.target. Dec 13 01:52:54.487003 systemd[1531]: Reached target paths.target. Dec 13 01:52:54.487030 systemd[1531]: Reached target sockets.target. Dec 13 01:52:54.487047 systemd[1531]: Reached target timers.target. Dec 13 01:52:54.487062 systemd[1531]: Reached target basic.target. 
Dec 13 01:52:54.487111 systemd[1531]: Reached target default.target. Dec 13 01:52:54.487147 systemd[1531]: Startup finished in 180ms. Dec 13 01:52:54.487505 systemd[1]: Started user@500.service. Dec 13 01:52:54.488839 systemd[1]: Started session-1.scope. Dec 13 01:52:54.489666 systemd[1]: Started session-2.scope. Dec 13 01:53:00.360473 waagent[1522]: 2024-12-13T01:53:00.360323Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 01:53:00.365053 waagent[1522]: 2024-12-13T01:53:00.364965Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 01:53:00.367670 waagent[1522]: 2024-12-13T01:53:00.367602Z INFO Daemon Daemon Python: 3.9.16 Dec 13 01:53:00.369951 waagent[1522]: 2024-12-13T01:53:00.369876Z INFO Daemon Daemon Run daemon Dec 13 01:53:00.372108 waagent[1522]: 2024-12-13T01:53:00.372045Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 01:53:00.384353 waagent[1522]: 2024-12-13T01:53:00.384231Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Dec 13 01:53:00.391766 waagent[1522]: 2024-12-13T01:53:00.391663Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:53:00.396245 waagent[1522]: 2024-12-13T01:53:00.396182Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:53:00.398732 waagent[1522]: 2024-12-13T01:53:00.398671Z INFO Daemon Daemon Using waagent for provisioning Dec 13 01:53:00.401628 waagent[1522]: 2024-12-13T01:53:00.401570Z INFO Daemon Daemon Activate resource disk Dec 13 01:53:00.404064 waagent[1522]: 2024-12-13T01:53:00.404002Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 01:53:00.413897 waagent[1522]: 2024-12-13T01:53:00.413831Z INFO Daemon Daemon Found device: None Dec 13 01:53:00.417108 waagent[1522]: 2024-12-13T01:53:00.417046Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 01:53:00.420952 waagent[1522]: 2024-12-13T01:53:00.420894Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 01:53:00.426656 waagent[1522]: 2024-12-13T01:53:00.426596Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:53:00.429745 waagent[1522]: 2024-12-13T01:53:00.429686Z INFO Daemon Daemon Running default provisioning handler Dec 13 01:53:00.439864 waagent[1522]: 2024-12-13T01:53:00.439742Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Dec 13 01:53:00.446109 waagent[1522]: 2024-12-13T01:53:00.446009Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:53:00.453771 waagent[1522]: 2024-12-13T01:53:00.446397Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:53:00.453771 waagent[1522]: 2024-12-13T01:53:00.447088Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 01:53:00.545265 waagent[1522]: 2024-12-13T01:53:00.542618Z INFO Daemon Daemon Successfully mounted dvd Dec 13 01:53:00.598759 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 01:53:00.619980 waagent[1522]: 2024-12-13T01:53:00.619815Z INFO Daemon Daemon Detect protocol endpoint Dec 13 01:53:00.622961 waagent[1522]: 2024-12-13T01:53:00.622894Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:53:00.625824 waagent[1522]: 2024-12-13T01:53:00.625764Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 13 01:53:00.629428 waagent[1522]: 2024-12-13T01:53:00.629371Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 01:53:00.632132 waagent[1522]: 2024-12-13T01:53:00.632072Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 01:53:00.634508 waagent[1522]: 2024-12-13T01:53:00.634449Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 01:53:00.775334 waagent[1522]: 2024-12-13T01:53:00.775255Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 01:53:00.779112 waagent[1522]: 2024-12-13T01:53:00.779060Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 01:53:00.781575 waagent[1522]: 2024-12-13T01:53:00.781508Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 01:53:01.014481 waagent[1522]: 2024-12-13T01:53:01.014257Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 01:53:01.023479 waagent[1522]: 2024-12-13T01:53:01.023408Z INFO Daemon Daemon Forcing an update of the goal state.. 
Dec 13 01:53:01.028318 waagent[1522]: 2024-12-13T01:53:01.023807Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 01:53:01.121735 waagent[1522]: 2024-12-13T01:53:01.121613Z INFO Daemon Daemon Found private key matching thumbprint D66F373247A9C0292F01C86FEEE99EC9BD6DA8AB Dec 13 01:53:01.126458 waagent[1522]: 2024-12-13T01:53:01.126385Z INFO Daemon Daemon Certificate with thumbprint C7D55787252222B34189968218C85B6649517D37 has no matching private key. Dec 13 01:53:01.131394 waagent[1522]: 2024-12-13T01:53:01.131311Z INFO Daemon Daemon Fetch goal state completed Dec 13 01:53:01.184200 waagent[1522]: 2024-12-13T01:53:01.184110Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 76c0a867-40d8-44a6-ae51-e9c07c0fe088 New eTag: 9252563703313213604] Dec 13 01:53:01.191139 waagent[1522]: 2024-12-13T01:53:01.191045Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 01:53:01.205041 waagent[1522]: 2024-12-13T01:53:01.204967Z INFO Daemon Daemon Starting provisioning Dec 13 01:53:01.207576 waagent[1522]: 2024-12-13T01:53:01.207504Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 01:53:01.209768 waagent[1522]: 2024-12-13T01:53:01.209708Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-36ffbd9cb7] Dec 13 01:53:01.230639 waagent[1522]: 2024-12-13T01:53:01.230491Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-36ffbd9cb7] Dec 13 01:53:01.234404 waagent[1522]: 2024-12-13T01:53:01.234294Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 01:53:01.237847 waagent[1522]: 2024-12-13T01:53:01.237783Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 01:53:01.251799 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 01:53:01.252062 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 01:53:01.252139 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 01:53:01.252506 systemd[1]: Stopping systemd-networkd.service... 
Dec 13 01:53:01.256388 systemd-networkd[1181]: eth0: DHCPv6 lease lost Dec 13 01:53:01.257654 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:53:01.257815 systemd[1]: Stopped systemd-networkd.service. Dec 13 01:53:01.260112 systemd[1]: Starting systemd-networkd.service... Dec 13 01:53:01.291754 systemd-networkd[1575]: enP28924s1: Link UP Dec 13 01:53:01.291765 systemd-networkd[1575]: enP28924s1: Gained carrier Dec 13 01:53:01.293082 systemd-networkd[1575]: eth0: Link UP Dec 13 01:53:01.293091 systemd-networkd[1575]: eth0: Gained carrier Dec 13 01:53:01.293532 systemd-networkd[1575]: lo: Link UP Dec 13 01:53:01.293541 systemd-networkd[1575]: lo: Gained carrier Dec 13 01:53:01.293848 systemd-networkd[1575]: eth0: Gained IPv6LL Dec 13 01:53:01.294116 systemd-networkd[1575]: Enumeration completed Dec 13 01:53:01.294221 systemd[1]: Started systemd-networkd.service. Dec 13 01:53:01.296320 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:53:01.299361 waagent[1522]: 2024-12-13T01:53:01.298172Z INFO Daemon Daemon Create user account if not exists Dec 13 01:53:01.303234 waagent[1522]: 2024-12-13T01:53:01.303134Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 01:53:01.305443 systemd-networkd[1575]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:53:01.306026 waagent[1522]: 2024-12-13T01:53:01.305940Z INFO Daemon Daemon Configure sudoer Dec 13 01:53:01.308616 waagent[1522]: 2024-12-13T01:53:01.308527Z INFO Daemon Daemon Configure sshd Dec 13 01:53:01.310695 waagent[1522]: 2024-12-13T01:53:01.310634Z INFO Daemon Daemon Deploy ssh public key. Dec 13 01:53:01.336401 systemd-networkd[1575]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 01:53:01.339477 systemd[1]: Finished systemd-networkd-wait-online.service. 
Dec 13 01:53:02.467426 waagent[1522]: 2024-12-13T01:53:02.467298Z INFO Daemon Daemon Provisioning complete Dec 13 01:53:02.483824 waagent[1522]: 2024-12-13T01:53:02.483735Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 01:53:02.490109 waagent[1522]: 2024-12-13T01:53:02.484251Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 01:53:02.490109 waagent[1522]: 2024-12-13T01:53:02.485887Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 01:53:02.753464 waagent[1584]: 2024-12-13T01:53:02.753280Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 01:53:02.754185 waagent[1584]: 2024-12-13T01:53:02.754114Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:53:02.754331 waagent[1584]: 2024-12-13T01:53:02.754276Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:53:02.765875 waagent[1584]: 2024-12-13T01:53:02.765801Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 01:53:02.766036 waagent[1584]: 2024-12-13T01:53:02.765985Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 01:53:02.828225 waagent[1584]: 2024-12-13T01:53:02.828094Z INFO ExtHandler ExtHandler Found private key matching thumbprint D66F373247A9C0292F01C86FEEE99EC9BD6DA8AB Dec 13 01:53:02.828469 waagent[1584]: 2024-12-13T01:53:02.828404Z INFO ExtHandler ExtHandler Certificate with thumbprint C7D55787252222B34189968218C85B6649517D37 has no matching private key. 
Dec 13 01:53:02.828705 waagent[1584]: 2024-12-13T01:53:02.828655Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 01:53:02.842935 waagent[1584]: 2024-12-13T01:53:02.842872Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: deccd36a-dc76-4708-8989-d852d9a8a4f8 New eTag: 9252563703313213604] Dec 13 01:53:02.843471 waagent[1584]: 2024-12-13T01:53:02.843411Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 01:53:02.956471 waagent[1584]: 2024-12-13T01:53:02.956265Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:53:02.981668 waagent[1584]: 2024-12-13T01:53:02.981569Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1584 Dec 13 01:53:02.985166 waagent[1584]: 2024-12-13T01:53:02.985088Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:53:02.986403 waagent[1584]: 2024-12-13T01:53:02.986326Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:53:03.157236 waagent[1584]: 2024-12-13T01:53:03.157169Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:53:03.157668 waagent[1584]: 2024-12-13T01:53:03.157601Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:53:03.165646 waagent[1584]: 2024-12-13T01:53:03.165590Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Dec 13 01:53:03.166099 waagent[1584]: 2024-12-13T01:53:03.166041Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 01:53:03.167131 waagent[1584]: 2024-12-13T01:53:03.167068Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 01:53:03.168404 waagent[1584]: 2024-12-13T01:53:03.168327Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:53:03.169158 waagent[1584]: 2024-12-13T01:53:03.169102Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:53:03.169239 waagent[1584]: 2024-12-13T01:53:03.169190Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:53:03.169766 waagent[1584]: 2024-12-13T01:53:03.169712Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:53:03.170105 waagent[1584]: 2024-12-13T01:53:03.170044Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:53:03.170273 waagent[1584]: 2024-12-13T01:53:03.170224Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:53:03.170509 waagent[1584]: 2024-12-13T01:53:03.170456Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:53:03.171626 waagent[1584]: 2024-12-13T01:53:03.171572Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Dec 13 01:53:03.171742 waagent[1584]: 2024-12-13T01:53:03.171672Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:53:03.171921 waagent[1584]: 2024-12-13T01:53:03.171855Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:53:03.172225 waagent[1584]: 2024-12-13T01:53:03.172157Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 01:53:03.172918 waagent[1584]: 2024-12-13T01:53:03.172864Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:53:03.173325 waagent[1584]: 2024-12-13T01:53:03.173270Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:53:03.173420 waagent[1584]: 2024-12-13T01:53:03.173362Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:53:03.173420 waagent[1584]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:53:03.173420 waagent[1584]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:53:03.173420 waagent[1584]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:53:03.173420 waagent[1584]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:53:03.173420 waagent[1584]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:53:03.173420 waagent[1584]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:53:03.176219 waagent[1584]: 2024-12-13T01:53:03.176022Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:53:03.176389 waagent[1584]: 2024-12-13T01:53:03.176322Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:53:03.190266 waagent[1584]: 2024-12-13T01:53:03.190217Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 01:53:03.191196 waagent[1584]: 2024-12-13T01:53:03.191151Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 01:53:03.192728 waagent[1584]: 
2024-12-13T01:53:03.192681Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Dec 13 01:53:03.233862 waagent[1584]: 2024-12-13T01:53:03.233784Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Dec 13 01:53:03.242190 waagent[1584]: 2024-12-13T01:53:03.242126Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1575' Dec 13 01:53:03.265214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:53:03.265489 systemd[1]: Stopped kubelet.service. Dec 13 01:53:03.265542 systemd[1]: kubelet.service: Consumed 1.176s CPU time. Dec 13 01:53:03.267167 systemd[1]: Starting kubelet.service... Dec 13 01:53:03.384773 systemd[1]: Started kubelet.service. Dec 13 01:53:03.914056 waagent[1584]: 2024-12-13T01:53:03.913912Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:53:03.914056 waagent[1584]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:53:03.914056 waagent[1584]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:53:03.914056 waagent[1584]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:95:2a brd ff:ff:ff:ff:ff:ff Dec 13 01:53:03.914056 waagent[1584]: 3: enP28924s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:95:2a brd ff:ff:ff:ff:ff:ff\ altname enP28924p0s2 Dec 13 01:53:03.914056 waagent[1584]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:53:03.914056 waagent[1584]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:53:03.914056 waagent[1584]: 2: eth0 inet 10.200.8.16/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:53:03.914056 waagent[1584]: Executing ['ip', '-6', 
'-a', '-o', 'address']: Dec 13 01:53:03.914056 waagent[1584]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 01:53:03.914056 waagent[1584]: 2: eth0 inet6 fe80::6245:bdff:fee1:952a/64 scope link \ valid_lft forever preferred_lft forever Dec 13 01:53:03.937640 kubelet[1613]: E1213 01:53:03.937583 1613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:03.941543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:03.941696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:04.156934 waagent[1584]: 2024-12-13T01:53:04.156860Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 01:53:04.491065 waagent[1522]: 2024-12-13T01:53:04.490883Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 01:53:04.496944 waagent[1522]: 2024-12-13T01:53:04.496879Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 01:53:05.533463 waagent[1633]: 2024-12-13T01:53:05.533350Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 01:53:05.534183 waagent[1633]: 2024-12-13T01:53:05.534113Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 01:53:05.534333 waagent[1633]: 2024-12-13T01:53:05.534278Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 01:53:05.534500 waagent[1633]: 2024-12-13T01:53:05.534451Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 13 01:53:05.543785 waagent[1633]: 2024-12-13T01:53:05.543685Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; 
OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:53:05.544164 waagent[1633]: 2024-12-13T01:53:05.544107Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:53:05.544326 waagent[1633]: 2024-12-13T01:53:05.544278Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:53:05.556163 waagent[1633]: 2024-12-13T01:53:05.556089Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:53:05.565196 waagent[1633]: 2024-12-13T01:53:05.565129Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 01:53:05.566147 waagent[1633]: 2024-12-13T01:53:05.566086Z INFO ExtHandler Dec 13 01:53:05.566298 waagent[1633]: 2024-12-13T01:53:05.566244Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4e2e9dee-0d74-42c2-869b-d42db36d9b77 eTag: 9252563703313213604 source: Fabric] Dec 13 01:53:05.567005 waagent[1633]: 2024-12-13T01:53:05.566948Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 01:53:05.568091 waagent[1633]: 2024-12-13T01:53:05.568031Z INFO ExtHandler Dec 13 01:53:05.568229 waagent[1633]: 2024-12-13T01:53:05.568178Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:53:05.574899 waagent[1633]: 2024-12-13T01:53:05.574847Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 01:53:05.575319 waagent[1633]: 2024-12-13T01:53:05.575272Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 01:53:05.593945 waagent[1633]: 2024-12-13T01:53:05.593880Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Dec 13 01:53:05.658681 waagent[1633]: 2024-12-13T01:53:05.658550Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D66F373247A9C0292F01C86FEEE99EC9BD6DA8AB', 'hasPrivateKey': True} Dec 13 01:53:05.659649 waagent[1633]: 2024-12-13T01:53:05.659581Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C7D55787252222B34189968218C85B6649517D37', 'hasPrivateKey': False} Dec 13 01:53:05.660616 waagent[1633]: 2024-12-13T01:53:05.660556Z INFO ExtHandler Fetch goal state completed Dec 13 01:53:05.681366 waagent[1633]: 2024-12-13T01:53:05.681261Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 01:53:05.692618 waagent[1633]: 2024-12-13T01:53:05.692533Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1633 Dec 13 01:53:05.695684 waagent[1633]: 2024-12-13T01:53:05.695623Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:53:05.696645 waagent[1633]: 2024-12-13T01:53:05.696586Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 01:53:05.696929 waagent[1633]: 2024-12-13T01:53:05.696872Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 01:53:05.698866 waagent[1633]: 2024-12-13T01:53:05.698809Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:53:05.703474 waagent[1633]: 2024-12-13T01:53:05.703420Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:53:05.703856 waagent[1633]: 2024-12-13T01:53:05.703799Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:53:05.711849 waagent[1633]: 2024-12-13T01:53:05.711796Z INFO ExtHandler ExtHandler Service: 
waagent-network-setup.service not enabled. Adding it now Dec 13 01:53:05.712294 waagent[1633]: 2024-12-13T01:53:05.712239Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 01:53:05.718065 waagent[1633]: 2024-12-13T01:53:05.717974Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 01:53:05.719091 waagent[1633]: 2024-12-13T01:53:05.719024Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 01:53:05.720507 waagent[1633]: 2024-12-13T01:53:05.720449Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:53:05.720918 waagent[1633]: 2024-12-13T01:53:05.720863Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:53:05.721675 waagent[1633]: 2024-12-13T01:53:05.721618Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:53:05.722213 waagent[1633]: 2024-12-13T01:53:05.722158Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Dec 13 01:53:05.722552 waagent[1633]: 2024-12-13T01:53:05.722496Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:53:05.722552 waagent[1633]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:53:05.722552 waagent[1633]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:53:05.722552 waagent[1633]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:53:05.722552 waagent[1633]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:53:05.722552 waagent[1633]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:53:05.722552 waagent[1633]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:53:05.725844 waagent[1633]: 2024-12-13T01:53:05.725768Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:53:05.725982 waagent[1633]: 2024-12-13T01:53:05.725376Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:53:05.726407 waagent[1633]: 2024-12-13T01:53:05.726318Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:53:05.727014 waagent[1633]: 2024-12-13T01:53:05.726936Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:53:05.727286 waagent[1633]: 2024-12-13T01:53:05.727236Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:53:05.727451 waagent[1633]: 2024-12-13T01:53:05.727404Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:53:05.728783 waagent[1633]: 2024-12-13T01:53:05.728729Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:53:05.731419 waagent[1633]: 2024-12-13T01:53:05.731269Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:53:05.732697 waagent[1633]: 2024-12-13T01:53:05.732640Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:53:05.732824 waagent[1633]: 2024-12-13T01:53:05.732767Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Dec 13 01:53:05.736024 waagent[1633]: 2024-12-13T01:53:05.735953Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:53:05.751089 waagent[1633]: 2024-12-13T01:53:05.751025Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 01:53:05.753164 waagent[1633]: 2024-12-13T01:53:05.753089Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:53:05.753164 waagent[1633]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:53:05.753164 waagent[1633]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:53:05.753164 waagent[1633]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:95:2a brd ff:ff:ff:ff:ff:ff Dec 13 01:53:05.753164 waagent[1633]: 3: enP28924s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:95:2a brd ff:ff:ff:ff:ff:ff\ altname enP28924p0s2 Dec 13 01:53:05.753164 waagent[1633]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:53:05.753164 waagent[1633]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:53:05.753164 waagent[1633]: 2: eth0 inet 10.200.8.16/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:53:05.753164 waagent[1633]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:53:05.753164 waagent[1633]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 01:53:05.753164 waagent[1633]: 2: eth0 inet6 fe80::6245:bdff:fee1:952a/64 scope link \ valid_lft forever preferred_lft forever Dec 13 01:53:05.795421 waagent[1633]: 2024-12-13T01:53:05.795302Z INFO ExtHandler ExtHandler Dec 13 01:53:05.795694 waagent[1633]: 2024-12-13T01:53:05.795647Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState 
started [incarnation_1 channel: WireServer source: Fabric activity: 4dfe7cc0-59d0-4b40-a05c-1c780f843207 correlation debeeb48-df42-4326-bc5b-f12df6dd96f3 created: 2024-12-13T01:51:27.736244Z] Dec 13 01:53:05.797024 waagent[1633]: 2024-12-13T01:53:05.796973Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 01:53:05.798875 waagent[1633]: 2024-12-13T01:53:05.798827Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Dec 13 01:53:05.825777 waagent[1633]: 2024-12-13T01:53:05.825705Z INFO ExtHandler ExtHandler Looking for existing remote access users. Dec 13 01:53:05.848752 waagent[1633]: 2024-12-13T01:53:05.848624Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 43F64F3B-BADD-43A9-84AC-B3C07531644E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 01:53:05.863530 waagent[1633]: 2024-12-13T01:53:05.863429Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 13 01:53:05.863530 waagent[1633]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:53:05.863530 waagent[1633]: pkts bytes target prot opt in out source destination Dec 13 01:53:05.863530 waagent[1633]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:53:05.863530 waagent[1633]: pkts bytes target prot opt in out source destination Dec 13 01:53:05.863530 waagent[1633]: Chain OUTPUT (policy ACCEPT 7 packets, 936 bytes) Dec 13 01:53:05.863530 waagent[1633]: pkts bytes target prot opt in out source destination Dec 13 01:53:05.863530 waagent[1633]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:53:05.863530 waagent[1633]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:53:05.863530 waagent[1633]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:53:05.870529 waagent[1633]: 2024-12-13T01:53:05.870430Z INFO EnvHandler 
ExtHandler Current Firewall rules: Dec 13 01:53:05.870529 waagent[1633]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:53:05.870529 waagent[1633]: pkts bytes target prot opt in out source destination Dec 13 01:53:05.870529 waagent[1633]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:53:05.870529 waagent[1633]: pkts bytes target prot opt in out source destination Dec 13 01:53:05.870529 waagent[1633]: Chain OUTPUT (policy ACCEPT 7 packets, 936 bytes) Dec 13 01:53:05.870529 waagent[1633]: pkts bytes target prot opt in out source destination Dec 13 01:53:05.870529 waagent[1633]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:53:05.870529 waagent[1633]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:53:05.870529 waagent[1633]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:53:05.871064 waagent[1633]: 2024-12-13T01:53:05.871012Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:53:14.015485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:53:14.015804 systemd[1]: Stopped kubelet.service. Dec 13 01:53:14.017848 systemd[1]: Starting kubelet.service... Dec 13 01:53:14.201644 systemd[1]: Started kubelet.service. Dec 13 01:53:14.636942 kubelet[1685]: E1213 01:53:14.636884 1685 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:14.638842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:14.639002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:24.765581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Dec 13 01:53:24.765903 systemd[1]: Stopped kubelet.service. Dec 13 01:53:24.767968 systemd[1]: Starting kubelet.service... Dec 13 01:53:25.093257 systemd[1]: Started kubelet.service. Dec 13 01:53:25.372771 kubelet[1695]: E1213 01:53:25.372653 1695 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:25.374745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:25.374904 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:29.972437 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 13 01:53:34.984526 systemd[1]: Created slice system-sshd.slice. Dec 13 01:53:34.986765 systemd[1]: Started sshd@0-10.200.8.16:22-10.200.16.10:40522.service. Dec 13 01:53:35.515518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:53:35.515810 systemd[1]: Stopped kubelet.service. Dec 13 01:53:35.517831 systemd[1]: Starting kubelet.service... Dec 13 01:53:35.923959 sshd[1702]: Accepted publickey for core from 10.200.16.10 port 40522 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:35.926186 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:35.933078 systemd[1]: Started session-3.scope. Dec 13 01:53:35.933828 systemd-logind[1410]: New session 3 of user core. Dec 13 01:53:36.026101 systemd[1]: Started kubelet.service. 
Dec 13 01:53:36.123033 kubelet[1709]: E1213 01:53:36.122978 1709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:36.124890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:36.125003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:36.466238 systemd[1]: Started sshd@1-10.200.8.16:22-10.200.16.10:40524.service. Dec 13 01:53:37.091122 sshd[1717]: Accepted publickey for core from 10.200.16.10 port 40524 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:37.092826 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:37.098516 systemd[1]: Started session-4.scope. Dec 13 01:53:37.098938 systemd-logind[1410]: New session 4 of user core. Dec 13 01:53:37.240097 update_engine[1412]: I1213 01:53:37.240010 1412 update_attempter.cc:509] Updating boot flags... Dec 13 01:53:37.547581 sshd[1717]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:37.550790 systemd[1]: sshd@1-10.200.8.16:22-10.200.16.10:40524.service: Deactivated successfully. Dec 13 01:53:37.551774 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:53:37.552601 systemd-logind[1410]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:53:37.553568 systemd-logind[1410]: Removed session 4. Dec 13 01:53:37.651739 systemd[1]: Started sshd@2-10.200.8.16:22-10.200.16.10:40532.service. 
Dec 13 01:53:38.280034 sshd[1789]: Accepted publickey for core from 10.200.16.10 port 40532 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:38.281646 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:38.286278 systemd[1]: Started session-5.scope. Dec 13 01:53:38.287017 systemd-logind[1410]: New session 5 of user core. Dec 13 01:53:38.722074 sshd[1789]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:38.725533 systemd[1]: sshd@2-10.200.8.16:22-10.200.16.10:40532.service: Deactivated successfully. Dec 13 01:53:38.726499 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:53:38.727288 systemd-logind[1410]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:53:38.728247 systemd-logind[1410]: Removed session 5. Dec 13 01:53:38.827058 systemd[1]: Started sshd@3-10.200.8.16:22-10.200.16.10:59522.service. Dec 13 01:53:39.467473 sshd[1795]: Accepted publickey for core from 10.200.16.10 port 59522 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:39.469122 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:39.474440 systemd-logind[1410]: New session 6 of user core. Dec 13 01:53:39.475071 systemd[1]: Started session-6.scope. Dec 13 01:53:39.916170 sshd[1795]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:39.919399 systemd[1]: sshd@3-10.200.8.16:22-10.200.16.10:59522.service: Deactivated successfully. Dec 13 01:53:39.920329 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:53:39.921117 systemd-logind[1410]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:53:39.922043 systemd-logind[1410]: Removed session 6. Dec 13 01:53:40.020901 systemd[1]: Started sshd@4-10.200.8.16:22-10.200.16.10:59536.service. 
Dec 13 01:53:40.649487 sshd[1801]: Accepted publickey for core from 10.200.16.10 port 59536 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M Dec 13 01:53:40.651157 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:40.656861 systemd[1]: Started session-7.scope. Dec 13 01:53:40.657311 systemd-logind[1410]: New session 7 of user core. Dec 13 01:53:41.238084 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:53:41.238474 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:53:41.252877 systemd[1]: Starting coreos-metadata.service... Dec 13 01:53:41.327785 coreos-metadata[1808]: Dec 13 01:53:41.327 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:53:41.330576 coreos-metadata[1808]: Dec 13 01:53:41.330 INFO Fetch successful Dec 13 01:53:41.330837 coreos-metadata[1808]: Dec 13 01:53:41.330 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 01:53:41.332002 coreos-metadata[1808]: Dec 13 01:53:41.331 INFO Fetch successful Dec 13 01:53:41.332428 coreos-metadata[1808]: Dec 13 01:53:41.332 INFO Fetching http://168.63.129.16/machine/f7d8820b-926f-4ba5-af60-f83bb12b2202/3aaf33f8%2D9f14%2D45a2%2D9342%2D00aa54f8d3b8.%5Fci%2D3510.3.6%2Da%2D36ffbd9cb7?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 01:53:41.333779 coreos-metadata[1808]: Dec 13 01:53:41.333 INFO Fetch successful Dec 13 01:53:41.366586 coreos-metadata[1808]: Dec 13 01:53:41.366 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:53:41.379141 coreos-metadata[1808]: Dec 13 01:53:41.379 INFO Fetch successful Dec 13 01:53:41.387903 systemd[1]: Finished coreos-metadata.service. Dec 13 01:53:45.202778 systemd[1]: Stopped kubelet.service. Dec 13 01:53:45.205973 systemd[1]: Starting kubelet.service... Dec 13 01:53:45.235846 systemd[1]: Reloading. 
Dec 13 01:53:45.324376 /usr/lib/systemd/system-generators/torcx-generator[1870]: time="2024-12-13T01:53:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:53:45.324413 /usr/lib/systemd/system-generators/torcx-generator[1870]: time="2024-12-13T01:53:45Z" level=info msg="torcx already run" Dec 13 01:53:45.432726 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:53:45.432746 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:53:45.449046 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:45.557954 systemd[1]: Started kubelet.service. Dec 13 01:53:45.561240 systemd[1]: Stopping kubelet.service... Dec 13 01:53:45.561734 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:53:45.561933 systemd[1]: Stopped kubelet.service. Dec 13 01:53:45.563736 systemd[1]: Starting kubelet.service... Dec 13 01:53:45.780800 systemd[1]: Started kubelet.service. Dec 13 01:53:45.821055 kubelet[1940]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:53:45.821055 kubelet[1940]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:53:45.821055 kubelet[1940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:53:45.821055 kubelet[1940]: I1213 01:53:45.820625 1940 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:53:46.410206 kubelet[1940]: I1213 01:53:46.410160 1940 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:53:46.410206 kubelet[1940]: I1213 01:53:46.410196 1940 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:53:46.410564 kubelet[1940]: I1213 01:53:46.410536 1940 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:53:46.441740 kubelet[1940]: I1213 01:53:46.440824 1940 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:53:46.454051 kubelet[1940]: I1213 01:53:46.454021 1940 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:53:46.454297 kubelet[1940]: I1213 01:53:46.454274 1940 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:53:46.454521 kubelet[1940]: I1213 01:53:46.454498 1940 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:53:46.454694 kubelet[1940]: I1213 01:53:46.454533 1940 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:53:46.454694 kubelet[1940]: I1213 01:53:46.454546 1940 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:53:46.454694 kubelet[1940]: I1213 
01:53:46.454657 1940 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:53:46.454830 kubelet[1940]: I1213 01:53:46.454759 1940 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:53:46.454830 kubelet[1940]: I1213 01:53:46.454777 1940 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:53:46.454830 kubelet[1940]: I1213 01:53:46.454805 1940 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:53:46.454830 kubelet[1940]: I1213 01:53:46.454824 1940 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:53:46.455309 kubelet[1940]: E1213 01:53:46.455291 1940 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:46.455479 kubelet[1940]: E1213 01:53:46.455467 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:46.456021 kubelet[1940]: I1213 01:53:46.456001 1940 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:53:46.459269 kubelet[1940]: I1213 01:53:46.459244 1940 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:53:46.462097 kubelet[1940]: W1213 01:53:46.462045 1940 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:53:46.462323 kubelet[1940]: W1213 01:53:46.462304 1940 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.200.8.16" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:53:46.462447 kubelet[1940]: E1213 01:53:46.462435 1940 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.16" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:53:46.462643 kubelet[1940]: W1213 01:53:46.462627 1940 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:53:46.462740 kubelet[1940]: I1213 01:53:46.462719 1940 server.go:1256] "Started kubelet" Dec 13 01:53:46.462800 kubelet[1940]: E1213 01:53:46.462791 1940 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:53:46.462986 kubelet[1940]: I1213 01:53:46.462967 1940 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:53:46.463889 kubelet[1940]: I1213 01:53:46.463863 1940 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:53:46.465081 kubelet[1940]: I1213 01:53:46.464962 1940 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:53:46.465209 kubelet[1940]: I1213 01:53:46.465189 1940 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:53:46.476953 kubelet[1940]: E1213 01:53:46.476940 1940 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:53:46.479679 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 01:53:46.479863 kubelet[1940]: I1213 01:53:46.479841 1940 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:53:46.483829 kubelet[1940]: E1213 01:53:46.483815 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:46.483957 kubelet[1940]: I1213 01:53:46.483946 1940 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:53:46.484116 kubelet[1940]: I1213 01:53:46.484102 1940 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:53:46.484232 kubelet[1940]: I1213 01:53:46.484223 1940 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:53:46.485077 kubelet[1940]: I1213 01:53:46.485063 1940 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:53:46.485273 kubelet[1940]: I1213 01:53:46.485252 1940 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:53:46.493147 kubelet[1940]: I1213 01:53:46.493115 1940 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:53:46.506716 kubelet[1940]: E1213 01:53:46.506694 1940 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.16\" not found" node="10.200.8.16" Dec 13 01:53:46.509143 kubelet[1940]: I1213 01:53:46.509129 1940 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:53:46.509253 kubelet[1940]: I1213 01:53:46.509244 1940 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:53:46.509327 kubelet[1940]: I1213 01:53:46.509319 1940 state_mem.go:36] "Initialized new in-memory state 
store" Dec 13 01:53:46.513919 kubelet[1940]: I1213 01:53:46.513902 1940 policy_none.go:49] "None policy: Start" Dec 13 01:53:46.515120 kubelet[1940]: I1213 01:53:46.515100 1940 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:53:46.515203 kubelet[1940]: I1213 01:53:46.515138 1940 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:53:46.516156 kubelet[1940]: I1213 01:53:46.516133 1940 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:53:46.517121 kubelet[1940]: I1213 01:53:46.517100 1940 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:53:46.517121 kubelet[1940]: I1213 01:53:46.517123 1940 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:53:46.519590 kubelet[1940]: I1213 01:53:46.517142 1940 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:53:46.519590 kubelet[1940]: E1213 01:53:46.517187 1940 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:53:46.525965 systemd[1]: Created slice kubepods.slice. Dec 13 01:53:46.530061 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 01:53:46.533475 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 01:53:46.542038 kubelet[1940]: I1213 01:53:46.542016 1940 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:53:46.542389 kubelet[1940]: I1213 01:53:46.542240 1940 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:53:46.544969 kubelet[1940]: E1213 01:53:46.544919 1940 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.16\" not found" Dec 13 01:53:46.585851 kubelet[1940]: I1213 01:53:46.585817 1940 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.16" Dec 13 01:53:46.591154 kubelet[1940]: I1213 01:53:46.591119 1940 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.16" Dec 13 01:53:46.599611 kubelet[1940]: E1213 01:53:46.599586 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:46.700330 kubelet[1940]: E1213 01:53:46.700199 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:46.800704 kubelet[1940]: E1213 01:53:46.800666 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:46.901682 kubelet[1940]: E1213 01:53:46.901625 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.002787 kubelet[1940]: E1213 01:53:47.002671 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.103594 kubelet[1940]: E1213 01:53:47.103538 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.204363 kubelet[1940]: E1213 01:53:47.204309 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"10.200.8.16\" not found" Dec 13 01:53:47.301557 sudo[1804]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:47.304989 kubelet[1940]: E1213 01:53:47.304960 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.405939 kubelet[1940]: E1213 01:53:47.405842 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.413119 kubelet[1940]: I1213 01:53:47.413084 1940 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:53:47.413442 kubelet[1940]: W1213 01:53:47.413351 1940 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:53:47.413442 kubelet[1940]: W1213 01:53:47.413358 1940 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:53:47.418178 sshd[1801]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:47.421554 systemd[1]: sshd@4-10.200.8.16:22-10.200.16.10:59536.service: Deactivated successfully. Dec 13 01:53:47.422568 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:53:47.423464 systemd-logind[1410]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:53:47.424297 systemd-logind[1410]: Removed session 7. 
Dec 13 01:53:47.456486 kubelet[1940]: E1213 01:53:47.456439 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:47.506503 kubelet[1940]: E1213 01:53:47.506444 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.607115 kubelet[1940]: E1213 01:53:47.606972 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.707872 kubelet[1940]: E1213 01:53:47.707821 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.808435 kubelet[1940]: E1213 01:53:47.808391 1940 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.16\" not found" Dec 13 01:53:47.910038 kubelet[1940]: I1213 01:53:47.909936 1940 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:53:47.911062 env[1422]: time="2024-12-13T01:53:47.911014940Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:53:47.911592 kubelet[1940]: I1213 01:53:47.911566 1940 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:53:48.457365 kubelet[1940]: E1213 01:53:48.457298 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:48.457555 kubelet[1940]: I1213 01:53:48.457397 1940 apiserver.go:52] "Watching apiserver" Dec 13 01:53:48.462079 kubelet[1940]: I1213 01:53:48.462042 1940 topology_manager.go:215] "Topology Admit Handler" podUID="c5b0d8a2-f019-4323-a054-72c1d0137110" podNamespace="kube-system" podName="cilium-2smzx" Dec 13 01:53:48.462239 kubelet[1940]: I1213 01:53:48.462199 1940 topology_manager.go:215] "Topology Admit Handler" podUID="62a5c6bc-09cb-4d80-9f53-40f42f722298" podNamespace="kube-system" podName="kube-proxy-wpmgx" Dec 13 01:53:48.469643 systemd[1]: Created slice kubepods-besteffort-pod62a5c6bc_09cb_4d80_9f53_40f42f722298.slice. Dec 13 01:53:48.479558 systemd[1]: Created slice kubepods-burstable-podc5b0d8a2_f019_4323_a054_72c1d0137110.slice. 
Dec 13 01:53:48.485108 kubelet[1940]: I1213 01:53:48.485086 1940 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:53:48.497191 kubelet[1940]: I1213 01:53:48.497166 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cni-path\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497316 kubelet[1940]: I1213 01:53:48.497210 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5b0d8a2-f019-4323-a054-72c1d0137110-clustermesh-secrets\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497316 kubelet[1940]: I1213 01:53:48.497238 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-host-proc-sys-net\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497316 kubelet[1940]: I1213 01:53:48.497263 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62a5c6bc-09cb-4d80-9f53-40f42f722298-lib-modules\") pod \"kube-proxy-wpmgx\" (UID: \"62a5c6bc-09cb-4d80-9f53-40f42f722298\") " pod="kube-system/kube-proxy-wpmgx" Dec 13 01:53:48.497316 kubelet[1940]: I1213 01:53:48.497291 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-xtables-lock\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") 
" pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497316 kubelet[1940]: I1213 01:53:48.497315 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5b0d8a2-f019-4323-a054-72c1d0137110-hubble-tls\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497543 kubelet[1940]: I1213 01:53:48.497366 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62a5c6bc-09cb-4d80-9f53-40f42f722298-kube-proxy\") pod \"kube-proxy-wpmgx\" (UID: \"62a5c6bc-09cb-4d80-9f53-40f42f722298\") " pod="kube-system/kube-proxy-wpmgx" Dec 13 01:53:48.497543 kubelet[1940]: I1213 01:53:48.497402 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddsk\" (UniqueName: \"kubernetes.io/projected/62a5c6bc-09cb-4d80-9f53-40f42f722298-kube-api-access-zddsk\") pod \"kube-proxy-wpmgx\" (UID: \"62a5c6bc-09cb-4d80-9f53-40f42f722298\") " pod="kube-system/kube-proxy-wpmgx" Dec 13 01:53:48.497543 kubelet[1940]: I1213 01:53:48.497431 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-run\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497543 kubelet[1940]: I1213 01:53:48.497472 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-bpf-maps\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497543 kubelet[1940]: I1213 01:53:48.497499 1940 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-hostproc\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497543 kubelet[1940]: I1213 01:53:48.497539 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-config-path\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497768 kubelet[1940]: I1213 01:53:48.497569 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-host-proc-sys-kernel\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497768 kubelet[1940]: I1213 01:53:48.497597 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jvh\" (UniqueName: \"kubernetes.io/projected/c5b0d8a2-f019-4323-a054-72c1d0137110-kube-api-access-d6jvh\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497768 kubelet[1940]: I1213 01:53:48.497629 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-cgroup\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497768 kubelet[1940]: I1213 01:53:48.497657 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-etc-cni-netd\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497768 kubelet[1940]: I1213 01:53:48.497692 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-lib-modules\") pod \"cilium-2smzx\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") " pod="kube-system/cilium-2smzx" Dec 13 01:53:48.497968 kubelet[1940]: I1213 01:53:48.497725 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62a5c6bc-09cb-4d80-9f53-40f42f722298-xtables-lock\") pod \"kube-proxy-wpmgx\" (UID: \"62a5c6bc-09cb-4d80-9f53-40f42f722298\") " pod="kube-system/kube-proxy-wpmgx" Dec 13 01:53:48.778058 env[1422]: time="2024-12-13T01:53:48.778014441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpmgx,Uid:62a5c6bc-09cb-4d80-9f53-40f42f722298,Namespace:kube-system,Attempt:0,}" Dec 13 01:53:48.784732 env[1422]: time="2024-12-13T01:53:48.784702656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2smzx,Uid:c5b0d8a2-f019-4323-a054-72c1d0137110,Namespace:kube-system,Attempt:0,}" Dec 13 01:53:49.458078 kubelet[1940]: E1213 01:53:49.458039 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:50.458326 kubelet[1940]: E1213 01:53:50.458288 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:51.459165 kubelet[1940]: E1213 01:53:51.459103 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:51.913120 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1187274320.mount: Deactivated successfully. Dec 13 01:53:51.935302 env[1422]: time="2024-12-13T01:53:51.935251577Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:51.938428 env[1422]: time="2024-12-13T01:53:51.938393437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:51.950232 env[1422]: time="2024-12-13T01:53:51.950193461Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:51.953384 env[1422]: time="2024-12-13T01:53:51.953332420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:51.957224 env[1422]: time="2024-12-13T01:53:51.957190493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:51.959705 env[1422]: time="2024-12-13T01:53:51.959673340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:51.963434 env[1422]: time="2024-12-13T01:53:51.963403811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:51.967487 env[1422]: time="2024-12-13T01:53:51.967456588Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:52.043133 env[1422]: time="2024-12-13T01:53:52.043069998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:52.043423 env[1422]: time="2024-12-13T01:53:52.043310003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:52.043423 env[1422]: time="2024-12-13T01:53:52.043335803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:52.043771 env[1422]: time="2024-12-13T01:53:52.043725110Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1 pid=1984 runtime=io.containerd.runc.v2 Dec 13 01:53:52.055413 env[1422]: time="2024-12-13T01:53:52.055329424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:52.055559 env[1422]: time="2024-12-13T01:53:52.055419326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:52.055559 env[1422]: time="2024-12-13T01:53:52.055448426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:52.055654 env[1422]: time="2024-12-13T01:53:52.055581429Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bda28e6f18960c6064ce0c376f2d68726a61e4dccdbd51675ac4e0f3a482338 pid=2001 runtime=io.containerd.runc.v2 Dec 13 01:53:52.064873 systemd[1]: Started cri-containerd-70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1.scope. Dec 13 01:53:52.093153 systemd[1]: Started cri-containerd-1bda28e6f18960c6064ce0c376f2d68726a61e4dccdbd51675ac4e0f3a482338.scope. Dec 13 01:53:52.107481 env[1422]: time="2024-12-13T01:53:52.107369383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2smzx,Uid:c5b0d8a2-f019-4323-a054-72c1d0137110,Namespace:kube-system,Attempt:0,} returns sandbox id \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\"" Dec 13 01:53:52.110592 env[1422]: time="2024-12-13T01:53:52.110557542Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:53:52.121397 env[1422]: time="2024-12-13T01:53:52.121360441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpmgx,Uid:62a5c6bc-09cb-4d80-9f53-40f42f722298,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bda28e6f18960c6064ce0c376f2d68726a61e4dccdbd51675ac4e0f3a482338\"" Dec 13 01:53:52.460060 kubelet[1940]: E1213 01:53:52.459993 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:53.460858 kubelet[1940]: E1213 01:53:53.460795 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:54.461607 kubelet[1940]: E1213 01:53:54.461565 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:55.462285 
kubelet[1940]: E1213 01:53:55.462223 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:56.463131 kubelet[1940]: E1213 01:53:56.463063 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:57.463998 kubelet[1940]: E1213 01:53:57.463942 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:57.809581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348552976.mount: Deactivated successfully. Dec 13 01:53:58.464322 kubelet[1940]: E1213 01:53:58.464275 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:59.464661 kubelet[1940]: E1213 01:53:59.464619 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:00.465004 kubelet[1940]: E1213 01:54:00.464953 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:00.490041 env[1422]: time="2024-12-13T01:54:00.489983217Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:00.495527 env[1422]: time="2024-12-13T01:54:00.495395997Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:00.500621 env[1422]: time="2024-12-13T01:54:00.500586374Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:00.501055 env[1422]: time="2024-12-13T01:54:00.501023180Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:54:00.502384 env[1422]: time="2024-12-13T01:54:00.502352200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:54:00.503438 env[1422]: time="2024-12-13T01:54:00.503407816Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:54:00.549175 env[1422]: time="2024-12-13T01:54:00.549130792Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\"" Dec 13 01:54:00.549909 env[1422]: time="2024-12-13T01:54:00.549878703Z" level=info msg="StartContainer for \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\"" Dec 13 01:54:00.569626 systemd[1]: Started cri-containerd-4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9.scope. Dec 13 01:54:00.609577 env[1422]: time="2024-12-13T01:54:00.609537585Z" level=info msg="StartContainer for \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\" returns successfully" Dec 13 01:54:00.618153 systemd[1]: cri-containerd-4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9.scope: Deactivated successfully. 
Dec 13 01:54:01.466060 kubelet[1940]: E1213 01:54:01.465997 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:01.531690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9-rootfs.mount: Deactivated successfully. Dec 13 01:54:02.466274 kubelet[1940]: E1213 01:54:02.466206 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:04.270477 kubelet[1940]: E1213 01:54:03.467329 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:04.318797 env[1422]: time="2024-12-13T01:54:04.318737568Z" level=info msg="shim disconnected" id=4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9 Dec 13 01:54:04.318797 env[1422]: time="2024-12-13T01:54:04.318792568Z" level=warning msg="cleaning up after shim disconnected" id=4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9 namespace=k8s.io Dec 13 01:54:04.319375 env[1422]: time="2024-12-13T01:54:04.318807668Z" level=info msg="cleaning up dead shim" Dec 13 01:54:04.327412 env[1422]: time="2024-12-13T01:54:04.327325282Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2110 runtime=io.containerd.runc.v2\n" Dec 13 01:54:04.467876 kubelet[1940]: E1213 01:54:04.467843 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:04.639663 env[1422]: time="2024-12-13T01:54:04.639294628Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:54:04.824692 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2338152029.mount: Deactivated successfully. Dec 13 01:54:04.837537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320773115.mount: Deactivated successfully. Dec 13 01:54:04.848170 env[1422]: time="2024-12-13T01:54:04.848122704Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\"" Dec 13 01:54:04.849030 env[1422]: time="2024-12-13T01:54:04.848996415Z" level=info msg="StartContainer for \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\"" Dec 13 01:54:04.872114 systemd[1]: Started cri-containerd-5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9.scope. Dec 13 01:54:04.919281 env[1422]: time="2024-12-13T01:54:04.918913745Z" level=info msg="StartContainer for \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\" returns successfully" Dec 13 01:54:04.926205 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:54:04.926560 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:54:04.926755 systemd[1]: Stopping systemd-sysctl.service... Dec 13 01:54:04.929982 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:54:04.933122 systemd[1]: cri-containerd-5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9.scope: Deactivated successfully. Dec 13 01:54:04.943172 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 01:54:05.021922 env[1422]: time="2024-12-13T01:54:05.021865706Z" level=info msg="shim disconnected" id=5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9
Dec 13 01:54:05.021922 env[1422]: time="2024-12-13T01:54:05.021922206Z" level=warning msg="cleaning up after shim disconnected" id=5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9 namespace=k8s.io
Dec 13 01:54:05.022219 env[1422]: time="2024-12-13T01:54:05.021934307Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:05.042080 env[1422]: time="2024-12-13T01:54:05.042030667Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2175 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:05.468594 kubelet[1940]: E1213 01:54:05.468547 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:05.650492 env[1422]: time="2024-12-13T01:54:05.650445143Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:54:05.694313 env[1422]: time="2024-12-13T01:54:05.694259210Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\""
Dec 13 01:54:05.695280 env[1422]: time="2024-12-13T01:54:05.695249023Z" level=info msg="StartContainer for \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\""
Dec 13 01:54:05.715453 systemd[1]: Started cri-containerd-7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc.scope.
Dec 13 01:54:05.763165 systemd[1]: cri-containerd-7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc.scope: Deactivated successfully.
Dec 13 01:54:05.769192 env[1422]: time="2024-12-13T01:54:05.769154280Z" level=info msg="StartContainer for \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\" returns successfully"
Dec 13 01:54:05.795693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526144477.mount: Deactivated successfully.
Dec 13 01:54:06.384063 env[1422]: time="2024-12-13T01:54:06.384004914Z" level=info msg="shim disconnected" id=7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc
Dec 13 01:54:06.384440 env[1422]: time="2024-12-13T01:54:06.384400419Z" level=warning msg="cleaning up after shim disconnected" id=7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc namespace=k8s.io
Dec 13 01:54:06.384583 env[1422]: time="2024-12-13T01:54:06.384566421Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:06.395083 env[1422]: time="2024-12-13T01:54:06.395045054Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2235 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:06.403465 env[1422]: time="2024-12-13T01:54:06.403431459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:06.411569 env[1422]: time="2024-12-13T01:54:06.411534862Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:06.415835 env[1422]: time="2024-12-13T01:54:06.415805715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:06.420490 env[1422]: time="2024-12-13T01:54:06.420458974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:06.420937 env[1422]: time="2024-12-13T01:54:06.420905880Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:54:06.423059 env[1422]: time="2024-12-13T01:54:06.423028907Z" level=info msg="CreateContainer within sandbox \"1bda28e6f18960c6064ce0c376f2d68726a61e4dccdbd51675ac4e0f3a482338\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:54:06.451082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509896321.mount: Deactivated successfully.
Dec 13 01:54:06.455782 kubelet[1940]: E1213 01:54:06.455741 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:06.469058 kubelet[1940]: E1213 01:54:06.469025 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:06.470082 env[1422]: time="2024-12-13T01:54:06.470047599Z" level=info msg="CreateContainer within sandbox \"1bda28e6f18960c6064ce0c376f2d68726a61e4dccdbd51675ac4e0f3a482338\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3e90038b3700744fb42f9c98e36174d61497d0c6c76f2d9a08fc16f7df927184\""
Dec 13 01:54:06.470567 env[1422]: time="2024-12-13T01:54:06.470536306Z" level=info msg="StartContainer for \"3e90038b3700744fb42f9c98e36174d61497d0c6c76f2d9a08fc16f7df927184\""
Dec 13 01:54:06.495122 systemd[1]: Started cri-containerd-3e90038b3700744fb42f9c98e36174d61497d0c6c76f2d9a08fc16f7df927184.scope.
Dec 13 01:54:06.532990 env[1422]: time="2024-12-13T01:54:06.532932492Z" level=info msg="StartContainer for \"3e90038b3700744fb42f9c98e36174d61497d0c6c76f2d9a08fc16f7df927184\" returns successfully"
Dec 13 01:54:06.656302 env[1422]: time="2024-12-13T01:54:06.655691241Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:54:06.663569 kubelet[1940]: I1213 01:54:06.663540 1940 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wpmgx" podStartSLOduration=6.36493352 podStartE2EDuration="20.663483439s" podCreationTimestamp="2024-12-13 01:53:46 +0000 UTC" firstStartedPulling="2024-12-13 01:53:52.122637764 +0000 UTC m=+6.336725860" lastFinishedPulling="2024-12-13 01:54:06.421187783 +0000 UTC m=+20.635275779" observedRunningTime="2024-12-13 01:54:06.663156435 +0000 UTC m=+20.877244431" watchObservedRunningTime="2024-12-13 01:54:06.663483439 +0000 UTC m=+20.877571535"
Dec 13 01:54:06.689929 env[1422]: time="2024-12-13T01:54:06.688568955Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\""
Dec 13 01:54:06.689929 env[1422]: time="2024-12-13T01:54:06.689464866Z" level=info msg="StartContainer for \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\""
Dec 13 01:54:06.709304 systemd[1]: Started cri-containerd-df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3.scope.
Dec 13 01:54:06.748353 systemd[1]: cri-containerd-df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3.scope: Deactivated successfully.
Dec 13 01:54:06.751421 env[1422]: time="2024-12-13T01:54:06.751380947Z" level=info msg="StartContainer for \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\" returns successfully"
Dec 13 01:54:06.783846 env[1422]: time="2024-12-13T01:54:06.783786856Z" level=info msg="shim disconnected" id=df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3
Dec 13 01:54:06.783846 env[1422]: time="2024-12-13T01:54:06.783847757Z" level=warning msg="cleaning up after shim disconnected" id=df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3 namespace=k8s.io
Dec 13 01:54:06.784146 env[1422]: time="2024-12-13T01:54:06.783861857Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:06.792944 env[1422]: time="2024-12-13T01:54:06.792906971Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2389 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:06.795074 systemd[1]: run-containerd-runc-k8s.io-3e90038b3700744fb42f9c98e36174d61497d0c6c76f2d9a08fc16f7df927184-runc.ei6sCD.mount: Deactivated successfully.
Dec 13 01:54:07.469667 kubelet[1940]: E1213 01:54:07.469628 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:07.660411 env[1422]: time="2024-12-13T01:54:07.660369398Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:54:07.691373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779682985.mount: Deactivated successfully.
Dec 13 01:54:07.699093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2335222885.mount: Deactivated successfully.
Dec 13 01:54:07.707975 env[1422]: time="2024-12-13T01:54:07.707935482Z" level=info msg="CreateContainer within sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\""
Dec 13 01:54:07.708628 env[1422]: time="2024-12-13T01:54:07.708597990Z" level=info msg="StartContainer for \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\""
Dec 13 01:54:07.723861 systemd[1]: Started cri-containerd-0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8.scope.
Dec 13 01:54:07.768033 env[1422]: time="2024-12-13T01:54:07.767986920Z" level=info msg="StartContainer for \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\" returns successfully"
Dec 13 01:54:07.870089 kubelet[1940]: I1213 01:54:07.870054 1940 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:54:08.361374 kernel: Initializing XFRM netlink socket
Dec 13 01:54:08.470520 kubelet[1940]: E1213 01:54:08.470452 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:08.680762 kubelet[1940]: I1213 01:54:08.680462 1940 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2smzx" podStartSLOduration=14.288733957 podStartE2EDuration="22.680424017s" podCreationTimestamp="2024-12-13 01:53:46 +0000 UTC" firstStartedPulling="2024-12-13 01:53:52.109741826 +0000 UTC m=+6.323829822" lastFinishedPulling="2024-12-13 01:54:00.501431786 +0000 UTC m=+14.715519882" observedRunningTime="2024-12-13 01:54:08.680304715 +0000 UTC m=+22.894392811" watchObservedRunningTime="2024-12-13 01:54:08.680424017 +0000 UTC m=+22.894512013"
Dec 13 01:54:09.471508 kubelet[1940]: E1213 01:54:09.471442 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:10.031480 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 01:54:10.033049 systemd-networkd[1575]: cilium_host: Link UP
Dec 13 01:54:10.033265 systemd-networkd[1575]: cilium_net: Link UP
Dec 13 01:54:10.033271 systemd-networkd[1575]: cilium_net: Gained carrier
Dec 13 01:54:10.033519 systemd-networkd[1575]: cilium_host: Gained carrier
Dec 13 01:54:10.033829 systemd-networkd[1575]: cilium_host: Gained IPv6LL
Dec 13 01:54:10.167515 systemd-networkd[1575]: cilium_net: Gained IPv6LL
Dec 13 01:54:10.216870 systemd-networkd[1575]: cilium_vxlan: Link UP
Dec 13 01:54:10.216882 systemd-networkd[1575]: cilium_vxlan: Gained carrier
Dec 13 01:54:10.472512 kubelet[1940]: E1213 01:54:10.472398 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:10.482436 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:54:11.306459 systemd-networkd[1575]: lxc_health: Link UP
Dec 13 01:54:11.317117 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 01:54:11.318189 systemd-networkd[1575]: lxc_health: Gained carrier
Dec 13 01:54:11.473461 kubelet[1940]: E1213 01:54:11.473400 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:12.105522 systemd-networkd[1575]: cilium_vxlan: Gained IPv6LL
Dec 13 01:54:12.474134 kubelet[1940]: E1213 01:54:12.473706 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:12.615524 systemd-networkd[1575]: lxc_health: Gained IPv6LL
Dec 13 01:54:13.474703 kubelet[1940]: E1213 01:54:13.474643 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:14.475790 kubelet[1940]: E1213 01:54:14.475736 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:14.795294 kubelet[1940]: I1213 01:54:14.795260 1940 topology_manager.go:215] "Topology Admit Handler" podUID="bf2ccd69-1c30-4566-8863-64287622915c" podNamespace="default" podName="nginx-deployment-6d5f899847-gqzrv"
Dec 13 01:54:14.802841 systemd[1]: Created slice kubepods-besteffort-podbf2ccd69_1c30_4566_8863_64287622915c.slice.
Dec 13 01:54:14.887454 kubelet[1940]: I1213 01:54:14.887410 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d9hg\" (UniqueName: \"kubernetes.io/projected/bf2ccd69-1c30-4566-8863-64287622915c-kube-api-access-6d9hg\") pod \"nginx-deployment-6d5f899847-gqzrv\" (UID: \"bf2ccd69-1c30-4566-8863-64287622915c\") " pod="default/nginx-deployment-6d5f899847-gqzrv"
Dec 13 01:54:15.108889 env[1422]: time="2024-12-13T01:54:15.108749195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-gqzrv,Uid:bf2ccd69-1c30-4566-8863-64287622915c,Namespace:default,Attempt:0,}"
Dec 13 01:54:15.195750 systemd-networkd[1575]: lxc95f691420c02: Link UP
Dec 13 01:54:15.205379 kernel: eth0: renamed from tmpd6238
Dec 13 01:54:15.225986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 01:54:15.226104 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc95f691420c02: link becomes ready
Dec 13 01:54:15.226416 systemd-networkd[1575]: lxc95f691420c02: Gained carrier
Dec 13 01:54:15.476818 kubelet[1940]: E1213 01:54:15.476656 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:15.617739 env[1422]: time="2024-12-13T01:54:15.617666300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:54:15.617739 env[1422]: time="2024-12-13T01:54:15.617703600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:54:15.617739 env[1422]: time="2024-12-13T01:54:15.617717300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:15.618165 env[1422]: time="2024-12-13T01:54:15.618099304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d623828304fec003e4a6dd8850d001e0531a6dc2f97994ec8e4b774e394955ae pid=2980 runtime=io.containerd.runc.v2
Dec 13 01:54:15.634792 systemd[1]: Started cri-containerd-d623828304fec003e4a6dd8850d001e0531a6dc2f97994ec8e4b774e394955ae.scope.
Dec 13 01:54:15.689096 env[1422]: time="2024-12-13T01:54:15.689054316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-gqzrv,Uid:bf2ccd69-1c30-4566-8863-64287622915c,Namespace:default,Attempt:0,} returns sandbox id \"d623828304fec003e4a6dd8850d001e0531a6dc2f97994ec8e4b774e394955ae\""
Dec 13 01:54:15.690959 env[1422]: time="2024-12-13T01:54:15.690914834Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 01:54:16.476850 kubelet[1940]: E1213 01:54:16.476796 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:17.223674 systemd-networkd[1575]: lxc95f691420c02: Gained IPv6LL
Dec 13 01:54:17.477825 kubelet[1940]: E1213 01:54:17.477676 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:18.478292 kubelet[1940]: E1213 01:54:18.478214 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:18.664681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180446906.mount: Deactivated successfully.
Dec 13 01:54:19.478533 kubelet[1940]: E1213 01:54:19.478484 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:20.135523 env[1422]: time="2024-12-13T01:54:20.135471596Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:20.140930 env[1422]: time="2024-12-13T01:54:20.140888744Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:20.145076 env[1422]: time="2024-12-13T01:54:20.145040781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:20.148277 env[1422]: time="2024-12-13T01:54:20.148241309Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:20.148943 env[1422]: time="2024-12-13T01:54:20.148911315Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 01:54:20.151077 env[1422]: time="2024-12-13T01:54:20.151047634Z" level=info msg="CreateContainer within sandbox \"d623828304fec003e4a6dd8850d001e0531a6dc2f97994ec8e4b774e394955ae\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 01:54:20.181108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087541589.mount: Deactivated successfully.
Dec 13 01:54:20.190051 env[1422]: time="2024-12-13T01:54:20.190006280Z" level=info msg="CreateContainer within sandbox \"d623828304fec003e4a6dd8850d001e0531a6dc2f97994ec8e4b774e394955ae\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fec1629b4fd65fdb7e5734880134a00770e93787d1dd4f7f14bd874e2eaf42fe\""
Dec 13 01:54:20.190775 env[1422]: time="2024-12-13T01:54:20.190737987Z" level=info msg="StartContainer for \"fec1629b4fd65fdb7e5734880134a00770e93787d1dd4f7f14bd874e2eaf42fe\""
Dec 13 01:54:20.212030 systemd[1]: Started cri-containerd-fec1629b4fd65fdb7e5734880134a00770e93787d1dd4f7f14bd874e2eaf42fe.scope.
Dec 13 01:54:20.243224 env[1422]: time="2024-12-13T01:54:20.243180453Z" level=info msg="StartContainer for \"fec1629b4fd65fdb7e5734880134a00770e93787d1dd4f7f14bd874e2eaf42fe\" returns successfully"
Dec 13 01:54:20.478892 kubelet[1940]: E1213 01:54:20.478751 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:20.697862 kubelet[1940]: I1213 01:54:20.697827 1940 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-gqzrv" podStartSLOduration=2.238848902 podStartE2EDuration="6.697796592s" podCreationTimestamp="2024-12-13 01:54:14 +0000 UTC" firstStartedPulling="2024-12-13 01:54:15.690286528 +0000 UTC m=+29.904374624" lastFinishedPulling="2024-12-13 01:54:20.149234318 +0000 UTC m=+34.363322314" observedRunningTime="2024-12-13 01:54:20.69764579 +0000 UTC m=+34.911733886" watchObservedRunningTime="2024-12-13 01:54:20.697796592 +0000 UTC m=+34.911884688"
Dec 13 01:54:21.479036 kubelet[1940]: E1213 01:54:21.478971 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:22.479497 kubelet[1940]: E1213 01:54:22.479433 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:23.479778 kubelet[1940]: E1213 01:54:23.479710 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:24.480366 kubelet[1940]: E1213 01:54:24.480284 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:25.481406 kubelet[1940]: E1213 01:54:25.481354 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:26.243025 kubelet[1940]: I1213 01:54:26.242982 1940 topology_manager.go:215] "Topology Admit Handler" podUID="f12dde36-4754-463d-9de1-99656f9e0e1d" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 01:54:26.249700 systemd[1]: Created slice kubepods-besteffort-podf12dde36_4754_463d_9de1_99656f9e0e1d.slice.
Dec 13 01:54:26.358958 kubelet[1940]: I1213 01:54:26.358910 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f12dde36-4754-463d-9de1-99656f9e0e1d-data\") pod \"nfs-server-provisioner-0\" (UID: \"f12dde36-4754-463d-9de1-99656f9e0e1d\") " pod="default/nfs-server-provisioner-0"
Dec 13 01:54:26.359222 kubelet[1940]: I1213 01:54:26.359200 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vk9j\" (UniqueName: \"kubernetes.io/projected/f12dde36-4754-463d-9de1-99656f9e0e1d-kube-api-access-7vk9j\") pod \"nfs-server-provisioner-0\" (UID: \"f12dde36-4754-463d-9de1-99656f9e0e1d\") " pod="default/nfs-server-provisioner-0"
Dec 13 01:54:26.455855 kubelet[1940]: E1213 01:54:26.455806 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:26.482132 kubelet[1940]: E1213 01:54:26.482091 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:26.552893 env[1422]: time="2024-12-13T01:54:26.552835399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f12dde36-4754-463d-9de1-99656f9e0e1d,Namespace:default,Attempt:0,}"
Dec 13 01:54:26.622876 systemd-networkd[1575]: lxcb307a765ec07: Link UP
Dec 13 01:54:26.633725 kernel: eth0: renamed from tmp11f1b
Dec 13 01:54:26.644361 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 01:54:26.644451 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb307a765ec07: link becomes ready
Dec 13 01:54:26.645958 systemd-networkd[1575]: lxcb307a765ec07: Gained carrier
Dec 13 01:54:26.780607 env[1422]: time="2024-12-13T01:54:26.780519759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:54:26.780607 env[1422]: time="2024-12-13T01:54:26.780570360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:54:26.780607 env[1422]: time="2024-12-13T01:54:26.780584660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:26.781142 env[1422]: time="2024-12-13T01:54:26.781093964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11f1bcbea57f53313954e5f6c85970d25590d5399b5e6848bcc4a9c7216b06d1 pid=3105 runtime=io.containerd.runc.v2
Dec 13 01:54:26.799477 systemd[1]: Started cri-containerd-11f1bcbea57f53313954e5f6c85970d25590d5399b5e6848bcc4a9c7216b06d1.scope.
Dec 13 01:54:26.838266 env[1422]: time="2024-12-13T01:54:26.837635701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f12dde36-4754-463d-9de1-99656f9e0e1d,Namespace:default,Attempt:0,} returns sandbox id \"11f1bcbea57f53313954e5f6c85970d25590d5399b5e6848bcc4a9c7216b06d1\""
Dec 13 01:54:26.839443 env[1422]: time="2024-12-13T01:54:26.839408414Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 01:54:27.483263 kubelet[1940]: E1213 01:54:27.483190 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:28.359544 systemd-networkd[1575]: lxcb307a765ec07: Gained IPv6LL
Dec 13 01:54:28.483406 kubelet[1940]: E1213 01:54:28.483333 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:29.483971 kubelet[1940]: E1213 01:54:29.483897 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:29.550726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495257170.mount: Deactivated successfully.
Dec 13 01:54:30.484782 kubelet[1940]: E1213 01:54:30.484712 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:31.485682 kubelet[1940]: E1213 01:54:31.485633 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:31.550081 env[1422]: time="2024-12-13T01:54:31.550032717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:31.557600 env[1422]: time="2024-12-13T01:54:31.557555369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:31.563096 env[1422]: time="2024-12-13T01:54:31.563052507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:31.568715 env[1422]: time="2024-12-13T01:54:31.568674846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:31.569387 env[1422]: time="2024-12-13T01:54:31.569333250Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 01:54:31.571673 env[1422]: time="2024-12-13T01:54:31.571641066Z" level=info msg="CreateContainer within sandbox \"11f1bcbea57f53313954e5f6c85970d25590d5399b5e6848bcc4a9c7216b06d1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 01:54:31.611197 env[1422]: time="2024-12-13T01:54:31.611143640Z" level=info msg="CreateContainer within sandbox \"11f1bcbea57f53313954e5f6c85970d25590d5399b5e6848bcc4a9c7216b06d1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c8e4d5035831bdf22a8318efa95fd1781169da44d29b3dbf7df60178e0b693fb\""
Dec 13 01:54:31.612004 env[1422]: time="2024-12-13T01:54:31.611970145Z" level=info msg="StartContainer for \"c8e4d5035831bdf22a8318efa95fd1781169da44d29b3dbf7df60178e0b693fb\""
Dec 13 01:54:31.639487 systemd[1]: run-containerd-runc-k8s.io-c8e4d5035831bdf22a8318efa95fd1781169da44d29b3dbf7df60178e0b693fb-runc.GRbF2K.mount: Deactivated successfully.
Dec 13 01:54:31.643391 systemd[1]: Started cri-containerd-c8e4d5035831bdf22a8318efa95fd1781169da44d29b3dbf7df60178e0b693fb.scope.
Dec 13 01:54:31.676670 env[1422]: time="2024-12-13T01:54:31.676629093Z" level=info msg="StartContainer for \"c8e4d5035831bdf22a8318efa95fd1781169da44d29b3dbf7df60178e0b693fb\" returns successfully"
Dec 13 01:54:31.729036 kubelet[1940]: I1213 01:54:31.728988 1940 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=0.998197013 podStartE2EDuration="5.728926355s" podCreationTimestamp="2024-12-13 01:54:26 +0000 UTC" firstStartedPulling="2024-12-13 01:54:26.838971111 +0000 UTC m=+41.053059107" lastFinishedPulling="2024-12-13 01:54:31.569700453 +0000 UTC m=+45.783788449" observedRunningTime="2024-12-13 01:54:31.728641953 +0000 UTC m=+45.942730049" watchObservedRunningTime="2024-12-13 01:54:31.728926355 +0000 UTC m=+45.943014351"
Dec 13 01:54:32.485845 kubelet[1940]: E1213 01:54:32.485786 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:33.486225 kubelet[1940]: E1213 01:54:33.486167 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:34.487126 kubelet[1940]: E1213 01:54:34.487065 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:35.487960 kubelet[1940]: E1213 01:54:35.487890 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:36.489138 kubelet[1940]: E1213 01:54:36.489078 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:37.489951 kubelet[1940]: E1213 01:54:37.489897 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:38.490364 kubelet[1940]: E1213 01:54:38.490301 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:39.490904 kubelet[1940]: E1213 01:54:39.490837 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:40.491604 kubelet[1940]: E1213 01:54:40.491545 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:41.184954 kubelet[1940]: I1213 01:54:41.184914 1940 topology_manager.go:215] "Topology Admit Handler" podUID="9bb102b9-d25a-4c10-9017-27dc06fd98a3" podNamespace="default" podName="test-pod-1"
Dec 13 01:54:41.189917 systemd[1]: Created slice kubepods-besteffort-pod9bb102b9_d25a_4c10_9017_27dc06fd98a3.slice.
Dec 13 01:54:41.251246 kubelet[1940]: I1213 01:54:41.251203 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzncb\" (UniqueName: \"kubernetes.io/projected/9bb102b9-d25a-4c10-9017-27dc06fd98a3-kube-api-access-zzncb\") pod \"test-pod-1\" (UID: \"9bb102b9-d25a-4c10-9017-27dc06fd98a3\") " pod="default/test-pod-1"
Dec 13 01:54:41.251480 kubelet[1940]: I1213 01:54:41.251279 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb83e873-94bb-4fda-8b37-f80c4b339c88\" (UniqueName: \"kubernetes.io/nfs/9bb102b9-d25a-4c10-9017-27dc06fd98a3-pvc-eb83e873-94bb-4fda-8b37-f80c4b339c88\") pod \"test-pod-1\" (UID: \"9bb102b9-d25a-4c10-9017-27dc06fd98a3\") " pod="default/test-pod-1"
Dec 13 01:54:41.492262 kubelet[1940]: E1213 01:54:41.492135 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:41.536371 kernel: FS-Cache: Loaded
Dec 13 01:54:41.658587 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 01:54:41.658727 kernel: RPC: Registered udp transport module.
Dec 13 01:54:41.658753 kernel: RPC: Registered tcp transport module.
Dec 13 01:54:41.664118 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 01:54:41.854367 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 01:54:42.081419 kernel: NFS: Registering the id_resolver key type
Dec 13 01:54:42.081578 kernel: Key type id_resolver registered
Dec 13 01:54:42.081608 kernel: Key type id_legacy registered
Dec 13 01:54:42.398844 nfsidmap[3222]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-36ffbd9cb7'
Dec 13 01:54:42.420757 nfsidmap[3223]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-36ffbd9cb7'
Dec 13 01:54:42.493397 kubelet[1940]: E1213 01:54:42.493322 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:42.694237 env[1422]: time="2024-12-13T01:54:42.693887943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9bb102b9-d25a-4c10-9017-27dc06fd98a3,Namespace:default,Attempt:0,}"
Dec 13 01:54:42.756165 systemd-networkd[1575]: lxca2190c1773b8: Link UP
Dec 13 01:54:42.764396 kernel: eth0: renamed from tmp87110
Dec 13 01:54:42.775231 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 01:54:42.777448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca2190c1773b8: link becomes ready
Dec 13 01:54:42.777843 systemd-networkd[1575]: lxca2190c1773b8: Gained carrier
Dec 13 01:54:42.957017 env[1422]: time="2024-12-13T01:54:42.956686699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:54:42.957017 env[1422]: time="2024-12-13T01:54:42.956747999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:54:42.957017 env[1422]: time="2024-12-13T01:54:42.956762499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:42.957714 env[1422]: time="2024-12-13T01:54:42.957628904Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87110aef2066eeb079363bc0491d92f86b15adf173aa7e198ae2fcfd73a397c1 pid=3250 runtime=io.containerd.runc.v2
Dec 13 01:54:42.973038 systemd[1]: Started cri-containerd-87110aef2066eeb079363bc0491d92f86b15adf173aa7e198ae2fcfd73a397c1.scope.
Dec 13 01:54:43.012779 env[1422]: time="2024-12-13T01:54:43.012734808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9bb102b9-d25a-4c10-9017-27dc06fd98a3,Namespace:default,Attempt:0,} returns sandbox id \"87110aef2066eeb079363bc0491d92f86b15adf173aa7e198ae2fcfd73a397c1\""
Dec 13 01:54:43.014649 env[1422]: time="2024-12-13T01:54:43.014623518Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 01:54:43.315266 env[1422]: time="2024-12-13T01:54:43.315210152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:43.321282 env[1422]: time="2024-12-13T01:54:43.321237784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:43.325559 env[1422]: time="2024-12-13T01:54:43.325527608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:43.329732 env[1422]: time="2024-12-13T01:54:43.329702630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:54:43.330265 env[1422]: time="2024-12-13T01:54:43.330233933Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 01:54:43.332617 env[1422]: time="2024-12-13T01:54:43.332586046Z" level=info msg="CreateContainer within sandbox \"87110aef2066eeb079363bc0491d92f86b15adf173aa7e198ae2fcfd73a397c1\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 01:54:43.369370 env[1422]: time="2024-12-13T01:54:43.369316746Z" level=info msg="CreateContainer within sandbox \"87110aef2066eeb079363bc0491d92f86b15adf173aa7e198ae2fcfd73a397c1\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b283f4e2a0a87233b6c89b335070f3b2ceffd8acde26439ed0d68da6b3033267\""
Dec 13 01:54:43.369940 env[1422]: time="2024-12-13T01:54:43.369869649Z" level=info msg="StartContainer for \"b283f4e2a0a87233b6c89b335070f3b2ceffd8acde26439ed0d68da6b3033267\""
Dec 13 01:54:43.386140 systemd[1]: Started cri-containerd-b283f4e2a0a87233b6c89b335070f3b2ceffd8acde26439ed0d68da6b3033267.scope.
Dec 13 01:54:43.418810 env[1422]: time="2024-12-13T01:54:43.418764814Z" level=info msg="StartContainer for \"b283f4e2a0a87233b6c89b335070f3b2ceffd8acde26439ed0d68da6b3033267\" returns successfully"
Dec 13 01:54:43.494515 kubelet[1940]: E1213 01:54:43.494468 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:43.749108 kubelet[1940]: I1213 01:54:43.748965 1940 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.432406189 podStartE2EDuration="16.748920309s" podCreationTimestamp="2024-12-13 01:54:27 +0000 UTC" firstStartedPulling="2024-12-13 01:54:43.014065315 +0000 UTC m=+57.228153311" lastFinishedPulling="2024-12-13 01:54:43.330579335 +0000 UTC m=+57.544667431" observedRunningTime="2024-12-13 01:54:43.748629107 +0000 UTC m=+57.962717103" watchObservedRunningTime="2024-12-13 01:54:43.748920309 +0000 UTC m=+57.963008305"
Dec 13 01:54:44.495678 kubelet[1940]: E1213 01:54:44.495616 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:44.679585 systemd-networkd[1575]: lxca2190c1773b8: Gained IPv6LL
Dec 13 01:54:45.496820 kubelet[1940]: E1213 01:54:45.496753 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:46.455132 kubelet[1940]: E1213 01:54:46.455069 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:46.497322 kubelet[1940]: E1213 01:54:46.497283 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:47.497492 kubelet[1940]: E1213 01:54:47.497431 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:48.498301 kubelet[1940]: E1213 01:54:48.498258 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:49.499238 kubelet[1940]: E1213 01:54:49.499178 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:49.952569 systemd[1]: run-containerd-runc-k8s.io-0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8-runc.5VHy2b.mount: Deactivated successfully.
Dec 13 01:54:49.966629 env[1422]: time="2024-12-13T01:54:49.966561021Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:54:49.971643 env[1422]: time="2024-12-13T01:54:49.971605746Z" level=info msg="StopContainer for \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\" with timeout 2 (s)"
Dec 13 01:54:49.971905 env[1422]: time="2024-12-13T01:54:49.971868747Z" level=info msg="Stop container \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\" with signal terminated"
Dec 13 01:54:49.979589 systemd-networkd[1575]: lxc_health: Link DOWN
Dec 13 01:54:49.979597 systemd-networkd[1575]: lxc_health: Lost carrier
Dec 13 01:54:50.007962 systemd[1]: cri-containerd-0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8.scope: Deactivated successfully.
Dec 13 01:54:50.008259 systemd[1]: cri-containerd-0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8.scope: Consumed 6.207s CPU time.
Dec 13 01:54:50.026812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8-rootfs.mount: Deactivated successfully.
Dec 13 01:54:50.499786 kubelet[1940]: E1213 01:54:50.499726 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:51.500250 kubelet[1940]: E1213 01:54:51.500193 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:51.554698 kubelet[1940]: E1213 01:54:51.554641 1940 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:54:51.978901 env[1422]: time="2024-12-13T01:54:51.978812895Z" level=info msg="Kill container \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\""
Dec 13 01:54:52.500708 kubelet[1940]: E1213 01:54:52.500643 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:53.501417 kubelet[1940]: E1213 01:54:53.501329 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:53.515272 env[1422]: time="2024-12-13T01:54:53.515215484Z" level=info msg="shim disconnected" id=0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8
Dec 13 01:54:53.515720 env[1422]: time="2024-12-13T01:54:53.515311884Z" level=warning msg="cleaning up after shim disconnected" id=0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8 namespace=k8s.io
Dec 13 01:54:53.515720 env[1422]: time="2024-12-13T01:54:53.515332884Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:53.524867 env[1422]: time="2024-12-13T01:54:53.524832228Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3382 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:53.531376 env[1422]: time="2024-12-13T01:54:53.531335657Z" level=info msg="StopContainer for \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\" returns successfully"
Dec 13 01:54:53.532133 env[1422]: time="2024-12-13T01:54:53.532099761Z" level=info msg="StopPodSandbox for \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\""
Dec 13 01:54:53.532222 env[1422]: time="2024-12-13T01:54:53.532163361Z" level=info msg="Container to stop \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:53.532222 env[1422]: time="2024-12-13T01:54:53.532182261Z" level=info msg="Container to stop \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:53.532222 env[1422]: time="2024-12-13T01:54:53.532197761Z" level=info msg="Container to stop \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:53.532222 env[1422]: time="2024-12-13T01:54:53.532214961Z" level=info msg="Container to stop \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:53.535502 env[1422]: time="2024-12-13T01:54:53.532230362Z" level=info msg="Container to stop \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:54:53.534709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1-shm.mount: Deactivated successfully.
Dec 13 01:54:53.541246 systemd[1]: cri-containerd-70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1.scope: Deactivated successfully.
Dec 13 01:54:53.562831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1-rootfs.mount: Deactivated successfully.
Dec 13 01:54:53.572531 env[1422]: time="2024-12-13T01:54:53.572478845Z" level=info msg="shim disconnected" id=70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1
Dec 13 01:54:53.572681 env[1422]: time="2024-12-13T01:54:53.572536145Z" level=warning msg="cleaning up after shim disconnected" id=70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1 namespace=k8s.io
Dec 13 01:54:53.572681 env[1422]: time="2024-12-13T01:54:53.572549345Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:53.580282 env[1422]: time="2024-12-13T01:54:53.580242881Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3412 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:53.580613 env[1422]: time="2024-12-13T01:54:53.580581482Z" level=info msg="TearDown network for sandbox \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" successfully"
Dec 13 01:54:53.580699 env[1422]: time="2024-12-13T01:54:53.580613482Z" level=info msg="StopPodSandbox for \"70ecc3022a8f8e4896eca4e4038afadd8983f60fa4abcb16bf71060fd8b398c1\" returns successfully"
Dec 13 01:54:53.595933 kubelet[1940]: I1213 01:54:53.595907 1940 topology_manager.go:215] "Topology Admit Handler" podUID="557d3fec-1d10-47ff-a0e9-149aec907052" podNamespace="kube-system" podName="cilium-operator-5cc964979-t5qkx"
Dec 13 01:54:53.596220 kubelet[1940]: E1213 01:54:53.595955 1940 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b0d8a2-f019-4323-a054-72c1d0137110" containerName="clean-cilium-state"
Dec 13 01:54:53.596220 kubelet[1940]: E1213 01:54:53.595969 1940 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b0d8a2-f019-4323-a054-72c1d0137110" containerName="cilium-agent"
Dec 13 01:54:53.596220 kubelet[1940]: E1213 01:54:53.595977 1940 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b0d8a2-f019-4323-a054-72c1d0137110" containerName="mount-bpf-fs"
Dec 13 01:54:53.596220 kubelet[1940]: E1213 01:54:53.595985 1940 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b0d8a2-f019-4323-a054-72c1d0137110" containerName="mount-cgroup"
Dec 13 01:54:53.596220 kubelet[1940]: E1213 01:54:53.595995 1940 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5b0d8a2-f019-4323-a054-72c1d0137110" containerName="apply-sysctl-overwrites"
Dec 13 01:54:53.596220 kubelet[1940]: I1213 01:54:53.596019 1940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b0d8a2-f019-4323-a054-72c1d0137110" containerName="cilium-agent"
Dec 13 01:54:53.600748 systemd[1]: Created slice kubepods-besteffort-pod557d3fec_1d10_47ff_a0e9_149aec907052.slice.
Dec 13 01:54:53.632474 kubelet[1940]: I1213 01:54:53.632439 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-host-proc-sys-kernel\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632660 kubelet[1940]: I1213 01:54:53.632550 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6jvh\" (UniqueName: \"kubernetes.io/projected/c5b0d8a2-f019-4323-a054-72c1d0137110-kube-api-access-d6jvh\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632660 kubelet[1940]: I1213 01:54:53.632578 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-cgroup\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632660 kubelet[1940]: I1213 01:54:53.632618 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5b0d8a2-f019-4323-a054-72c1d0137110-clustermesh-secrets\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632660 kubelet[1940]: I1213 01:54:53.632641 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-xtables-lock\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632844 kubelet[1940]: I1213 01:54:53.632676 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-bpf-maps\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632844 kubelet[1940]: I1213 01:54:53.632701 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-etc-cni-netd\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632844 kubelet[1940]: I1213 01:54:53.632726 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-run\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632844 kubelet[1940]: I1213 01:54:53.632765 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-hostproc\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632844 kubelet[1940]: I1213 01:54:53.632793 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-lib-modules\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.632844 kubelet[1940]: I1213 01:54:53.632831 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cni-path\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.633086 kubelet[1940]: I1213 01:54:53.632857 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-host-proc-sys-net\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.633086 kubelet[1940]: I1213 01:54:53.632886 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5b0d8a2-f019-4323-a054-72c1d0137110-hubble-tls\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.633086 kubelet[1940]: I1213 01:54:53.632928 1940 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-config-path\") pod \"c5b0d8a2-f019-4323-a054-72c1d0137110\" (UID: \"c5b0d8a2-f019-4323-a054-72c1d0137110\") "
Dec 13 01:54:53.633086 kubelet[1940]: I1213 01:54:53.633017 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf99z\" (UniqueName: \"kubernetes.io/projected/557d3fec-1d10-47ff-a0e9-149aec907052-kube-api-access-zf99z\") pod \"cilium-operator-5cc964979-t5qkx\" (UID: \"557d3fec-1d10-47ff-a0e9-149aec907052\") " pod="kube-system/cilium-operator-5cc964979-t5qkx"
Dec 13 01:54:53.633086 kubelet[1940]: I1213 01:54:53.633051 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/557d3fec-1d10-47ff-a0e9-149aec907052-cilium-config-path\") pod \"cilium-operator-5cc964979-t5qkx\" (UID: \"557d3fec-1d10-47ff-a0e9-149aec907052\") " pod="kube-system/cilium-operator-5cc964979-t5qkx"
Dec 13 01:54:53.633289 kubelet[1940]: I1213 01:54:53.632483 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.633530 kubelet[1940]: I1213 01:54:53.633329 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-hostproc" (OuterVolumeSpecName: "hostproc") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.633530 kubelet[1940]: I1213 01:54:53.633380 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.633530 kubelet[1940]: I1213 01:54:53.633401 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cni-path" (OuterVolumeSpecName: "cni-path") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.633530 kubelet[1940]: I1213 01:54:53.633416 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.633530 kubelet[1940]: I1213 01:54:53.633426 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.633888 kubelet[1940]: I1213 01:54:53.633867 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.633990 kubelet[1940]: I1213 01:54:53.633975 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.634079 kubelet[1940]: I1213 01:54:53.634065 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.634164 kubelet[1940]: I1213 01:54:53.634149 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:54:53.636707 kubelet[1940]: I1213 01:54:53.636674 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:54:53.640713 systemd[1]: var-lib-kubelet-pods-c5b0d8a2\x2df019\x2d4323\x2da054\x2d72c1d0137110-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6jvh.mount: Deactivated successfully.
Dec 13 01:54:53.642050 kubelet[1940]: I1213 01:54:53.642023 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b0d8a2-f019-4323-a054-72c1d0137110-kube-api-access-d6jvh" (OuterVolumeSpecName: "kube-api-access-d6jvh") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "kube-api-access-d6jvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:54:53.645695 systemd[1]: var-lib-kubelet-pods-c5b0d8a2\x2df019\x2d4323\x2da054\x2d72c1d0137110-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:54:53.647754 kubelet[1940]: I1213 01:54:53.647722 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5b0d8a2-f019-4323-a054-72c1d0137110-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:54:53.649968 systemd[1]: var-lib-kubelet-pods-c5b0d8a2\x2df019\x2d4323\x2da054\x2d72c1d0137110-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:54:53.651082 kubelet[1940]: I1213 01:54:53.651056 1940 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b0d8a2-f019-4323-a054-72c1d0137110-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c5b0d8a2-f019-4323-a054-72c1d0137110" (UID: "c5b0d8a2-f019-4323-a054-72c1d0137110"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:54:53.733741 kubelet[1940]: I1213 01:54:53.733690 1940 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-run\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734012 kubelet[1940]: I1213 01:54:53.733991 1940 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-hostproc\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734132 kubelet[1940]: I1213 01:54:53.734119 1940 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-lib-modules\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734236 kubelet[1940]: I1213 01:54:53.734224 1940 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cni-path\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734392 kubelet[1940]: I1213 01:54:53.734334 1940 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-host-proc-sys-net\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734528 kubelet[1940]: I1213 01:54:53.734513 1940 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-config-path\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734636 kubelet[1940]: I1213 01:54:53.734625 1940 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-host-proc-sys-kernel\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734738 kubelet[1940]: I1213 01:54:53.734726 1940 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d6jvh\" (UniqueName: \"kubernetes.io/projected/c5b0d8a2-f019-4323-a054-72c1d0137110-kube-api-access-d6jvh\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734839 kubelet[1940]: I1213 01:54:53.734827 1940 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5b0d8a2-f019-4323-a054-72c1d0137110-clustermesh-secrets\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.734946 kubelet[1940]: I1213 01:54:53.734933 1940 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-xtables-lock\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.735045 kubelet[1940]: I1213 01:54:53.735032 1940 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-bpf-maps\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.735143 kubelet[1940]: I1213 01:54:53.735131 1940 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-etc-cni-netd\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.735241 kubelet[1940]: I1213 01:54:53.735229 1940 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5b0d8a2-f019-4323-a054-72c1d0137110-hubble-tls\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.735357 kubelet[1940]: I1213 01:54:53.735328 1940 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5b0d8a2-f019-4323-a054-72c1d0137110-cilium-cgroup\") on node \"10.200.8.16\" DevicePath \"\""
Dec 13 01:54:53.762377 kubelet[1940]: I1213 01:54:53.760611 1940 scope.go:117] "RemoveContainer" containerID="0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8"
Dec 13 01:54:53.763991 env[1422]: time="2024-12-13T01:54:53.763948519Z" level=info msg="RemoveContainer for \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\""
Dec 13 01:54:53.769195 systemd[1]: Removed slice kubepods-burstable-podc5b0d8a2_f019_4323_a054_72c1d0137110.slice.
Dec 13 01:54:53.769330 systemd[1]: kubepods-burstable-podc5b0d8a2_f019_4323_a054_72c1d0137110.slice: Consumed 6.323s CPU time.
Dec 13 01:54:53.774151 env[1422]: time="2024-12-13T01:54:53.774112465Z" level=info msg="RemoveContainer for \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\" returns successfully"
Dec 13 01:54:53.774423 kubelet[1940]: I1213 01:54:53.774402 1940 scope.go:117] "RemoveContainer" containerID="df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3"
Dec 13 01:54:53.775429 env[1422]: time="2024-12-13T01:54:53.775399571Z" level=info msg="RemoveContainer for \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\""
Dec 13 01:54:53.781212 env[1422]: time="2024-12-13T01:54:53.781177197Z" level=info msg="RemoveContainer for \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\" returns successfully"
Dec 13 01:54:53.781400 kubelet[1940]: I1213 01:54:53.781381 1940 scope.go:117] "RemoveContainer" containerID="7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc"
Dec 13 01:54:53.782923 env[1422]: time="2024-12-13T01:54:53.782400303Z" level=info msg="RemoveContainer for \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\""
Dec 13 01:54:53.790192 env[1422]: time="2024-12-13T01:54:53.790157338Z" level=info msg="RemoveContainer for \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\" returns successfully"
Dec 13 01:54:53.790663 kubelet[1940]: I1213 01:54:53.790641 1940 scope.go:117] "RemoveContainer" containerID="5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9"
Dec 13 01:54:53.792295 env[1422]: time="2024-12-13T01:54:53.792266848Z" level=info msg="RemoveContainer for \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\""
Dec 13 01:54:53.797377 env[1422]: time="2024-12-13T01:54:53.797325671Z" level=info msg="RemoveContainer for \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\" returns successfully"
Dec 13 01:54:53.797540 kubelet[1940]: I1213 01:54:53.797517 1940 scope.go:117] "RemoveContainer" containerID="4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9"
Dec 13 01:54:53.798446 env[1422]: time="2024-12-13T01:54:53.798419576Z" level=info msg="RemoveContainer for \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\""
Dec 13 01:54:53.807061 env[1422]: time="2024-12-13T01:54:53.807029415Z" level=info msg="RemoveContainer for \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\" returns successfully"
Dec 13 01:54:53.807203 kubelet[1940]: I1213 01:54:53.807181 1940 scope.go:117] "RemoveContainer" containerID="0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8"
Dec 13 01:54:53.807476 env[1422]: time="2024-12-13T01:54:53.807407217Z" level=error msg="ContainerStatus for \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\": not found"
Dec 13 01:54:53.807637 kubelet[1940]: E1213 01:54:53.807616 1940 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\": not found" containerID="0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8"
Dec 13 01:54:53.807757 kubelet[1940]: I1213 01:54:53.807743 1940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8"} err="failed to get container status \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b26ceecae76f0e1c6195b4409c143422b0c165d853c52bc8ad382783d5065c8\": not found"
Dec 13 01:54:53.807823 kubelet[1940]: I1213 01:54:53.807761 1940 scope.go:117] "RemoveContainer" containerID="df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3"
Dec 13 01:54:53.807990 env[1422]: time="2024-12-13T01:54:53.807940419Z" level=error msg="ContainerStatus for \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\": not found"
Dec 13 01:54:53.808099 kubelet[1940]: E1213 01:54:53.808080 1940 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\": not found" containerID="df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3"
Dec 13 01:54:53.808178 kubelet[1940]: I1213 01:54:53.808117 1940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3"} err="failed to get container status \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"df256ff2d5ea3040551a3ba19da7de48a0aefa66b140b83370b4bdb9b19ae5d3\": not found"
Dec 13 01:54:53.808178 kubelet[1940]: I1213 01:54:53.808132 1940 scope.go:117] "RemoveContainer" containerID="7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc"
Dec 13 01:54:53.808371 env[1422]:
time="2024-12-13T01:54:53.808311021Z" level=error msg="ContainerStatus for \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\": not found" Dec 13 01:54:53.808519 kubelet[1940]: E1213 01:54:53.808500 1940 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\": not found" containerID="7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc" Dec 13 01:54:53.808583 kubelet[1940]: I1213 01:54:53.808532 1940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc"} err="failed to get container status \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c2a342c69d815766075a2f999e9e1b945a0ad5c8ab3ae8508223ac8962af9bc\": not found" Dec 13 01:54:53.808583 kubelet[1940]: I1213 01:54:53.808545 1940 scope.go:117] "RemoveContainer" containerID="5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9" Dec 13 01:54:53.808794 env[1422]: time="2024-12-13T01:54:53.808750223Z" level=error msg="ContainerStatus for \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\": not found" Dec 13 01:54:53.808941 kubelet[1940]: E1213 01:54:53.808924 1940 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\": not found" containerID="5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9" Dec 13 01:54:53.809020 kubelet[1940]: I1213 01:54:53.808956 1940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9"} err="failed to get container status \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e351d1495f4e90a7ff59b8467712ee89dc7d4662b3e2422e7f4b239b50e53d9\": not found" Dec 13 01:54:53.809020 kubelet[1940]: I1213 01:54:53.808970 1940 scope.go:117] "RemoveContainer" containerID="4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9" Dec 13 01:54:53.809187 env[1422]: time="2024-12-13T01:54:53.809142625Z" level=error msg="ContainerStatus for \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\": not found" Dec 13 01:54:53.809300 kubelet[1940]: E1213 01:54:53.809281 1940 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\": not found" containerID="4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9" Dec 13 01:54:53.809391 kubelet[1940]: I1213 01:54:53.809312 1940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9"} err="failed to get container status \"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"4445b97c28c0e356ba84c6d377181cfcf720a2e020ec9808af9ec03ec04ba7a9\": not found" Dec 13 01:54:53.903892 env[1422]: time="2024-12-13T01:54:53.903836457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-t5qkx,Uid:557d3fec-1d10-47ff-a0e9-149aec907052,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:53.933754 env[1422]: time="2024-12-13T01:54:53.933676893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:53.933754 env[1422]: time="2024-12-13T01:54:53.933711093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:53.933987 env[1422]: time="2024-12-13T01:54:53.933736693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:53.934825 env[1422]: time="2024-12-13T01:54:53.934162995Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/570d2bd15ec2d130861c9ee6693a41a4674f25ca60bb49662ca126d5c5405daa pid=3439 runtime=io.containerd.runc.v2 Dec 13 01:54:53.947107 systemd[1]: Started cri-containerd-570d2bd15ec2d130861c9ee6693a41a4674f25ca60bb49662ca126d5c5405daa.scope. 
Dec 13 01:54:53.988673 env[1422]: time="2024-12-13T01:54:53.988618044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-t5qkx,Uid:557d3fec-1d10-47ff-a0e9-149aec907052,Namespace:kube-system,Attempt:0,} returns sandbox id \"570d2bd15ec2d130861c9ee6693a41a4674f25ca60bb49662ca126d5c5405daa\"" Dec 13 01:54:53.990473 env[1422]: time="2024-12-13T01:54:53.990426452Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:54:54.501872 kubelet[1940]: E1213 01:54:54.501806 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:54.520908 kubelet[1940]: I1213 01:54:54.520633 1940 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c5b0d8a2-f019-4323-a054-72c1d0137110" path="/var/lib/kubelet/pods/c5b0d8a2-f019-4323-a054-72c1d0137110/volumes" Dec 13 01:54:54.887288 kubelet[1940]: I1213 01:54:54.887241 1940 topology_manager.go:215] "Topology Admit Handler" podUID="8b4ef50a-a2ca-43ee-89c3-550cf48bd068" podNamespace="kube-system" podName="cilium-nhlkh" Dec 13 01:54:54.893175 systemd[1]: Created slice kubepods-burstable-pod8b4ef50a_a2ca_43ee_89c3_550cf48bd068.slice. 
Dec 13 01:54:54.943641 kubelet[1940]: I1213 01:54:54.943605 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-hostproc\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.943888 kubelet[1940]: I1213 01:54:54.943870 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-cilium-config-path\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944036 kubelet[1940]: I1213 01:54:54.944007 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-cilium-run\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944123 kubelet[1940]: I1213 01:54:54.944047 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-etc-cni-netd\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944123 kubelet[1940]: I1213 01:54:54.944084 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-clustermesh-secrets\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944123 kubelet[1940]: I1213 01:54:54.944120 1940 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-hubble-tls\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944292 kubelet[1940]: I1213 01:54:54.944155 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-cilium-ipsec-secrets\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944292 kubelet[1940]: I1213 01:54:54.944194 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-host-proc-sys-net\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944292 kubelet[1940]: I1213 01:54:54.944232 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-xtables-lock\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944292 kubelet[1940]: I1213 01:54:54.944267 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-bpf-maps\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944522 kubelet[1940]: I1213 01:54:54.944303 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-cni-path\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944522 kubelet[1940]: I1213 01:54:54.944362 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-lib-modules\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944522 kubelet[1940]: I1213 01:54:54.944401 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-cilium-cgroup\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944522 kubelet[1940]: I1213 01:54:54.944444 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-host-proc-sys-kernel\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:54.944522 kubelet[1940]: I1213 01:54:54.944486 1940 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwg2\" (UniqueName: \"kubernetes.io/projected/8b4ef50a-a2ca-43ee-89c3-550cf48bd068-kube-api-access-2hwg2\") pod \"cilium-nhlkh\" (UID: \"8b4ef50a-a2ca-43ee-89c3-550cf48bd068\") " pod="kube-system/cilium-nhlkh" Dec 13 01:54:55.202007 env[1422]: time="2024-12-13T01:54:55.201886378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhlkh,Uid:8b4ef50a-a2ca-43ee-89c3-550cf48bd068,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:55.240537 env[1422]: time="2024-12-13T01:54:55.240467148Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:55.240742 env[1422]: time="2024-12-13T01:54:55.240511348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:55.240742 env[1422]: time="2024-12-13T01:54:55.240525148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:55.240742 env[1422]: time="2024-12-13T01:54:55.240667449Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c pid=3486 runtime=io.containerd.runc.v2 Dec 13 01:54:55.253262 systemd[1]: Started cri-containerd-d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c.scope. Dec 13 01:54:55.277315 env[1422]: time="2024-12-13T01:54:55.277266411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhlkh,Uid:8b4ef50a-a2ca-43ee-89c3-550cf48bd068,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\"" Dec 13 01:54:55.280303 env[1422]: time="2024-12-13T01:54:55.280097123Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:54:55.314528 env[1422]: time="2024-12-13T01:54:55.314475275Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"820f4e795d10e9cc141e5252e0a1ecf7ed880f3d795266f4bf7b70165cf567bb\"" Dec 13 01:54:55.315129 env[1422]: time="2024-12-13T01:54:55.315084478Z" level=info msg="StartContainer for 
\"820f4e795d10e9cc141e5252e0a1ecf7ed880f3d795266f4bf7b70165cf567bb\"" Dec 13 01:54:55.331657 systemd[1]: Started cri-containerd-820f4e795d10e9cc141e5252e0a1ecf7ed880f3d795266f4bf7b70165cf567bb.scope. Dec 13 01:54:55.362243 env[1422]: time="2024-12-13T01:54:55.362183986Z" level=info msg="StartContainer for \"820f4e795d10e9cc141e5252e0a1ecf7ed880f3d795266f4bf7b70165cf567bb\" returns successfully" Dec 13 01:54:55.368358 systemd[1]: cri-containerd-820f4e795d10e9cc141e5252e0a1ecf7ed880f3d795266f4bf7b70165cf567bb.scope: Deactivated successfully. Dec 13 01:54:55.420226 env[1422]: time="2024-12-13T01:54:55.420175142Z" level=info msg="shim disconnected" id=820f4e795d10e9cc141e5252e0a1ecf7ed880f3d795266f4bf7b70165cf567bb Dec 13 01:54:55.420226 env[1422]: time="2024-12-13T01:54:55.420220142Z" level=warning msg="cleaning up after shim disconnected" id=820f4e795d10e9cc141e5252e0a1ecf7ed880f3d795266f4bf7b70165cf567bb namespace=k8s.io Dec 13 01:54:55.420226 env[1422]: time="2024-12-13T01:54:55.420231542Z" level=info msg="cleaning up dead shim" Dec 13 01:54:55.430397 env[1422]: time="2024-12-13T01:54:55.430334287Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3569 runtime=io.containerd.runc.v2\n" Dec 13 01:54:55.502499 kubelet[1940]: E1213 01:54:55.502395 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:55.769461 env[1422]: time="2024-12-13T01:54:55.769418185Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:54:55.796513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369569713.mount: Deactivated successfully. 
Dec 13 01:54:55.831689 env[1422]: time="2024-12-13T01:54:55.831628960Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c2ea3bc0a80f1d80af683a8c831246e7ca67f1d5e7a218630e157e4ed8d7b746\"" Dec 13 01:54:55.832516 env[1422]: time="2024-12-13T01:54:55.832473864Z" level=info msg="StartContainer for \"c2ea3bc0a80f1d80af683a8c831246e7ca67f1d5e7a218630e157e4ed8d7b746\"" Dec 13 01:54:55.851180 systemd[1]: Started cri-containerd-c2ea3bc0a80f1d80af683a8c831246e7ca67f1d5e7a218630e157e4ed8d7b746.scope. Dec 13 01:54:55.884091 env[1422]: time="2024-12-13T01:54:55.884039792Z" level=info msg="StartContainer for \"c2ea3bc0a80f1d80af683a8c831246e7ca67f1d5e7a218630e157e4ed8d7b746\" returns successfully" Dec 13 01:54:55.888251 systemd[1]: cri-containerd-c2ea3bc0a80f1d80af683a8c831246e7ca67f1d5e7a218630e157e4ed8d7b746.scope: Deactivated successfully. Dec 13 01:54:55.931806 env[1422]: time="2024-12-13T01:54:55.931744802Z" level=info msg="shim disconnected" id=c2ea3bc0a80f1d80af683a8c831246e7ca67f1d5e7a218630e157e4ed8d7b746 Dec 13 01:54:55.931806 env[1422]: time="2024-12-13T01:54:55.931801903Z" level=warning msg="cleaning up after shim disconnected" id=c2ea3bc0a80f1d80af683a8c831246e7ca67f1d5e7a218630e157e4ed8d7b746 namespace=k8s.io Dec 13 01:54:55.931806 env[1422]: time="2024-12-13T01:54:55.931814703Z" level=info msg="cleaning up dead shim" Dec 13 01:54:55.939100 env[1422]: time="2024-12-13T01:54:55.939060335Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3633 runtime=io.containerd.runc.v2\n" Dec 13 01:54:56.503358 kubelet[1940]: E1213 01:54:56.503242 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:56.535157 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c2ea3bc0a80f1d80af683a8c831246e7ca67f1d5e7a218630e157e4ed8d7b746-rootfs.mount: Deactivated successfully. Dec 13 01:54:56.555452 kubelet[1940]: E1213 01:54:56.555411 1940 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:54:56.772757 env[1422]: time="2024-12-13T01:54:56.772715966Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:54:56.811116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965596851.mount: Deactivated successfully. Dec 13 01:54:56.819704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308277635.mount: Deactivated successfully. Dec 13 01:54:56.837161 env[1422]: time="2024-12-13T01:54:56.837102946Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c8a7c74c4ed5260e4581f569303ba5018aab05f5947752b31d28e71bdb4a57c\"" Dec 13 01:54:56.837821 env[1422]: time="2024-12-13T01:54:56.837775449Z" level=info msg="StartContainer for \"6c8a7c74c4ed5260e4581f569303ba5018aab05f5947752b31d28e71bdb4a57c\"" Dec 13 01:54:56.877132 systemd[1]: Started cri-containerd-6c8a7c74c4ed5260e4581f569303ba5018aab05f5947752b31d28e71bdb4a57c.scope. Dec 13 01:54:56.937922 systemd[1]: cri-containerd-6c8a7c74c4ed5260e4581f569303ba5018aab05f5947752b31d28e71bdb4a57c.scope: Deactivated successfully. 
Dec 13 01:54:56.940615 env[1422]: time="2024-12-13T01:54:56.940571796Z" level=info msg="StartContainer for \"6c8a7c74c4ed5260e4581f569303ba5018aab05f5947752b31d28e71bdb4a57c\" returns successfully" Dec 13 01:54:57.076776 env[1422]: time="2024-12-13T01:54:57.076652584Z" level=info msg="shim disconnected" id=6c8a7c74c4ed5260e4581f569303ba5018aab05f5947752b31d28e71bdb4a57c Dec 13 01:54:57.077112 env[1422]: time="2024-12-13T01:54:57.077088986Z" level=warning msg="cleaning up after shim disconnected" id=6c8a7c74c4ed5260e4581f569303ba5018aab05f5947752b31d28e71bdb4a57c namespace=k8s.io Dec 13 01:54:57.077224 env[1422]: time="2024-12-13T01:54:57.077209086Z" level=info msg="cleaning up dead shim" Dec 13 01:54:57.098314 env[1422]: time="2024-12-13T01:54:57.098269177Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3694 runtime=io.containerd.runc.v2\n" Dec 13 01:54:57.499739 env[1422]: time="2024-12-13T01:54:57.499272494Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:57.504169 kubelet[1940]: E1213 01:54:57.504135 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:54:57.506591 env[1422]: time="2024-12-13T01:54:57.506548326Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:54:57.510868 env[1422]: time="2024-12-13T01:54:57.510829944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 01:54:57.511314 env[1422]: time="2024-12-13T01:54:57.511280946Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:54:57.513246 env[1422]: time="2024-12-13T01:54:57.513215354Z" level=info msg="CreateContainer within sandbox \"570d2bd15ec2d130861c9ee6693a41a4674f25ca60bb49662ca126d5c5405daa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:54:57.543100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109485523.mount: Deactivated successfully. Dec 13 01:54:57.560676 env[1422]: time="2024-12-13T01:54:57.560611957Z" level=info msg="CreateContainer within sandbox \"570d2bd15ec2d130861c9ee6693a41a4674f25ca60bb49662ca126d5c5405daa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1c4983f984daaf5adbc0f58d5a8904a1f278e1b38a8dace55be72b048f11e3fe\"" Dec 13 01:54:57.561282 env[1422]: time="2024-12-13T01:54:57.561253660Z" level=info msg="StartContainer for \"1c4983f984daaf5adbc0f58d5a8904a1f278e1b38a8dace55be72b048f11e3fe\"" Dec 13 01:54:57.583104 systemd[1]: Started cri-containerd-1c4983f984daaf5adbc0f58d5a8904a1f278e1b38a8dace55be72b048f11e3fe.scope. 
Dec 13 01:54:57.615822 env[1422]: time="2024-12-13T01:54:57.615768793Z" level=info msg="StartContainer for \"1c4983f984daaf5adbc0f58d5a8904a1f278e1b38a8dace55be72b048f11e3fe\" returns successfully" Dec 13 01:54:57.778055 env[1422]: time="2024-12-13T01:54:57.778012988Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:54:57.785890 kubelet[1940]: I1213 01:54:57.785860 1940 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-t5qkx" podStartSLOduration=1.264115824 podStartE2EDuration="4.785812522s" podCreationTimestamp="2024-12-13 01:54:53 +0000 UTC" firstStartedPulling="2024-12-13 01:54:53.989892149 +0000 UTC m=+68.203980245" lastFinishedPulling="2024-12-13 01:54:57.511588947 +0000 UTC m=+71.725676943" observedRunningTime="2024-12-13 01:54:57.785560321 +0000 UTC m=+71.999648317" watchObservedRunningTime="2024-12-13 01:54:57.785812522 +0000 UTC m=+71.999900518" Dec 13 01:54:57.808968 env[1422]: time="2024-12-13T01:54:57.808921321Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5999b60601fe0be2853ea47148e94e5f1bf497c7ee273b62cec1b982ca800a8\"" Dec 13 01:54:57.809571 env[1422]: time="2024-12-13T01:54:57.809533523Z" level=info msg="StartContainer for \"d5999b60601fe0be2853ea47148e94e5f1bf497c7ee273b62cec1b982ca800a8\"" Dec 13 01:54:57.825096 systemd[1]: Started cri-containerd-d5999b60601fe0be2853ea47148e94e5f1bf497c7ee273b62cec1b982ca800a8.scope. Dec 13 01:54:57.852752 systemd[1]: cri-containerd-d5999b60601fe0be2853ea47148e94e5f1bf497c7ee273b62cec1b982ca800a8.scope: Deactivated successfully. 
Dec 13 01:54:57.857513 env[1422]: time="2024-12-13T01:54:57.857417228Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b4ef50a_a2ca_43ee_89c3_550cf48bd068.slice/cri-containerd-d5999b60601fe0be2853ea47148e94e5f1bf497c7ee273b62cec1b982ca800a8.scope/cgroup.events\": no such file or directory"
Dec 13 01:54:57.858532 env[1422]: time="2024-12-13T01:54:57.858488233Z" level=info msg="StartContainer for \"d5999b60601fe0be2853ea47148e94e5f1bf497c7ee273b62cec1b982ca800a8\" returns successfully"
Dec 13 01:54:58.225210 env[1422]: time="2024-12-13T01:54:58.224513788Z" level=info msg="shim disconnected" id=d5999b60601fe0be2853ea47148e94e5f1bf497c7ee273b62cec1b982ca800a8
Dec 13 01:54:58.225210 env[1422]: time="2024-12-13T01:54:58.224567688Z" level=warning msg="cleaning up after shim disconnected" id=d5999b60601fe0be2853ea47148e94e5f1bf497c7ee273b62cec1b982ca800a8 namespace=k8s.io
Dec 13 01:54:58.225210 env[1422]: time="2024-12-13T01:54:58.224578988Z" level=info msg="cleaning up dead shim"
Dec 13 01:54:58.232309 env[1422]: time="2024-12-13T01:54:58.232253920Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:54:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3785 runtime=io.containerd.runc.v2\n"
Dec 13 01:54:58.469880 kubelet[1940]: I1213 01:54:58.469680 1940 setters.go:568] "Node became not ready" node="10.200.8.16" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:54:58Z","lastTransitionTime":"2024-12-13T01:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:54:58.504431 kubelet[1940]: E1213 01:54:58.504279 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:54:58.784685 env[1422]: time="2024-12-13T01:54:58.784620751Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:54:58.814480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234862310.mount: Deactivated successfully.
Dec 13 01:54:58.830737 env[1422]: time="2024-12-13T01:54:58.830688445Z" level=info msg="CreateContainer within sandbox \"d1a26f73fd332c1b1acee3deb5e5b1e9983164384bdf5dd7bf5d01766915ab4c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9\""
Dec 13 01:54:58.831334 env[1422]: time="2024-12-13T01:54:58.831236848Z" level=info msg="StartContainer for \"9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9\""
Dec 13 01:54:58.847804 systemd[1]: Started cri-containerd-9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9.scope.
Dec 13 01:54:58.885700 env[1422]: time="2024-12-13T01:54:58.885641377Z" level=info msg="StartContainer for \"9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9\" returns successfully"
Dec 13 01:54:59.260371 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:54:59.505053 kubelet[1940]: E1213 01:54:59.504992 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:00.165158 systemd[1]: run-containerd-runc-k8s.io-9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9-runc.zpreMF.mount: Deactivated successfully.
Dec 13 01:55:00.505419 kubelet[1940]: E1213 01:55:00.505252 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:01.506558 kubelet[1940]: E1213 01:55:01.506473 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:01.926675 systemd-networkd[1575]: lxc_health: Link UP
Dec 13 01:55:01.946456 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 01:55:01.946805 systemd-networkd[1575]: lxc_health: Gained carrier
Dec 13 01:55:02.297164 systemd[1]: run-containerd-runc-k8s.io-9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9-runc.n4RtkA.mount: Deactivated successfully.
Dec 13 01:55:02.506960 kubelet[1940]: E1213 01:55:02.506905 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:03.220644 kubelet[1940]: I1213 01:55:03.220599 1940 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-nhlkh" podStartSLOduration=9.220550001 podStartE2EDuration="9.220550001s" podCreationTimestamp="2024-12-13 01:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:59.804017802 +0000 UTC m=+74.018105798" watchObservedRunningTime="2024-12-13 01:55:03.220550001 +0000 UTC m=+77.434638097"
Dec 13 01:55:03.507773 kubelet[1940]: E1213 01:55:03.507636 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:03.559611 systemd-networkd[1575]: lxc_health: Gained IPv6LL
Dec 13 01:55:04.508234 kubelet[1940]: E1213 01:55:04.508130 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:04.537648 systemd[1]: run-containerd-runc-k8s.io-9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9-runc.JvhzEI.mount: Deactivated successfully.
Dec 13 01:55:05.509370 kubelet[1940]: E1213 01:55:05.509311 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:06.455544 kubelet[1940]: E1213 01:55:06.455481 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:06.509868 kubelet[1940]: E1213 01:55:06.509805 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:06.752309 systemd[1]: run-containerd-runc-k8s.io-9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9-runc.qv2fdS.mount: Deactivated successfully.
Dec 13 01:55:07.510365 kubelet[1940]: E1213 01:55:07.510309 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:08.511176 kubelet[1940]: E1213 01:55:08.511115 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:08.906464 systemd[1]: run-containerd-runc-k8s.io-9240b4af70d151d832931c586dcd58e8c23bbee9880d5826f4ba749c50f328c9-runc.ieXzgA.mount: Deactivated successfully.
Dec 13 01:55:09.512254 kubelet[1940]: E1213 01:55:09.512193 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:10.513224 kubelet[1940]: E1213 01:55:10.513164 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:11.514101 kubelet[1940]: E1213 01:55:11.514035 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:12.514713 kubelet[1940]: E1213 01:55:12.514646 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"