Dec 13 01:50:45.008224 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 01:50:45.008259 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:50:45.008273 kernel: BIOS-provided physical RAM map:
Dec 13 01:50:45.008282 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:50:45.008290 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 01:50:45.008299 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 01:50:45.008315 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 01:50:45.008325 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 01:50:45.008334 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 01:50:45.008343 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 01:50:45.008353 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 01:50:45.008405 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 01:50:45.008413 kernel: NX (Execute Disable) protection: active
Dec 13 01:50:45.008421 kernel: efi: EFI v2.70 by Microsoft
Dec 13 01:50:45.008433 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Dec 13 01:50:45.008443 kernel: random: crng init done
Dec 13 01:50:45.008449 kernel: SMBIOS 3.1.0 present.
Dec 13 01:50:45.008456 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 01:50:45.008464 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 01:50:45.008472 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 01:50:45.008480 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Dec 13 01:50:45.008488 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 01:50:45.008496 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 01:50:45.008505 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 01:50:45.008511 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 01:50:45.008520 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 01:50:45.008527 kernel: tsc: Detected 2593.907 MHz processor
Dec 13 01:50:45.008534 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:50:45.008543 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:50:45.008550 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 01:50:45.008558 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:50:45.008566 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 01:50:45.008574 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 01:50:45.008586 kernel: Using GB pages for direct mapping
Dec 13 01:50:45.008594 kernel: Secure boot disabled
Dec 13 01:50:45.008603 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:50:45.008609 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 01:50:45.008615 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008625 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008631 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 01:50:45.008646 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 01:50:45.008652 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008662 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008669 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008679 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008686 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008695 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008705 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:50:45.008713 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 01:50:45.008722 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 01:50:45.008729 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 01:50:45.008737 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 01:50:45.008746 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 01:50:45.008753 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 01:50:45.008764 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 01:50:45.008771 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 01:50:45.008779 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 01:50:45.008787 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 01:50:45.008796 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:50:45.008804 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:50:45.008811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 01:50:45.008821 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 01:50:45.008828 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 01:50:45.008839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 01:50:45.008846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 01:50:45.008853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 01:50:45.008863 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 01:50:45.008869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 01:50:45.008879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 01:50:45.008886 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 01:50:45.008893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 01:50:45.008902 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 01:50:45.008913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 01:50:45.008921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 01:50:45.008928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 01:50:45.008936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 01:50:45.008944 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 01:50:45.008953 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 01:50:45.008961 kernel: Zone ranges:
Dec 13 01:50:45.008967 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:50:45.008984 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:50:45.008997 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:50:45.009004 kernel: Movable zone start for each node
Dec 13 01:50:45.009011 kernel: Early memory node ranges
Dec 13 01:50:45.009020 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:50:45.009027 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 01:50:45.009037 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 01:50:45.009044 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:50:45.009051 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 01:50:45.009060 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:50:45.009070 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:50:45.009078 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 01:50:45.009085 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 01:50:45.009093 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 01:50:45.009102 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:50:45.009109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:50:45.009118 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:50:45.009125 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 01:50:45.009133 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:50:45.009143 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 01:50:45.009152 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 01:50:45.009160 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:50:45.009167 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:50:45.009176 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 01:50:45.009183 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 01:50:45.009191 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:50:45.009199 kernel: Hyper-V: PV spinlocks enabled
Dec 13 01:50:45.009206 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:50:45.009217 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 01:50:45.009224 kernel: Policy zone: Normal
Dec 13 01:50:45.009235 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:50:45.009242 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:50:45.009249 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:50:45.009259 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:50:45.009266 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:50:45.009276 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 308056K reserved, 0K cma-reserved)
Dec 13 01:50:45.009285 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:50:45.009293 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 01:50:45.009308 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 01:50:45.009320 kernel: rcu: Hierarchical RCU implementation.
Dec 13 01:50:45.009328 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:50:45.009336 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:50:45.009346 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:50:45.009353 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:50:45.009363 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:50:45.009370 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:50:45.009378 kernel: Using NULL legacy PIC
Dec 13 01:50:45.009389 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 01:50:45.009398 kernel: Console: colour dummy device 80x25
Dec 13 01:50:45.009407 kernel: printk: console [tty1] enabled
Dec 13 01:50:45.009414 kernel: printk: console [ttyS0] enabled
Dec 13 01:50:45.009424 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 01:50:45.009433 kernel: ACPI: Core revision 20210730
Dec 13 01:50:45.009443 kernel: Failed to register legacy timer interrupt
Dec 13 01:50:45.009450 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:50:45.009457 kernel: Hyper-V: Using IPI hypercalls
Dec 13 01:50:45.009466 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Dec 13 01:50:45.009475 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:50:45.009484 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:50:45.009493 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:50:45.009500 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:50:45.009509 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:50:45.009519 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:50:45.009529 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:50:45.009536 kernel: RETBleed: Vulnerable
Dec 13 01:50:45.009544 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:50:45.009553 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:50:45.009561 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:50:45.009570 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:50:45.009577 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:50:45.009585 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:50:45.009595 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:50:45.009606 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:50:45.009614 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:50:45.009621 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:50:45.009631 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:50:45.009638 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 01:50:45.009648 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 01:50:45.009655 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 01:50:45.009662 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 01:50:45.009672 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:50:45.009679 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:50:45.009690 kernel: LSM: Security Framework initializing
Dec 13 01:50:45.009697 kernel: SELinux: Initializing.
Dec 13 01:50:45.009707 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:50:45.009716 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:50:45.009725 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 01:50:45.009733 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:50:45.009740 kernel: signal: max sigframe size: 3632
Dec 13 01:50:45.009750 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:50:45.009758 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:50:45.009765 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:50:45.009772 kernel: x86: Booting SMP configuration:
Dec 13 01:50:45.009779 kernel: .... node #0, CPUs: #1
Dec 13 01:50:45.009789 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 01:50:45.009796 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:50:45.009804 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:50:45.009811 kernel: smpboot: Max logical packages: 1
Dec 13 01:50:45.009818 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Dec 13 01:50:45.009825 kernel: devtmpfs: initialized
Dec 13 01:50:45.009832 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:50:45.009841 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 01:50:45.009852 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:50:45.009859 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:50:45.009866 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:50:45.009873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:50:45.009881 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:50:45.009888 kernel: audit: type=2000 audit(1734054644.023:1): state=initialized audit_enabled=0 res=1
Dec 13 01:50:45.009898 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:50:45.009905 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:50:45.009915 kernel: cpuidle: using governor menu
Dec 13 01:50:45.009924 kernel: ACPI: bus type PCI registered
Dec 13 01:50:45.009932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:50:45.009942 kernel: dca service started, version 1.12.1
Dec 13 01:50:45.009949 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:50:45.009959 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:50:45.009966 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:50:45.009975 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:50:45.009988 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:50:45.009998 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:50:45.010008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:50:45.010017 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 01:50:45.010026 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 01:50:45.010033 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 01:50:45.010040 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:50:45.010049 kernel: ACPI: Interpreter enabled
Dec 13 01:50:45.010058 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:50:45.010065 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:50:45.010073 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:50:45.010085 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 01:50:45.010094 kernel: iommu: Default domain type: Translated
Dec 13 01:50:45.010102 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:50:45.010109 kernel: vgaarb: loaded
Dec 13 01:50:45.010120 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:50:45.010127 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:50:45.010137 kernel: PTP clock support registered
Dec 13 01:50:45.010144 kernel: Registered efivars operations
Dec 13 01:50:45.010152 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:50:45.010162 kernel: PCI: System does not support PCI
Dec 13 01:50:45.010173 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 01:50:45.010181 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:50:45.010188 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:50:45.010197 kernel: pnp: PnP ACPI init
Dec 13 01:50:45.010206 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 01:50:45.010215 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:50:45.010224 kernel: NET: Registered PF_INET protocol family
Dec 13 01:50:45.010231 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:50:45.010243 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:50:45.010250 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:50:45.010260 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:50:45.010268 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 01:50:45.010276 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:50:45.010285 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:50:45.010293 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:50:45.010302 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:50:45.010309 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:50:45.010322 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:50:45.010329 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:50:45.010339 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 01:50:45.010347 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:50:45.010354 kernel: Initialise system trusted keyrings
Dec 13 01:50:45.010364 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 01:50:45.010372 kernel: Key type asymmetric registered
Dec 13 01:50:45.010381 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:50:45.010388 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 01:50:45.010400 kernel: io scheduler mq-deadline registered
Dec 13 01:50:45.010408 kernel: io scheduler kyber registered
Dec 13 01:50:45.010418 kernel: io scheduler bfq registered
Dec 13 01:50:45.010425 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:50:45.010433 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:50:45.010443 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:50:45.010451 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:50:45.010460 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 01:50:45.010588 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 01:50:45.010674 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T01:50:44 UTC (1734054644)
Dec 13 01:50:45.010752 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 01:50:45.010765 kernel: fail to initialize ptp_kvm
Dec 13 01:50:45.010773 kernel: intel_pstate: CPU model not supported
Dec 13 01:50:45.010783 kernel: efifb: probing for efifb
Dec 13 01:50:45.010790 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 01:50:45.010799 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 01:50:45.010808 kernel: efifb: scrolling: redraw
Dec 13 01:50:45.010820 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:50:45.010828 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:50:45.010836 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:50:45.010846 kernel: pstore: Registered efi as persistent store backend
Dec 13 01:50:45.010854 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:50:45.010863 kernel: Segment Routing with IPv6
Dec 13 01:50:45.010870 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:50:45.010879 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:50:45.010888 kernel: Key type dns_resolver registered
Dec 13 01:50:45.010900 kernel: IPI shorthand broadcast: enabled
Dec 13 01:50:45.010907 kernel: sched_clock: Marking stable (842566600, 25358900)->(1064994400, -197068900)
Dec 13 01:50:45.010914 kernel: registered taskstats version 1
Dec 13 01:50:45.010925 kernel: Loading compiled-in X.509 certificates
Dec 13 01:50:45.010932 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 01:50:45.010942 kernel: Key type .fscrypt registered
Dec 13 01:50:45.010949 kernel: Key type fscrypt-provisioning registered
Dec 13 01:50:45.010957 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:50:45.010968 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:50:45.010984 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:50:45.010992 kernel: ima: No architecture policies found
Dec 13 01:50:45.011002 kernel: clk: Disabling unused clocks
Dec 13 01:50:45.011009 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 01:50:45.011019 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 01:50:45.011026 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 01:50:45.011034 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 01:50:45.011044 kernel: Run /init as init process
Dec 13 01:50:45.011053 kernel: with arguments:
Dec 13 01:50:45.011063 kernel: /init
Dec 13 01:50:45.011070 kernel: with environment:
Dec 13 01:50:45.011080 kernel: HOME=/
Dec 13 01:50:45.011087 kernel: TERM=linux
Dec 13 01:50:45.011097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:50:45.011106 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:50:45.011117 systemd[1]: Detected virtualization microsoft.
Dec 13 01:50:45.011128 systemd[1]: Detected architecture x86-64.
Dec 13 01:50:45.011139 systemd[1]: Running in initrd.
Dec 13 01:50:45.011147 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:50:45.011160 systemd[1]: Hostname set to .
Dec 13 01:50:45.011175 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:50:45.011186 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:50:45.011193 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:50:45.011201 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:50:45.011208 systemd[1]: Reached target paths.target.
Dec 13 01:50:45.011223 systemd[1]: Reached target slices.target.
Dec 13 01:50:45.011239 systemd[1]: Reached target swap.target.
Dec 13 01:50:45.011253 systemd[1]: Reached target timers.target.
Dec 13 01:50:45.011269 systemd[1]: Listening on iscsid.socket.
Dec 13 01:50:45.011284 systemd[1]: Listening on iscsiuio.socket.
Dec 13 01:50:45.011299 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 01:50:45.011310 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 01:50:45.011320 systemd[1]: Listening on systemd-journald.socket.
Dec 13 01:50:45.011332 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:50:45.011348 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:50:45.011362 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:50:45.011370 systemd[1]: Reached target sockets.target.
Dec 13 01:50:45.011378 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:50:45.011393 systemd[1]: Finished network-cleanup.service.
Dec 13 01:50:45.011409 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:50:45.011422 systemd[1]: Starting systemd-journald.service...
Dec 13 01:50:45.011432 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:50:45.011441 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:50:45.011457 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 01:50:45.011471 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:50:45.011484 systemd-journald[183]: Journal started
Dec 13 01:50:45.011548 systemd-journald[183]: Runtime Journal (/run/log/journal/69e1552a1eff4a3e8fab911b4d078ed4) is 8.0M, max 159.0M, 151.0M free.
Dec 13 01:50:44.998351 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 01:50:45.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.041764 kernel: audit: type=1130 audit(1734054645.026:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.041810 systemd[1]: Started systemd-journald.service.
Dec 13 01:50:45.044579 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:50:45.070051 kernel: audit: type=1130 audit(1734054645.044:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.070074 kernel: audit: type=1130 audit(1734054645.046:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.070084 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:50:45.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.049128 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 01:50:45.082527 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 01:50:45.101098 kernel: audit: type=1130 audit(1734054645.081:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.095640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:50:45.128878 kernel: Bridge firewalling registered
Dec 13 01:50:45.128942 kernel: audit: type=1130 audit(1734054645.111:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:50:45.112429 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:50:45.129848 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 01:50:45.132230 systemd-resolved[185]: Positive Trust Anchors:
Dec 13 01:50:45.132240 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:50:45.132289 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:50:45.132711 systemd[1]: Starting dracut-cmdline.service...
Dec 13 01:50:45.158446 systemd-resolved[185]: Defaulting to hostname 'linux'.
Dec 13 01:50:45.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.161761 systemd[1]: Started systemd-resolved.service. Dec 13 01:50:45.180585 kernel: audit: type=1130 audit(1734054645.131:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.162749 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 01:50:45.176619 systemd[1]: Reached target nss-lookup.target. Dec 13 01:50:45.196033 kernel: audit: type=1130 audit(1734054645.176:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.196117 dracut-cmdline[201]: dracut-dracut-053 Dec 13 01:50:45.200419 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:50:45.241996 kernel: SCSI subsystem initialized Dec 13 01:50:45.267515 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 01:50:45.267593 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:50:45.273144 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 01:50:45.277236 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:50:45.277427 systemd-modules-load[184]: Inserted module 'dm_multipath' Dec 13 01:50:45.280549 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:50:45.304086 kernel: audit: type=1130 audit(1734054645.282:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.283747 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:50:45.306108 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:50:45.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.323993 kernel: audit: type=1130 audit(1734054645.310:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.335999 kernel: iscsi: registered transport (tcp) Dec 13 01:50:45.363594 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:50:45.363656 kernel: QLogic iSCSI HBA Driver Dec 13 01:50:45.392220 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:50:45.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:50:45.395554 systemd[1]: Starting dracut-pre-udev.service... Dec 13 01:50:45.447002 kernel: raid6: avx512x4 gen() 18688 MB/s Dec 13 01:50:45.466996 kernel: raid6: avx512x4 xor() 8496 MB/s Dec 13 01:50:45.486991 kernel: raid6: avx512x2 gen() 18949 MB/s Dec 13 01:50:45.506994 kernel: raid6: avx512x2 xor() 29871 MB/s Dec 13 01:50:45.526988 kernel: raid6: avx512x1 gen() 18915 MB/s Dec 13 01:50:45.546991 kernel: raid6: avx512x1 xor() 26848 MB/s Dec 13 01:50:45.567990 kernel: raid6: avx2x4 gen() 18962 MB/s Dec 13 01:50:45.587988 kernel: raid6: avx2x4 xor() 7882 MB/s Dec 13 01:50:45.608987 kernel: raid6: avx2x2 gen() 18616 MB/s Dec 13 01:50:45.628991 kernel: raid6: avx2x2 xor() 22078 MB/s Dec 13 01:50:45.648986 kernel: raid6: avx2x1 gen() 14261 MB/s Dec 13 01:50:45.668989 kernel: raid6: avx2x1 xor() 19307 MB/s Dec 13 01:50:45.688988 kernel: raid6: sse2x4 gen() 11542 MB/s Dec 13 01:50:45.708988 kernel: raid6: sse2x4 xor() 7322 MB/s Dec 13 01:50:45.729990 kernel: raid6: sse2x2 gen() 12946 MB/s Dec 13 01:50:45.749986 kernel: raid6: sse2x2 xor() 7406 MB/s Dec 13 01:50:45.769988 kernel: raid6: sse2x1 gen() 11692 MB/s Dec 13 01:50:45.793580 kernel: raid6: sse2x1 xor() 5750 MB/s Dec 13 01:50:45.793603 kernel: raid6: using algorithm avx2x4 gen() 18962 MB/s Dec 13 01:50:45.793616 kernel: raid6: .... xor() 7882 MB/s, rmw enabled Dec 13 01:50:45.799878 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:50:45.815002 kernel: xor: automatically using best checksumming function avx Dec 13 01:50:45.911004 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:50:45.919040 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:50:45.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:50:45.922000 audit: BPF prog-id=7 op=LOAD Dec 13 01:50:45.923000 audit: BPF prog-id=8 op=LOAD Dec 13 01:50:45.923501 systemd[1]: Starting systemd-udevd.service... Dec 13 01:50:45.937771 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 01:50:45.942540 systemd[1]: Started systemd-udevd.service. Dec 13 01:50:45.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:45.949047 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:50:45.968066 dracut-pre-trigger[387]: rd.md=0: removing MD RAID activation Dec 13 01:50:45.998770 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:50:46.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:46.001927 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:50:46.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:46.038325 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:50:46.091002 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:50:46.096994 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 01:50:46.107998 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:50:46.119003 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 01:50:46.142001 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:50:46.151994 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:50:46.155995 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 01:50:46.160018 kernel: AES CTR mode by8 optimization enabled Dec 13 01:50:46.160048 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:50:46.166251 kernel: scsi host1: storvsc_host_t Dec 13 01:50:46.166302 kernel: scsi host0: storvsc_host_t Dec 13 01:50:46.174573 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:50:46.180995 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:50:46.195003 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:50:46.205836 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 01:50:46.205880 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:50:46.219766 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:50:46.224819 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:50:46.224834 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:50:46.244253 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:50:46.244423 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:50:46.244578 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:50:46.244748 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:50:46.244899 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:50:46.245085 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:50:46.245104 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:50:46.316999 kernel: hv_netvsc 7c1e5277-058e-7c1e-5277-058e7c1e5277 eth0: VF slot 1 added Dec 13 01:50:46.332828 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:50:46.332865 kernel: hv_pci d46c427f-244e-44a1-ab2b-5d6e255b83b1: PCI VMBus probing: Using version 0x10004 Dec 13 01:50:46.417316 kernel: hv_pci d46c427f-244e-44a1-ab2b-5d6e255b83b1: PCI host bridge to bus 244e:00 Dec 13 01:50:46.417507 kernel: pci_bus 244e:00: root bus 
resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 01:50:46.417682 kernel: pci_bus 244e:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:50:46.417827 kernel: pci 244e:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 01:50:46.418009 kernel: pci 244e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 01:50:46.418171 kernel: pci 244e:00:02.0: enabling Extended Tags Dec 13 01:50:46.418324 kernel: pci 244e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 244e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 01:50:46.418477 kernel: pci_bus 244e:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:50:46.418616 kernel: pci 244e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 01:50:46.510002 kernel: mlx5_core 244e:00:02.0: firmware version: 14.30.5000 Dec 13 01:50:46.758352 kernel: mlx5_core 244e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 01:50:46.758536 kernel: mlx5_core 244e:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 01:50:46.758699 kernel: mlx5_core 244e:00:02.0: mlx5e_tc_post_act_init:40:(pid 192): firmware level support is missing Dec 13 01:50:46.758859 kernel: hv_netvsc 7c1e5277-058e-7c1e-5277-058e7c1e5277 eth0: VF registering: eth1 Dec 13 01:50:46.759025 kernel: mlx5_core 244e:00:02.0 eth1: joined to eth0 Dec 13 01:50:46.753673 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:50:46.765995 kernel: mlx5_core 244e:00:02.0 enP9294s1: renamed from eth1 Dec 13 01:50:46.783001 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (445) Dec 13 01:50:46.796376 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:50:46.949619 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:50:46.958426 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Dec 13 01:50:46.961777 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:50:46.962704 systemd[1]: Starting disk-uuid.service... Dec 13 01:50:47.988997 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:50:47.990293 disk-uuid[561]: The operation has completed successfully. Dec 13 01:50:48.068681 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:50:48.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.068791 systemd[1]: Finished disk-uuid.service. Dec 13 01:50:48.071882 systemd[1]: Starting verity-setup.service... Dec 13 01:50:48.124325 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:50:48.337434 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:50:48.342496 systemd[1]: Finished verity-setup.service. Dec 13 01:50:48.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.347365 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:50:48.420994 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 01:50:48.420963 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:50:48.424985 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:50:48.429284 systemd[1]: Starting ignition-setup.service... Dec 13 01:50:48.434479 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 01:50:48.450597 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:50:48.450649 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:50:48.450662 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:50:48.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.505191 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:50:48.510000 audit: BPF prog-id=9 op=LOAD Dec 13 01:50:48.512112 systemd[1]: Starting systemd-networkd.service... Dec 13 01:50:48.538307 systemd-networkd[831]: lo: Link UP Dec 13 01:50:48.538320 systemd-networkd[831]: lo: Gained carrier Dec 13 01:50:48.539321 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:50:48.544473 systemd-networkd[831]: Enumeration completed Dec 13 01:50:48.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.544544 systemd[1]: Started systemd-networkd.service. Dec 13 01:50:48.545827 systemd-networkd[831]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:50:48.547662 systemd[1]: Reached target network.target. Dec 13 01:50:48.553467 systemd[1]: Starting iscsiuio.service... Dec 13 01:50:48.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.561571 systemd[1]: Started iscsiuio.service. Dec 13 01:50:48.567049 systemd[1]: Starting iscsid.service... 
Dec 13 01:50:48.571975 iscsid[840]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:50:48.571975 iscsid[840]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 01:50:48.571975 iscsid[840]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:50:48.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.597799 iscsid[840]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 01:50:48.597799 iscsid[840]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:50:48.597799 iscsid[840]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:50:48.573507 systemd[1]: Started iscsid.service. Dec 13 01:50:48.578411 systemd[1]: Starting dracut-initqueue.service... Dec 13 01:50:48.593086 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:50:48.597766 systemd[1]: Reached target remote-fs-pre.target. Dec 13 01:50:48.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.600007 systemd[1]: Reached target remote-cryptsetup.target. 
Dec 13 01:50:48.605390 systemd[1]: Reached target remote-fs.target. Dec 13 01:50:48.635137 kernel: mlx5_core 244e:00:02.0 enP9294s1: Link up Dec 13 01:50:48.612117 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:50:48.624447 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:50:48.671080 kernel: hv_netvsc 7c1e5277-058e-7c1e-5277-058e7c1e5277 eth0: Data path switched to VF: enP9294s1 Dec 13 01:50:48.671458 systemd-networkd[831]: enP9294s1: Link UP Dec 13 01:50:48.677589 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:50:48.671595 systemd-networkd[831]: eth0: Link UP Dec 13 01:50:48.676355 systemd-networkd[831]: eth0: Gained carrier Dec 13 01:50:48.684166 systemd-networkd[831]: enP9294s1: Gained carrier Dec 13 01:50:48.707076 systemd-networkd[831]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 01:50:48.739156 systemd[1]: Finished ignition-setup.service. Dec 13 01:50:48.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:48.740142 systemd[1]: Starting ignition-fetch-offline.service... 
Dec 13 01:50:50.099190 systemd-networkd[831]: eth0: Gained IPv6LL Dec 13 01:50:52.323998 ignition[855]: Ignition 2.14.0 Dec 13 01:50:52.324018 ignition[855]: Stage: fetch-offline Dec 13 01:50:52.324101 ignition[855]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:50:52.324164 ignition[855]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:50:52.422385 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:50:52.422553 ignition[855]: parsed url from cmdline: "" Dec 13 01:50:52.422556 ignition[855]: no config URL provided Dec 13 01:50:52.422562 ignition[855]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:50:52.427023 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:50:52.422571 ignition[855]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:50:52.422577 ignition[855]: failed to fetch config: resource requires networking Dec 13 01:50:52.426140 ignition[855]: Ignition finished successfully Dec 13 01:50:52.446702 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 01:50:52.446741 kernel: audit: type=1130 audit(1734054652.441:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.442817 systemd[1]: Starting ignition-fetch.service... 
Dec 13 01:50:52.451085 ignition[861]: Ignition 2.14.0 Dec 13 01:50:52.451090 ignition[861]: Stage: fetch Dec 13 01:50:52.451186 ignition[861]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:50:52.451212 ignition[861]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:50:52.454316 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:50:52.455712 ignition[861]: parsed url from cmdline: "" Dec 13 01:50:52.455729 ignition[861]: no config URL provided Dec 13 01:50:52.455741 ignition[861]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:50:52.455752 ignition[861]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:50:52.455790 ignition[861]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:50:52.564993 ignition[861]: GET result: OK Dec 13 01:50:52.565074 ignition[861]: config has been read from IMDS userdata Dec 13 01:50:52.565093 ignition[861]: parsing config with SHA512: 38a2fb10c6ad6ffe6a8b4b6184a31e04b8613e6f82fac79408d985a5d1a059a5ad3deff0915b0e6115c3018bc843c0c21cb39f0872ce4b36d63a228163e303b2 Dec 13 01:50:52.572619 unknown[861]: fetched base config from "system" Dec 13 01:50:52.575369 unknown[861]: fetched base config from "system" Dec 13 01:50:52.575382 unknown[861]: fetched user config from "azure" Dec 13 01:50:52.575904 ignition[861]: fetch: fetch complete Dec 13 01:50:52.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.579271 systemd[1]: Finished ignition-fetch.service. 
Dec 13 01:50:52.601165 kernel: audit: type=1130 audit(1734054652.582:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.575913 ignition[861]: fetch: fetch passed Dec 13 01:50:52.583691 systemd[1]: Starting ignition-kargs.service... Dec 13 01:50:52.575977 ignition[861]: Ignition finished successfully Dec 13 01:50:52.609462 ignition[867]: Ignition 2.14.0 Dec 13 01:50:52.609472 ignition[867]: Stage: kargs Dec 13 01:50:52.609603 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:50:52.609636 ignition[867]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:50:52.634874 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:50:52.635864 ignition[867]: kargs: kargs passed Dec 13 01:50:52.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.638043 systemd[1]: Finished ignition-kargs.service. Dec 13 01:50:52.657142 kernel: audit: type=1130 audit(1734054652.639:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.635908 ignition[867]: Ignition finished successfully Dec 13 01:50:52.641039 systemd[1]: Starting ignition-disks.service... 
Dec 13 01:50:52.667630 ignition[873]: Ignition 2.14.0 Dec 13 01:50:52.667640 ignition[873]: Stage: disks Dec 13 01:50:52.667778 ignition[873]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:50:52.667811 ignition[873]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:50:52.677221 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:50:52.681571 ignition[873]: disks: disks passed Dec 13 01:50:52.681641 ignition[873]: Ignition finished successfully Dec 13 01:50:52.684660 systemd[1]: Finished ignition-disks.service. Dec 13 01:50:52.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.688057 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:50:52.704418 kernel: audit: type=1130 audit(1734054652.687:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.704519 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:50:52.708965 systemd[1]: Reached target local-fs.target. Dec 13 01:50:52.712897 systemd[1]: Reached target sysinit.target. Dec 13 01:50:52.716732 systemd[1]: Reached target basic.target. Dec 13 01:50:52.721670 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:50:52.778222 systemd-fsck[881]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 01:50:52.783523 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:50:52.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:50:52.789378 systemd[1]: Mounting sysroot.mount... Dec 13 01:50:52.804146 kernel: audit: type=1130 audit(1734054652.787:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:52.816016 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:50:52.816040 systemd[1]: Mounted sysroot.mount. Dec 13 01:50:52.817965 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:50:52.895426 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:50:52.901762 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 01:50:52.906195 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:50:52.906236 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:50:52.915620 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:50:52.998431 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:50:53.004603 systemd[1]: Starting initrd-setup-root.service... Dec 13 01:50:53.021001 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (892) Dec 13 01:50:53.021053 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:50:53.024412 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:50:53.037255 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:50:53.037284 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:50:53.036100 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 01:50:53.061575 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:50:53.083107 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:50:53.089913 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:50:53.588642 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:50:53.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:53.594603 systemd[1]: Starting ignition-mount.service... Dec 13 01:50:53.611319 kernel: audit: type=1130 audit(1734054653.592:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:53.609700 systemd[1]: Starting sysroot-boot.service... Dec 13 01:50:53.616592 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 01:50:53.616733 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 01:50:53.632335 systemd[1]: Finished sysroot-boot.service. Dec 13 01:50:53.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:53.648065 kernel: audit: type=1130 audit(1734054653.635:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:50:53.656873 ignition[961]: INFO : Ignition 2.14.0 Dec 13 01:50:53.656873 ignition[961]: INFO : Stage: mount Dec 13 01:50:53.661031 ignition[961]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:50:53.661031 ignition[961]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:50:53.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:53.682840 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:50:53.682840 ignition[961]: INFO : mount: mount passed Dec 13 01:50:53.682840 ignition[961]: INFO : Ignition finished successfully Dec 13 01:50:53.690535 kernel: audit: type=1130 audit(1734054653.666:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:53.664661 systemd[1]: Finished ignition-mount.service. 
Dec 13 01:50:54.413290 coreos-metadata[891]: Dec 13 01:50:54.413 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:50:54.430830 coreos-metadata[891]: Dec 13 01:50:54.430 INFO Fetch successful Dec 13 01:50:54.466140 coreos-metadata[891]: Dec 13 01:50:54.466 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:50:54.484028 coreos-metadata[891]: Dec 13 01:50:54.483 INFO Fetch successful Dec 13 01:50:54.503888 coreos-metadata[891]: Dec 13 01:50:54.503 INFO wrote hostname ci-3510.3.6-a-002183bad1 to /sysroot/etc/hostname Dec 13 01:50:54.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:54.505693 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 01:50:54.527530 kernel: audit: type=1130 audit(1734054654.509:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:54.511639 systemd[1]: Starting ignition-files.service... Dec 13 01:50:54.530730 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:50:54.544001 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (970) Dec 13 01:50:54.553668 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:50:54.553700 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:50:54.553719 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:50:54.563443 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 01:50:54.577302 ignition[989]: INFO : Ignition 2.14.0 Dec 13 01:50:54.577302 ignition[989]: INFO : Stage: files Dec 13 01:50:54.581832 ignition[989]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:50:54.581832 ignition[989]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:50:54.593197 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:50:54.610221 ignition[989]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:50:54.613485 ignition[989]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:50:54.613485 ignition[989]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:50:54.690457 ignition[989]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:50:54.693970 ignition[989]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:50:54.706864 unknown[989]: wrote ssh authorized keys file for user: core Dec 13 01:50:54.709256 ignition[989]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:50:54.730162 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:50:54.734913 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:50:54.740039 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:50:54.740039 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:50:54.740039 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:50:54.740039 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:50:54.740039 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 01:50:54.740039 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 01:50:54.775204 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (991) Dec 13 01:50:54.756357 systemd[1]: mnt-oem3725397501.mount: Deactivated successfully. Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3725397501" Dec 13 01:50:54.778457 ignition[989]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3725397501": device or resource busy Dec 13 01:50:54.778457 ignition[989]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3725397501", trying btrfs: device or resource busy Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3725397501" Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3725397501" Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem3725397501" Dec 13 01:50:54.778457 
ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem3725397501" Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1471679243" Dec 13 01:50:54.778457 ignition[989]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1471679243": device or resource busy Dec 13 01:50:54.778457 ignition[989]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1471679243", trying btrfs: device or resource busy Dec 13 01:50:54.778457 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1471679243" Dec 13 01:50:54.847749 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1471679243" Dec 13 01:50:54.847749 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1471679243" Dec 13 01:50:54.847749 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1471679243" Dec 13 01:50:54.847749 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" 
Dec 13 01:50:54.847749 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:50:54.847749 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 01:50:54.778803 systemd[1]: mnt-oem1471679243.mount: Deactivated successfully. Dec 13 01:50:55.334755 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Dec 13 01:50:55.744330 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 01:50:55.744330 ignition[989]: INFO : files: op(f): [started] processing unit "waagent.service" Dec 13 01:50:55.744330 ignition[989]: INFO : files: op(f): [finished] processing unit "waagent.service" Dec 13 01:50:55.744330 ignition[989]: INFO : files: op(10): [started] processing unit "nvidia.service" Dec 13 01:50:55.744330 ignition[989]: INFO : files: op(10): [finished] processing unit "nvidia.service" Dec 13 01:50:55.744330 ignition[989]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service" Dec 13 01:50:55.792002 kernel: audit: type=1130 audit(1734054655.758:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:50:55.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.756200 systemd[1]: Finished ignition-files.service. Dec 13 01:50:55.798659 ignition[989]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service" Dec 13 01:50:55.798659 ignition[989]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service" Dec 13 01:50:55.798659 ignition[989]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service" Dec 13 01:50:55.798659 ignition[989]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:50:55.798659 ignition[989]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:50:55.798659 ignition[989]: INFO : files: files passed Dec 13 01:50:55.798659 ignition[989]: INFO : Ignition finished successfully Dec 13 01:50:55.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.760896 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Dec 13 01:50:55.772079 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 01:50:55.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.841872 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:50:55.772818 systemd[1]: Starting ignition-quench.service... Dec 13 01:50:55.776346 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:50:55.776429 systemd[1]: Finished ignition-quench.service. Dec 13 01:50:55.786455 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 01:50:55.792054 systemd[1]: Reached target ignition-complete.target. Dec 13 01:50:55.795058 systemd[1]: Starting initrd-parse-etc.service... Dec 13 01:50:55.811030 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:50:55.811117 systemd[1]: Finished initrd-parse-etc.service. Dec 13 01:50:55.816353 systemd[1]: Reached target initrd-fs.target. Dec 13 01:50:55.823015 systemd[1]: Reached target initrd.target. Dec 13 01:50:55.825109 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 01:50:55.825794 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 01:50:55.838473 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 01:50:55.842450 systemd[1]: Starting initrd-cleanup.service... Dec 13 01:50:55.883672 systemd[1]: Stopped target nss-lookup.target. Dec 13 01:50:55.888142 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 01:50:55.892948 systemd[1]: Stopped target timers.target. Dec 13 01:50:55.896960 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:50:55.899453 systemd[1]: Stopped dracut-pre-pivot.service. 
Dec 13 01:50:55.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.903850 systemd[1]: Stopped target initrd.target. Dec 13 01:50:55.907801 systemd[1]: Stopped target basic.target. Dec 13 01:50:55.911783 systemd[1]: Stopped target ignition-complete.target. Dec 13 01:50:55.916293 systemd[1]: Stopped target ignition-diskful.target. Dec 13 01:50:55.920960 systemd[1]: Stopped target initrd-root-device.target. Dec 13 01:50:55.925728 systemd[1]: Stopped target remote-fs.target. Dec 13 01:50:55.929840 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 01:50:55.934221 systemd[1]: Stopped target sysinit.target. Dec 13 01:50:55.938253 systemd[1]: Stopped target local-fs.target. Dec 13 01:50:55.942233 systemd[1]: Stopped target local-fs-pre.target. Dec 13 01:50:55.946118 systemd[1]: Stopped target swap.target. Dec 13 01:50:55.948041 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:50:55.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.948164 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 01:50:55.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.952230 systemd[1]: Stopped target cryptsetup.target. Dec 13 01:50:55.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.956593 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 13 01:50:55.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.956746 systemd[1]: Stopped dracut-initqueue.service. Dec 13 01:50:55.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.959000 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:50:55.982354 iscsid[840]: iscsid shutting down. Dec 13 01:50:55.959140 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 01:50:55.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.999599 ignition[1027]: INFO : Ignition 2.14.0 Dec 13 01:50:55.999599 ignition[1027]: INFO : Stage: umount Dec 13 01:50:55.999599 ignition[1027]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:50:55.999599 ignition[1027]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:50:56.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.963520 systemd[1]: ignition-files.service: Deactivated successfully. 
Dec 13 01:50:56.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.018446 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:50:56.018446 ignition[1027]: INFO : umount: umount passed Dec 13 01:50:56.018446 ignition[1027]: INFO : Ignition finished successfully Dec 13 01:50:56.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.963650 systemd[1]: Stopped ignition-files.service. Dec 13 01:50:56.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.967642 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:50:56.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.967773 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 01:50:55.973436 systemd[1]: Stopping ignition-mount.service... Dec 13 01:50:55.976478 systemd[1]: Stopping iscsid.service... Dec 13 01:50:55.987481 systemd[1]: Stopping sysroot-boot.service... Dec 13 01:50:55.989448 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:50:55.989615 systemd[1]: Stopped systemd-udev-trigger.service. 
Dec 13 01:50:55.992221 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:50:55.992376 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 01:50:56.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:55.997111 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 01:50:55.997218 systemd[1]: Stopped iscsid.service. Dec 13 01:50:56.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.007556 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:50:56.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.007631 systemd[1]: Stopped ignition-mount.service. Dec 13 01:50:56.018587 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:50:56.018681 systemd[1]: Stopped ignition-disks.service. Dec 13 01:50:56.024082 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:50:56.024128 systemd[1]: Stopped ignition-kargs.service. Dec 13 01:50:56.028091 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:50:56.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.028136 systemd[1]: Stopped ignition-fetch.service. 
Dec 13 01:50:56.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.034693 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:50:56.034742 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 01:50:56.036963 systemd[1]: Stopped target paths.target. Dec 13 01:50:56.119000 audit: BPF prog-id=6 op=UNLOAD Dec 13 01:50:56.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.041819 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:50:56.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.047028 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 01:50:56.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.052914 systemd[1]: Stopped target slices.target. Dec 13 01:50:56.057681 systemd[1]: Stopped target sockets.target. Dec 13 01:50:56.059601 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:50:56.059642 systemd[1]: Closed iscsid.socket. Dec 13 01:50:56.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.063421 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:50:56.063474 systemd[1]: Stopped ignition-setup.service. 
Dec 13 01:50:56.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.070451 systemd[1]: Stopping iscsiuio.service... Dec 13 01:50:56.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.075292 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:50:56.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.075774 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 01:50:56.075871 systemd[1]: Stopped iscsiuio.service. Dec 13 01:50:56.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.078261 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:50:56.078347 systemd[1]: Finished initrd-cleanup.service. Dec 13 01:50:56.083355 systemd[1]: Stopped target network.target. Dec 13 01:50:56.086444 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:50:56.086478 systemd[1]: Closed iscsiuio.socket. Dec 13 01:50:56.088471 systemd[1]: Stopping systemd-networkd.service... Dec 13 01:50:56.092799 systemd[1]: Stopping systemd-resolved.service... Dec 13 01:50:56.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:50:56.097024 systemd-networkd[831]: eth0: DHCPv6 lease lost Dec 13 01:50:56.177000 audit: BPF prog-id=9 op=UNLOAD Dec 13 01:50:56.100675 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:50:56.100759 systemd[1]: Stopped systemd-networkd.service. Dec 13 01:50:56.106408 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:50:56.108184 systemd[1]: Stopped systemd-resolved.service. Dec 13 01:50:56.110757 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:50:56.110790 systemd[1]: Closed systemd-networkd.socket. Dec 13 01:50:56.119470 systemd[1]: Stopping network-cleanup.service... Dec 13 01:50:56.121368 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:50:56.121428 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 01:50:56.123672 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:50:56.123721 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:50:56.128571 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:50:56.128619 systemd[1]: Stopped systemd-modules-load.service. Dec 13 01:50:56.133244 systemd[1]: Stopping systemd-udevd.service... Dec 13 01:50:56.136457 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 01:50:56.141636 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:50:56.141799 systemd[1]: Stopped systemd-udevd.service. Dec 13 01:50:56.145857 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:50:56.145895 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 01:50:56.150170 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:50:56.150211 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 01:50:56.152708 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:50:56.152759 systemd[1]: Stopped dracut-pre-udev.service. 
Dec 13 01:50:56.157126 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:50:56.157173 systemd[1]: Stopped dracut-cmdline.service. Dec 13 01:50:56.161058 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:50:56.161104 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 01:50:56.166493 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 01:50:56.170404 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:50:56.170460 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 01:50:56.173243 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:50:56.173295 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 01:50:56.177175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:50:56.177224 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 01:50:56.260126 kernel: hv_netvsc 7c1e5277-058e-7c1e-5277-058e7c1e5277 eth0: Data path switched from VF: enP9294s1 Dec 13 01:50:56.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.261128 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 01:50:56.264949 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:50:56.267393 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 01:50:56.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:50:56.281314 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:50:56.283816 systemd[1]: Stopped network-cleanup.service. Dec 13 01:50:56.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.398402 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:50:56.398537 systemd[1]: Stopped sysroot-boot.service. Dec 13 01:50:56.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.402934 systemd[1]: Reached target initrd-switch-root.target. Dec 13 01:50:56.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:50:56.406589 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:50:56.406651 systemd[1]: Stopped initrd-setup-root.service. Dec 13 01:50:56.409308 systemd[1]: Starting initrd-switch-root.service... Dec 13 01:50:56.425580 systemd[1]: Switching root. Dec 13 01:50:56.448758 systemd-journald[183]: Journal stopped Dec 13 01:51:10.978609 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 01:51:10.978640 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 01:51:10.978654 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 01:51:10.978663 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 01:51:10.978671 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:51:10.978682 kernel: SELinux: policy capability open_perms=1 Dec 13 01:51:10.978693 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:51:10.978704 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:51:10.978715 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:51:10.978723 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:51:10.978733 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:51:10.978742 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:51:10.978750 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 01:51:10.978759 kernel: audit: type=1403 audit(1734054658.771:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:51:10.978773 systemd[1]: Successfully loaded SELinux policy in 264.257ms. Dec 13 01:51:10.978785 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.685ms. Dec 13 01:51:10.978797 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:51:10.978808 systemd[1]: Detected virtualization microsoft. Dec 13 01:51:10.978820 systemd[1]: Detected architecture x86-64. Dec 13 01:51:10.978829 systemd[1]: Detected first boot. Dec 13 01:51:10.978841 systemd[1]: Hostname set to . Dec 13 01:51:10.978852 systemd[1]: Initializing machine ID from random generator. 
Dec 13 01:51:10.978862 kernel: audit: type=1400 audit(1734054659.477:83): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:51:10.978873 kernel: audit: type=1400 audit(1734054659.491:84): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:51:10.978883 kernel: audit: type=1400 audit(1734054659.491:85): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:51:10.978896 kernel: audit: type=1334 audit(1734054659.517:86): prog-id=10 op=LOAD Dec 13 01:51:10.978905 kernel: audit: type=1334 audit(1734054659.517:87): prog-id=10 op=UNLOAD Dec 13 01:51:10.978916 kernel: audit: type=1334 audit(1734054659.522:88): prog-id=11 op=LOAD Dec 13 01:51:10.978924 kernel: audit: type=1334 audit(1734054659.522:89): prog-id=11 op=UNLOAD Dec 13 01:51:10.978936 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 01:51:10.978945 kernel: audit: type=1400 audit(1734054661.279:90): avc: denied { associate } for pid=1062 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:51:10.978957 kernel: audit: type=1300 audit(1734054661.279:90): arch=c000003e syscall=188 success=yes exit=0 a0=c00018a2d2 a1=c000194378 a2=c000196800 a3=32 items=0 ppid=1045 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:51:10.978971 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:51:10.978992 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:51:10.979003 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:51:10.979017 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:51:10.979026 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 01:51:10.979037 kernel: audit: type=1334 audit(1734054670.433:92): prog-id=12 op=LOAD
Dec 13 01:51:10.979047 kernel: audit: type=1334 audit(1734054670.433:93): prog-id=3 op=UNLOAD
Dec 13 01:51:10.979060 kernel: audit: type=1334 audit(1734054670.437:94): prog-id=13 op=LOAD
Dec 13 01:51:10.979074 kernel: audit: type=1334 audit(1734054670.442:95): prog-id=14 op=LOAD
Dec 13 01:51:10.979084 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:51:10.979095 kernel: audit: type=1334 audit(1734054670.442:96): prog-id=4 op=UNLOAD
Dec 13 01:51:10.979104 kernel: audit: type=1334 audit(1734054670.442:97): prog-id=5 op=UNLOAD
Dec 13 01:51:10.979116 kernel: audit: type=1131 audit(1734054670.443:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.979127 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 01:51:10.979138 kernel: audit: type=1334 audit(1734054670.489:99): prog-id=12 op=UNLOAD
Dec 13 01:51:10.979152 kernel: audit: type=1130 audit(1734054670.495:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.979161 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:51:10.979174 kernel: audit: type=1131 audit(1734054670.495:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.979183 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 01:51:10.979196 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 01:51:10.979208 systemd[1]: Created slice system-getty.slice.
Dec 13 01:51:10.979218 systemd[1]: Created slice system-modprobe.slice.
Dec 13 01:51:10.979232 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 01:51:10.979242 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 01:51:10.979254 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 01:51:10.979264 systemd[1]: Created slice user.slice.
Dec 13 01:51:10.979276 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:51:10.979288 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 01:51:10.979298 systemd[1]: Set up automount boot.automount.
Dec 13 01:51:10.979308 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 01:51:10.979319 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 01:51:10.979334 systemd[1]: Stopped target initrd-fs.target.
Dec 13 01:51:10.979343 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 01:51:10.979355 systemd[1]: Reached target integritysetup.target.
Dec 13 01:51:10.979367 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 01:51:10.979377 systemd[1]: Reached target remote-fs.target.
Dec 13 01:51:10.979388 systemd[1]: Reached target slices.target.
Dec 13 01:51:10.979399 systemd[1]: Reached target swap.target.
Dec 13 01:51:10.979411 systemd[1]: Reached target torcx.target.
Dec 13 01:51:10.979423 systemd[1]: Reached target veritysetup.target.
Dec 13 01:51:10.979435 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 01:51:10.979447 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 01:51:10.979457 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:51:10.979470 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:51:10.979484 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:51:10.979495 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 01:51:10.979506 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 01:51:10.979517 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 01:51:10.979530 systemd[1]: Mounting media.mount...
Dec 13 01:51:10.979540 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:51:10.979552 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 01:51:10.979564 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 01:51:10.979574 systemd[1]: Mounting tmp.mount...
Dec 13 01:51:10.979589 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 01:51:10.979601 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:51:10.979612 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:51:10.979621 systemd[1]: Starting modprobe@configfs.service...
Dec 13 01:51:10.979634 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:51:10.979646 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:51:10.979656 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:51:10.979667 systemd[1]: Starting modprobe@fuse.service...
Dec 13 01:51:10.979678 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:51:10.979692 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:51:10.979702 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:51:10.979714 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 01:51:10.979727 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:51:10.979737 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:51:10.979749 systemd[1]: Stopped systemd-journald.service.
Dec 13 01:51:10.979759 systemd[1]: Starting systemd-journald.service...
Dec 13 01:51:10.979771 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:51:10.979783 systemd[1]: Starting systemd-network-generator.service...
Dec 13 01:51:10.979794 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 01:51:10.979807 kernel: fuse: init (API version 7.34)
Dec 13 01:51:10.979816 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 01:51:10.979828 kernel: loop: module loaded
Dec 13 01:51:10.979838 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:51:10.979850 systemd[1]: Stopped verity-setup.service.
Dec 13 01:51:10.979859 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:51:10.979872 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 01:51:10.979886 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 01:51:10.979896 systemd[1]: Mounted media.mount.
Dec 13 01:51:10.979909 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 01:51:10.979919 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 01:51:10.979930 systemd[1]: Mounted tmp.mount.
Dec 13 01:51:10.979941 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 01:51:10.979958 systemd-journald[1171]: Journal started
Dec 13 01:51:10.980026 systemd-journald[1171]: Runtime Journal (/run/log/journal/84df1e19d41a48a0bdc4496303ac1473) is 8.0M, max 159.0M, 151.0M free.
Dec 13 01:50:58.771000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:50:59.477000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:50:59.491000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:50:59.491000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:50:59.517000 audit: BPF prog-id=10 op=LOAD
Dec 13 01:50:59.517000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 01:50:59.522000 audit: BPF prog-id=11 op=LOAD
Dec 13 01:50:59.522000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 01:51:01.279000 audit[1062]: AVC avc: denied { associate } for pid=1062 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:51:01.279000 audit[1062]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018a2d2 a1=c000194378 a2=c000196800 a3=32 items=0 ppid=1045 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:51:01.279000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:51:01.287000 audit[1062]: AVC avc: denied { associate } for pid=1062 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 01:51:01.287000 audit[1062]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018a3a9 a2=1ed a3=0 items=2 ppid=1045 pid=1062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:51:01.287000 audit: CWD cwd="/"
Dec 13 01:51:01.287000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:01.287000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:01.287000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:51:10.433000 audit: BPF prog-id=12 op=LOAD
Dec 13 01:51:10.433000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 01:51:10.437000 audit: BPF prog-id=13 op=LOAD
Dec 13 01:51:10.442000 audit: BPF prog-id=14 op=LOAD
Dec 13 01:51:10.442000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 01:51:10.442000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 01:51:10.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.489000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 01:51:10.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.879000 audit: BPF prog-id=15 op=LOAD
Dec 13 01:51:10.879000 audit: BPF prog-id=16 op=LOAD
Dec 13 01:51:10.879000 audit: BPF prog-id=17 op=LOAD
Dec 13 01:51:10.879000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 01:51:10.879000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 01:51:10.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.974000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 01:51:10.974000 audit[1171]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe2033ca80 a2=4000 a3=7ffe2033cb1c items=0 ppid=1 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:51:10.974000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 01:51:01.053445 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:51:10.432388 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:51:01.053845 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 01:51:10.444522 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:51:01.053867 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 01:51:01.053905 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 01:51:01.053917 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 01:51:01.053972 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 01:51:01.054011 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 01:51:01.054243 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 01:51:01.054287 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 01:51:01.054302 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 01:51:01.084086 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 01:51:01.084154 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 01:51:01.084183 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 01:51:01.084207 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 01:51:01.084236 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 01:51:01.084251 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 01:51:09.267388 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:09Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:51:09.267646 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:09Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:51:09.267745 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:09Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:51:09.267905 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:09Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:51:09.267950 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:09Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 01:51:09.268019 /usr/lib/systemd/system-generators/torcx-generator[1062]: time="2024-12-13T01:51:09Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 01:51:10.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.990410 systemd[1]: Started systemd-journald.service.
Dec 13 01:51:10.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.991497 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:51:10.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.993967 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:51:10.994117 systemd[1]: Finished modprobe@configfs.service.
Dec 13 01:51:10.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.996456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:51:10.996596 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:51:10.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:10.998912 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:51:10.999071 systemd[1]: Finished modprobe@drm.service.
Dec 13 01:51:11.001355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:51:11.001484 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:51:11.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.004273 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:51:11.004404 systemd[1]: Finished modprobe@fuse.service.
Dec 13 01:51:11.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.006635 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:51:11.006772 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:51:11.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.009036 systemd[1]: Finished systemd-network-generator.service.
Dec 13 01:51:11.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.012501 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 01:51:11.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.015225 systemd[1]: Reached target network-pre.target.
Dec 13 01:51:11.018611 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 01:51:11.022306 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 01:51:11.025723 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:51:11.042023 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 01:51:11.045620 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 01:51:11.047584 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:51:11.048947 systemd[1]: Starting systemd-random-seed.service...
Dec 13 01:51:11.051071 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:51:11.052577 systemd[1]: Starting systemd-sysusers.service...
Dec 13 01:51:11.057739 systemd[1]: Finished systemd-modules-load.service.
Dec 13 01:51:11.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.060411 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 01:51:11.062684 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 01:51:11.066345 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:51:11.082647 systemd[1]: Finished systemd-random-seed.service.
Dec 13 01:51:11.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.085061 systemd[1]: Reached target first-boot-complete.target.
Dec 13 01:51:11.097318 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 01:51:11.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.100654 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 01:51:11.103318 systemd-journald[1171]: Time spent on flushing to /var/log/journal/84df1e19d41a48a0bdc4496303ac1473 is 21.186ms for 1140 entries.
Dec 13 01:51:11.103318 systemd-journald[1171]: System Journal (/var/log/journal/84df1e19d41a48a0bdc4496303ac1473) is 8.0M, max 2.6G, 2.6G free.
Dec 13 01:51:11.197090 systemd-journald[1171]: Received client request to flush runtime journal.
Dec 13 01:51:11.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.120743 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:51:11.197335 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:51:11.198317 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 01:51:11.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.747328 systemd[1]: Finished systemd-sysusers.service.
Dec 13 01:51:11.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:11.751223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:51:12.103196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:51:12.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:12.367510 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 01:51:12.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:12.370000 audit: BPF prog-id=18 op=LOAD
Dec 13 01:51:12.370000 audit: BPF prog-id=19 op=LOAD
Dec 13 01:51:12.370000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 01:51:12.370000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 01:51:12.371534 systemd[1]: Starting systemd-udevd.service...
Dec 13 01:51:12.389821 systemd-udevd[1190]: Using default interface naming scheme 'v252'.
Dec 13 01:51:12.713215 systemd[1]: Started systemd-udevd.service.
Dec 13 01:51:12.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:12.716000 audit: BPF prog-id=20 op=LOAD
Dec 13 01:51:12.718235 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:51:12.799680 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 01:51:12.827045 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:51:12.837000 audit[1198]: AVC avc: denied { confidentiality } for pid=1198 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:51:12.849081 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 01:51:12.857994 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 01:51:12.837000 audit[1198]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c665e7de20 a1=f884 a2=7f3b4550abc5 a3=5 items=12 ppid=1190 pid=1198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:51:12.863001 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 01:51:12.837000 audit: CWD cwd="/"
Dec 13 01:51:12.837000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=1 name=(null) inode=15062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=2 name=(null) inode=15062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=3 name=(null) inode=15063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=4 name=(null) inode=15062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=5 name=(null) inode=15064 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.871581 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:51:12.871630 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:51:12.837000 audit: PATH item=6 name=(null) inode=15062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=7 name=(null) inode=15065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=8 name=(null) inode=15062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=9 name=(null) inode=15066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=10 name=(null) inode=15062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PATH item=11 name=(null) inode=15067 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:51:12.837000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 01:51:12.881000 audit: BPF prog-id=21 op=LOAD
Dec 13 01:51:12.882000 audit: BPF prog-id=22 op=LOAD
Dec 13 01:51:12.882000 audit: BPF prog-id=23 op=LOAD
Dec 13 01:51:12.884038 systemd[1]: Starting systemd-userdbd.service...
Dec 13 01:51:13.421161 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:51:13.421236 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:51:13.421266 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:51:13.421291 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 01:51:13.421326 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 01:51:13.429413 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:51:13.438010 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:51:13.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:13.491968 systemd[1]: Started systemd-userdbd.service.
Dec 13 01:51:13.626333 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1195)
Dec 13 01:51:13.658339 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Dec 13 01:51:13.698502 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 01:51:13.764674 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 01:51:13.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:13.769275 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 01:51:13.878737 systemd-networkd[1196]: lo: Link UP
Dec 13 01:51:13.878749 systemd-networkd[1196]: lo: Gained carrier
Dec 13 01:51:13.879336 systemd-networkd[1196]: Enumeration completed
Dec 13 01:51:13.879463 systemd[1]: Started systemd-networkd.service.
Dec 13 01:51:13.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:13.883456 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 01:51:13.920277 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:51:13.974346 kernel: mlx5_core 244e:00:02.0 enP9294s1: Link up
Dec 13 01:51:13.999322 kernel: hv_netvsc 7c1e5277-058e-7c1e-5277-058e7c1e5277 eth0: Data path switched to VF: enP9294s1
Dec 13 01:51:13.999294 systemd-networkd[1196]: enP9294s1: Link UP
Dec 13 01:51:13.999932 systemd-networkd[1196]: eth0: Link UP
Dec 13 01:51:14.000023 systemd-networkd[1196]: eth0: Gained carrier
Dec 13 01:51:14.004595 systemd-networkd[1196]: enP9294s1: Gained carrier
Dec 13 01:51:14.040469 systemd-networkd[1196]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:51:14.157971 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:51:14.184395 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 01:51:14.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.187114 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:51:14.190551 systemd[1]: Starting lvm2-activation.service...
Dec 13 01:51:14.196088 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:51:14.223343 systemd[1]: Finished lvm2-activation.service.
Dec 13 01:51:14.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.226494 systemd[1]: Reached target local-fs-pre.target.
Dec 13 01:51:14.229209 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:51:14.229248 systemd[1]: Reached target local-fs.target.
Dec 13 01:51:14.231818 systemd[1]: Reached target machines.target.
Dec 13 01:51:14.235206 systemd[1]: Starting ldconfig.service...
Dec 13 01:51:14.257504 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:51:14.257616 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:51:14.259026 systemd[1]: Starting systemd-boot-update.service...
Dec 13 01:51:14.262612 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 01:51:14.266769 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 01:51:14.270118 systemd[1]: Starting systemd-sysext.service...
Dec 13 01:51:14.748960 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1272 (bootctl)
Dec 13 01:51:14.751368 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 01:51:14.801058 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 01:51:14.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.815053 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 01:51:14.848817 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 01:51:14.849011 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 01:51:14.862415 kernel: loop0: detected capacity change from 0 to 210664
Dec 13 01:51:14.892333 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:51:14.900164 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:51:14.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.901300 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 01:51:14.914331 kernel: loop1: detected capacity change from 0 to 210664
Dec 13 01:51:14.918377 (sd-sysext)[1284]: Using extensions 'kubernetes'.
Dec 13 01:51:14.918778 (sd-sysext)[1284]: Merged extensions into '/usr'.
Dec 13 01:51:14.934323 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:51:14.935717 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 01:51:14.938280 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:51:14.940137 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:51:14.944116 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:51:14.947857 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:51:14.950226 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:51:14.950444 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:51:14.950587 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:51:14.953027 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 01:51:14.956197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:51:14.956379 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:51:14.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.959356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:51:14.959498 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:51:14.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.962528 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:51:14.962666 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:51:14.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.965609 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:51:14.965750 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:51:14.966894 systemd[1]: Finished systemd-sysext.service.
Dec 13 01:51:14.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:14.970659 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:51:14.973964 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 01:51:14.982228 systemd[1]: Reloading.
Dec 13 01:51:14.991870 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 01:51:15.024953 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:51:15.042478 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:51:15.061821 /usr/lib/systemd/system-generators/torcx-generator[1313]: time="2024-12-13T01:51:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:51:15.061860 /usr/lib/systemd/system-generators/torcx-generator[1313]: time="2024-12-13T01:51:15Z" level=info msg="torcx already run"
Dec 13 01:51:15.139467 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:51:15.139489 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:51:15.156055 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:51:15.222000 audit: BPF prog-id=24 op=LOAD
Dec 13 01:51:15.222000 audit: BPF prog-id=25 op=LOAD
Dec 13 01:51:15.222000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 01:51:15.222000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 01:51:15.224000 audit: BPF prog-id=26 op=LOAD
Dec 13 01:51:15.224000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 01:51:15.224000 audit: BPF prog-id=27 op=LOAD
Dec 13 01:51:15.224000 audit: BPF prog-id=28 op=LOAD
Dec 13 01:51:15.224000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 01:51:15.224000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 01:51:15.225000 audit: BPF prog-id=29 op=LOAD
Dec 13 01:51:15.225000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 01:51:15.225000 audit: BPF prog-id=30 op=LOAD
Dec 13 01:51:15.225000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 01:51:15.226000 audit: BPF prog-id=31 op=LOAD
Dec 13 01:51:15.226000 audit: BPF prog-id=32 op=LOAD
Dec 13 01:51:15.226000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 01:51:15.226000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 01:51:15.240119 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:51:15.240408 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:51:15.241750 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:51:15.244302 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:51:15.247269 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:51:15.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.247417 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:51:15.247560 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:51:15.247700 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:51:15.248584 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:51:15.248707 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:51:15.254469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:51:15.254625 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:51:15.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.255970 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:51:15.257277 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:51:15.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.257527 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:51:15.259062 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:51:15.260211 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:51:15.260769 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 01:51:15.260840 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:51:15.260985 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:51:15.262749 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:51:15.262917 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:51:15.263134 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:51:15.263711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:51:15.263834 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:51:15.264006 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:51:15.265635 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:51:15.265740 systemd[1]: Finished modprobe@drm.service.
Dec 13 01:51:15.410715 systemd-fsck[1279]: fsck.fat 4.2 (2021-01-31)
Dec 13 01:51:15.410715 systemd-fsck[1279]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 01:51:15.411112 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 01:51:15.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.415895 systemd[1]: Mounting boot.mount...
Dec 13 01:51:15.425988 systemd[1]: Mounted boot.mount.
Dec 13 01:51:15.439727 systemd[1]: Finished systemd-boot-update.service.
Dec 13 01:51:15.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.636426 systemd-networkd[1196]: eth0: Gained IPv6LL
Dec 13 01:51:15.641437 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 01:51:15.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.795678 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 01:51:15.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.799420 systemd[1]: Starting audit-rules.service...
Dec 13 01:51:15.802544 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 01:51:15.806178 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 01:51:15.809000 audit: BPF prog-id=33 op=LOAD
Dec 13 01:51:15.810851 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:51:15.813000 audit: BPF prog-id=34 op=LOAD
Dec 13 01:51:15.815369 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 01:51:15.819771 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 01:51:15.844000 audit[1388]: SYSTEM_BOOT pid=1388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.850407 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 01:51:15.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.877807 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 01:51:15.880719 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:51:15.918221 systemd[1]: Started systemd-timesyncd.service.
Dec 13 01:51:15.921268 systemd[1]: Reached target time-set.target.
Dec 13 01:51:15.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:15.987581 systemd-resolved[1386]: Positive Trust Anchors:
Dec 13 01:51:15.987600 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:51:15.987638 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:51:16.078157 systemd-resolved[1386]: Using system hostname 'ci-3510.3.6-a-002183bad1'.
Dec 13 01:51:16.079828 systemd[1]: Started systemd-resolved.service.
Dec 13 01:51:16.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:16.082273 systemd[1]: Reached target network.target.
Dec 13 01:51:16.085082 kernel: kauditd_printk_skb: 116 callbacks suppressed
Dec 13 01:51:16.085151 kernel: audit: type=1130 audit(1734054676.081:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:16.098795 systemd[1]: Reached target network-online.target.
Dec 13 01:51:16.101028 systemd[1]: Reached target nss-lookup.target.
Dec 13 01:51:16.130181 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 01:51:16.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:16.146342 kernel: audit: type=1130 audit(1734054676.133:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:51:16.155556 systemd-timesyncd[1387]: Contacted time server 193.1.8.98:123 (0.flatcar.pool.ntp.org).
Dec 13 01:51:16.155627 systemd-timesyncd[1387]: Initial clock synchronization to Fri 2024-12-13 01:51:16.155718 UTC.
Dec 13 01:51:16.258000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 01:51:16.259069 augenrules[1404]: No rules
Dec 13 01:51:16.260016 systemd[1]: Finished audit-rules.service.
Dec 13 01:51:16.281637 kernel: audit: type=1305 audit(1734054676.258:203): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 01:51:16.281708 kernel: audit: type=1300 audit(1734054676.258:203): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffee515b100 a2=420 a3=0 items=0 ppid=1383 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:51:16.281735 kernel: audit: type=1327 audit(1734054676.258:203): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 01:51:16.258000 audit[1404]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffee515b100 a2=420 a3=0 items=0 ppid=1383 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:51:16.258000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 01:51:21.765981 ldconfig[1271]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:51:21.777969 systemd[1]: Finished ldconfig.service.
Dec 13 01:51:21.782944 systemd[1]: Starting systemd-update-done.service...
Dec 13 01:51:21.804391 systemd[1]: Finished systemd-update-done.service.
Dec 13 01:51:21.807559 systemd[1]: Reached target sysinit.target.
Dec 13 01:51:21.810277 systemd[1]: Started motdgen.path.
Dec 13 01:51:21.812497 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 01:51:21.815786 systemd[1]: Started logrotate.timer.
Dec 13 01:51:21.817814 systemd[1]: Started mdadm.timer.
Dec 13 01:51:21.819640 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 01:51:21.821905 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:51:21.821948 systemd[1]: Reached target paths.target.
Dec 13 01:51:21.824018 systemd[1]: Reached target timers.target.
Dec 13 01:51:21.826333 systemd[1]: Listening on dbus.socket.
Dec 13 01:51:21.829217 systemd[1]: Starting docker.socket...
Dec 13 01:51:21.833789 systemd[1]: Listening on sshd.socket.
Dec 13 01:51:21.836019 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:51:21.836472 systemd[1]: Listening on docker.socket.
Dec 13 01:51:21.838531 systemd[1]: Reached target sockets.target.
Dec 13 01:51:21.840592 systemd[1]: Reached target basic.target.
Dec 13 01:51:21.842696 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 01:51:21.842727 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 01:51:21.843681 systemd[1]: Starting containerd.service...
Dec 13 01:51:21.847879 systemd[1]: Starting dbus.service...
Dec 13 01:51:21.850593 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 01:51:21.854067 systemd[1]: Starting extend-filesystems.service...
Dec 13 01:51:21.856901 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 01:51:21.873539 systemd[1]: Starting kubelet.service...
Dec 13 01:51:21.876971 systemd[1]: Starting motdgen.service...
Dec 13 01:51:21.880019 systemd[1]: Started nvidia.service.
Dec 13 01:51:21.883691 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 01:51:21.887483 systemd[1]: Starting sshd-keygen.service...
Dec 13 01:51:21.893822 systemd[1]: Starting systemd-logind.service...
Dec 13 01:51:21.896610 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:51:21.896750 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:51:21.897334 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:51:21.898216 systemd[1]: Starting update-engine.service...
Dec 13 01:51:21.902959 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 01:51:21.909638 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:51:21.909929 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 01:51:21.940952 jq[1414]: false
Dec 13 01:51:21.941255 jq[1431]: true
Dec 13 01:51:21.941787 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:51:21.941987 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 01:51:21.949916 extend-filesystems[1415]: Found loop1
Dec 13 01:51:21.953451 extend-filesystems[1415]: Found sda
Dec 13 01:51:21.958466 extend-filesystems[1415]: Found sda1
Dec 13 01:51:21.960635 extend-filesystems[1415]: Found sda2
Dec 13 01:51:21.960635 extend-filesystems[1415]: Found sda3
Dec 13 01:51:21.960635 extend-filesystems[1415]: Found usr
Dec 13 01:51:21.960635 extend-filesystems[1415]: Found sda4
Dec 13 01:51:21.960635 extend-filesystems[1415]: Found sda6
Dec 13 01:51:21.960635 extend-filesystems[1415]: Found sda7
Dec 13 01:51:21.960635 extend-filesystems[1415]: Found sda9
Dec 13 01:51:21.960635 extend-filesystems[1415]: Checking size of /dev/sda9
Dec 13 01:51:21.993066 jq[1435]: true
Dec 13 01:51:21.994356 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:51:21.994555 systemd[1]: Finished motdgen.service.
Dec 13 01:51:22.039406 env[1438]: time="2024-12-13T01:51:22.039357017Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 01:51:22.074117 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:51:22.081262 systemd-logind[1424]: New seat seat0.
Dec 13 01:51:22.091255 extend-filesystems[1415]: Old size kept for /dev/sda9
Dec 13 01:51:22.094644 extend-filesystems[1415]: Found sr0
Dec 13 01:51:22.097061 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:51:22.097252 systemd[1]: Finished extend-filesystems.service.
Dec 13 01:51:22.127283 env[1438]: time="2024-12-13T01:51:22.127229792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:51:22.129202 env[1438]: time="2024-12-13T01:51:22.129175813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:51:22.131461 env[1438]: time="2024-12-13T01:51:22.131426138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:51:22.131561 env[1438]: time="2024-12-13T01:51:22.131546039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:51:22.131904 env[1438]: time="2024-12-13T01:51:22.131881143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:51:22.131998 env[1438]: time="2024-12-13T01:51:22.131983144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:51:22.132076 env[1438]: time="2024-12-13T01:51:22.132061245Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 01:51:22.132144 env[1438]: time="2024-12-13T01:51:22.132131046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:51:22.132293 env[1438]: time="2024-12-13T01:51:22.132277548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:51:22.132626 env[1438]: time="2024-12-13T01:51:22.132606951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:51:22.132892 env[1438]: time="2024-12-13T01:51:22.132871054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:51:22.133025 env[1438]: time="2024-12-13T01:51:22.133010556Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:51:22.133158 env[1438]: time="2024-12-13T01:51:22.133135757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 01:51:22.133228 env[1438]: time="2024-12-13T01:51:22.133216358Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152693474Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152734675Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152785575Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152834676Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152855276Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152874076Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152893776Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152926877Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152945477Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152963077Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.152991477Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.153009678Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.153149079Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:51:22.153973 env[1438]: time="2024-12-13T01:51:22.153258480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153631884Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153665085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153694485Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153756486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153785186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153802386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153817487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153834387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.154523 env[1438]: time="2024-12-13T01:51:22.153891487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.153909188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.154868798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.154894899Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.156601617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.156629118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.156661318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.156680318Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.156701019Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.156728719Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.156753619Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 01:51:22.157111 env[1438]: time="2024-12-13T01:51:22.156803020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:51:22.158062 env[1438]: time="2024-12-13T01:51:22.157083523Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:51:22.158062 env[1438]: time="2024-12-13T01:51:22.157658029Z" level=info msg="Connect containerd service"
Dec 13 01:51:22.158062 env[1438]: time="2024-12-13T01:51:22.157704830Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.158669940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.158802642Z" level=info msg="Start subscribing containerd event"
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.158848542Z" level=info msg="Start recovering state"
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.158908743Z" level=info msg="Start event monitor"
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.158920043Z" level=info msg="Start snapshots syncer"
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.158931043Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.158940843Z" level=info msg="Start streaming server"
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.159392548Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:51:22.192444 env[1438]: time="2024-12-13T01:51:22.159487349Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:51:22.167117 systemd[1]: Started dbus.service.
Dec 13 01:51:22.166932 dbus-daemon[1413]: [system] SELinux support is enabled
Dec 13 01:51:22.171990 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:51:22.172020 systemd[1]: Reached target system-config.target.
Dec 13 01:51:22.174710 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:51:22.174731 systemd[1]: Reached target user-config.target.
Dec 13 01:51:22.179863 systemd[1]: Started systemd-logind.service.
Dec 13 01:51:22.201274 systemd[1]: Started containerd.service.
Dec 13 01:51:22.204536 env[1438]: time="2024-12-13T01:51:22.204497849Z" level=info msg="containerd successfully booted in 0.170361s"
Dec 13 01:51:22.215934 bash[1463]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:51:22.216546 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 01:51:22.276171 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 01:51:22.906232 update_engine[1429]: I1213 01:51:22.905655 1429 main.cc:92] Flatcar Update Engine starting
Dec 13 01:51:22.974834 systemd[1]: Started update-engine.service.
Dec 13 01:51:22.975272 update_engine[1429]: I1213 01:51:22.974884 1429 update_check_scheduler.cc:74] Next update check in 5m12s
Dec 13 01:51:22.979902 systemd[1]: Started locksmithd.service.
Dec 13 01:51:23.015886 systemd[1]: Started kubelet.service.
Dec 13 01:51:23.084583 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:51:23.112841 systemd[1]: Finished sshd-keygen.service.
Dec 13 01:51:23.117847 systemd[1]: Starting issuegen.service...
Dec 13 01:51:23.122024 systemd[1]: Started waagent.service.
Dec 13 01:51:23.136150 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:51:23.136339 systemd[1]: Finished issuegen.service.
Dec 13 01:51:23.139919 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 01:51:23.148623 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 01:51:23.152655 systemd[1]: Started getty@tty1.service.
Dec 13 01:51:23.155953 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 01:51:23.158854 systemd[1]: Reached target getty.target.
Dec 13 01:51:23.160859 systemd[1]: Reached target multi-user.target.
Dec 13 01:51:23.165089 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 01:51:23.174848 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 01:51:23.175019 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 01:51:23.178025 systemd[1]: Startup finished in 782ms (firmware) + 28.927s (loader) + 981ms (kernel) + 13.575s (initrd) + 24.398s (userspace) = 1min 8.664s.
Dec 13 01:51:23.545508 login[1531]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:51:23.548593 login[1532]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:51:23.575531 systemd[1]: Created slice user-500.slice.
Dec 13 01:51:23.577068 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 01:51:23.585820 systemd-logind[1424]: New session 2 of user core.
Dec 13 01:51:23.590918 systemd-logind[1424]: New session 1 of user core.
Dec 13 01:51:23.596938 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 01:51:23.598742 systemd[1]: Starting user@500.service...
Dec 13 01:51:23.619716 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:23.671587 kubelet[1513]: E1213 01:51:23.671531 1513 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:51:23.673387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:51:23.673547 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:51:23.673888 systemd[1]: kubelet.service: Consumed 1.030s CPU time.
Dec 13 01:51:23.778997 systemd[1539]: Queued start job for default target default.target.
Dec 13 01:51:23.779544 systemd[1539]: Reached target paths.target.
Dec 13 01:51:23.779571 systemd[1539]: Reached target sockets.target.
Dec 13 01:51:23.779588 systemd[1539]: Reached target timers.target.
Dec 13 01:51:23.779603 systemd[1539]: Reached target basic.target.
Dec 13 01:51:23.779714 systemd[1]: Started user@500.service.
Dec 13 01:51:23.780833 systemd[1]: Started session-1.scope.
Dec 13 01:51:23.781635 systemd[1]: Started session-2.scope.
Dec 13 01:51:23.782534 systemd[1539]: Reached target default.target.
Dec 13 01:51:23.782741 systemd[1539]: Startup finished in 154ms.
Dec 13 01:51:24.473484 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:51:29.775515 waagent[1522]: 2024-12-13T01:51:29.775396Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Dec 13 01:51:29.790466 waagent[1522]: 2024-12-13T01:51:29.778115Z INFO Daemon Daemon OS: flatcar 3510.3.6
Dec 13 01:51:29.790466 waagent[1522]: 2024-12-13T01:51:29.779180Z INFO Daemon Daemon Python: 3.9.16
Dec 13 01:51:29.790466 waagent[1522]: 2024-12-13T01:51:29.780378Z INFO Daemon Daemon Run daemon
Dec 13 01:51:29.790466 waagent[1522]: 2024-12-13T01:51:29.781273Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6'
Dec 13 01:51:29.796222 waagent[1522]: 2024-12-13T01:51:29.796098Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 01:51:29.804830 waagent[1522]: 2024-12-13T01:51:29.804716Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 01:51:29.809788 waagent[1522]: 2024-12-13T01:51:29.809723Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 01:51:29.812232 waagent[1522]: 2024-12-13T01:51:29.812168Z INFO Daemon Daemon Using waagent for provisioning
Dec 13 01:51:29.815261 waagent[1522]: 2024-12-13T01:51:29.815199Z INFO Daemon Daemon Activate resource disk
Dec 13 01:51:29.817962 waagent[1522]: 2024-12-13T01:51:29.817901Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 13 01:51:29.828201 waagent[1522]: 2024-12-13T01:51:29.828134Z INFO Daemon Daemon Found device: None
Dec 13 01:51:29.830927 waagent[1522]: 2024-12-13T01:51:29.830865Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 13 01:51:29.835259 waagent[1522]: 2024-12-13T01:51:29.835202Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 13 01:51:29.841535 waagent[1522]: 2024-12-13T01:51:29.841476Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 01:51:29.844791 waagent[1522]: 2024-12-13T01:51:29.844733Z INFO Daemon Daemon Running default provisioning handler
Dec 13 01:51:29.855548 waagent[1522]: 2024-12-13T01:51:29.855421Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 01:51:29.862567 waagent[1522]: 2024-12-13T01:51:29.862464Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 01:51:29.867692 waagent[1522]: 2024-12-13T01:51:29.867627Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 01:51:29.870436 waagent[1522]: 2024-12-13T01:51:29.870379Z INFO Daemon Daemon Copying ovf-env.xml
Dec 13 01:51:29.962218 waagent[1522]: 2024-12-13T01:51:29.958183Z INFO Daemon Daemon Successfully mounted dvd
Dec 13 01:51:30.009933 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 13 01:51:30.032002 waagent[1522]: 2024-12-13T01:51:30.031825Z INFO Daemon Daemon Detect protocol endpoint
Dec 13 01:51:30.035067 waagent[1522]: 2024-12-13T01:51:30.034996Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 01:51:30.038385 waagent[1522]: 2024-12-13T01:51:30.038325Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 01:51:30.041900 waagent[1522]: 2024-12-13T01:51:30.041839Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 13 01:51:30.045035 waagent[1522]: 2024-12-13T01:51:30.044974Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 13 01:51:30.047827 waagent[1522]: 2024-12-13T01:51:30.047766Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 13 01:51:30.198597 waagent[1522]: 2024-12-13T01:51:30.198506Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 13 01:51:30.207871 waagent[1522]: 2024-12-13T01:51:30.199595Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 13 01:51:30.207871 waagent[1522]: 2024-12-13T01:51:30.200186Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 13 01:51:30.778792 waagent[1522]: 2024-12-13T01:51:30.778645Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 13 01:51:30.791606 waagent[1522]: 2024-12-13T01:51:30.791527Z INFO Daemon Daemon Forcing an update of the goal state..
Dec 13 01:51:30.795125 waagent[1522]: 2024-12-13T01:51:30.795056Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Dec 13 01:51:30.874620 waagent[1522]: 2024-12-13T01:51:30.874493Z INFO Daemon Daemon Found private key matching thumbprint D203053D485B9BD630DC27DE832E7216DAC19F4B
Dec 13 01:51:30.885581 waagent[1522]: 2024-12-13T01:51:30.874974Z INFO Daemon Daemon Certificate with thumbprint 3DC860EE7D8D1DD56EB97D81E8EF52EFF532426D has no matching private key.
Dec 13 01:51:30.885581 waagent[1522]: 2024-12-13T01:51:30.876327Z INFO Daemon Daemon Fetch goal state completed
Dec 13 01:51:30.931031 waagent[1522]: 2024-12-13T01:51:30.930942Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 25f716cc-65d4-4cef-9c0f-f423412bd6d2 New eTag: 12895761831151721602]
Dec 13 01:51:30.940299 waagent[1522]: 2024-12-13T01:51:30.931950Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 01:51:30.948482 waagent[1522]: 2024-12-13T01:51:30.948420Z INFO Daemon Daemon Starting provisioning
Dec 13 01:51:30.951238 waagent[1522]: 2024-12-13T01:51:30.951175Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 13 01:51:30.954113 waagent[1522]: 2024-12-13T01:51:30.954056Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-002183bad1]
Dec 13 01:51:30.974639 waagent[1522]: 2024-12-13T01:51:30.974529Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-002183bad1]
Dec 13 01:51:30.978449 waagent[1522]: 2024-12-13T01:51:30.978381Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 13 01:51:30.981860 waagent[1522]: 2024-12-13T01:51:30.981795Z INFO Daemon Daemon Primary interface is [eth0]
Dec 13 01:51:30.996320 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Dec 13 01:51:30.996577 systemd[1]: Stopped systemd-networkd-wait-online.service.
Dec 13 01:51:30.996651 systemd[1]: Stopping systemd-networkd-wait-online.service...
Dec 13 01:51:30.996993 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:51:31.001361 systemd-networkd[1196]: eth0: DHCPv6 lease lost
Dec 13 01:51:31.002763 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:51:31.002966 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:51:31.005220 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:51:31.036477 systemd-networkd[1581]: enP9294s1: Link UP
Dec 13 01:51:31.036487 systemd-networkd[1581]: enP9294s1: Gained carrier
Dec 13 01:51:31.037873 systemd-networkd[1581]: eth0: Link UP
Dec 13 01:51:31.037883 systemd-networkd[1581]: eth0: Gained carrier
Dec 13 01:51:31.038318 systemd-networkd[1581]: lo: Link UP
Dec 13 01:51:31.038327 systemd-networkd[1581]: lo: Gained carrier
Dec 13 01:51:31.038644 systemd-networkd[1581]: eth0: Gained IPv6LL
Dec 13 01:51:31.038913 systemd-networkd[1581]: Enumeration completed
Dec 13 01:51:31.042346 waagent[1522]: 2024-12-13T01:51:31.040264Z INFO Daemon Daemon Create user account if not exists
Dec 13 01:51:31.042346 waagent[1522]: 2024-12-13T01:51:31.041035Z INFO Daemon Daemon User core already exists, skip useradd
Dec 13 01:51:31.042346 waagent[1522]: 2024-12-13T01:51:31.041490Z INFO Daemon Daemon Configure sudoer
Dec 13 01:51:31.039021 systemd[1]: Started systemd-networkd.service.
Dec 13 01:51:31.046904 systemd-networkd[1581]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:51:31.048675 waagent[1522]: 2024-12-13T01:51:31.048591Z INFO Daemon Daemon Configure sshd
Dec 13 01:51:31.049098 waagent[1522]: 2024-12-13T01:51:31.049040Z INFO Daemon Daemon Deploy ssh public key.
Dec 13 01:51:31.057939 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 01:51:31.099386 systemd-networkd[1581]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:51:31.102284 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 01:51:32.197052 waagent[1522]: 2024-12-13T01:51:32.196954Z INFO Daemon Daemon Provisioning complete
Dec 13 01:51:32.214553 waagent[1522]: 2024-12-13T01:51:32.214473Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 13 01:51:32.218204 waagent[1522]: 2024-12-13T01:51:32.218134Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 13 01:51:32.223895 waagent[1522]: 2024-12-13T01:51:32.223831Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Dec 13 01:51:32.486907 waagent[1590]: 2024-12-13T01:51:32.486734Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Dec 13 01:51:32.487619 waagent[1590]: 2024-12-13T01:51:32.487555Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:51:32.487767 waagent[1590]: 2024-12-13T01:51:32.487712Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:51:32.498815 waagent[1590]: 2024-12-13T01:51:32.498743Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Dec 13 01:51:32.498971 waagent[1590]: 2024-12-13T01:51:32.498918Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Dec 13 01:51:32.565584 waagent[1590]: 2024-12-13T01:51:32.565455Z INFO ExtHandler ExtHandler Found private key matching thumbprint D203053D485B9BD630DC27DE832E7216DAC19F4B
Dec 13 01:51:32.565807 waagent[1590]: 2024-12-13T01:51:32.565746Z INFO ExtHandler ExtHandler Certificate with thumbprint 3DC860EE7D8D1DD56EB97D81E8EF52EFF532426D has no matching private key.
Dec 13 01:51:32.566043 waagent[1590]: 2024-12-13T01:51:32.565993Z INFO ExtHandler ExtHandler Fetch goal state completed
Dec 13 01:51:32.579768 waagent[1590]: 2024-12-13T01:51:32.579706Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 390d16d1-9879-4008-9803-a7fbaf1682bd New eTag: 12895761831151721602]
Dec 13 01:51:32.580330 waagent[1590]: 2024-12-13T01:51:32.580263Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 01:51:32.721025 waagent[1590]: 2024-12-13T01:51:32.720856Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 01:51:32.745773 waagent[1590]: 2024-12-13T01:51:32.745600Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1590
Dec 13 01:51:32.749627 waagent[1590]: 2024-12-13T01:51:32.749553Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 01:51:32.750848 waagent[1590]: 2024-12-13T01:51:32.750791Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 01:51:32.837152 waagent[1590]: 2024-12-13T01:51:32.837075Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 01:51:32.837720 waagent[1590]: 2024-12-13T01:51:32.837637Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 01:51:32.846244 waagent[1590]: 2024-12-13T01:51:32.846185Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 01:51:32.846738 waagent[1590]: 2024-12-13T01:51:32.846678Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 01:51:32.847836 waagent[1590]: 2024-12-13T01:51:32.847771Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Dec 13 01:51:32.849118 waagent[1590]: 2024-12-13T01:51:32.849060Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 01:51:32.849525 waagent[1590]: 2024-12-13T01:51:32.849471Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:51:32.849676 waagent[1590]: 2024-12-13T01:51:32.849629Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:51:32.850176 waagent[1590]: 2024-12-13T01:51:32.850121Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 01:51:32.850795 waagent[1590]: 2024-12-13T01:51:32.850741Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 01:51:32.850878 waagent[1590]: 2024-12-13T01:51:32.850821Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 01:51:32.850878 waagent[1590]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 01:51:32.850878 waagent[1590]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 01:51:32.850878 waagent[1590]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 01:51:32.850878 waagent[1590]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:51:32.850878 waagent[1590]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:51:32.850878 waagent[1590]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:51:32.854177 waagent[1590]: 2024-12-13T01:51:32.854009Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:51:32.854474 waagent[1590]: 2024-12-13T01:51:32.854413Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 01:51:32.854620 waagent[1590]: 2024-12-13T01:51:32.854556Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:51:32.854749 waagent[1590]: 2024-12-13T01:51:32.854682Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 01:51:32.856057 waagent[1590]: 2024-12-13T01:51:32.855997Z INFO EnvHandler ExtHandler Configure routes
Dec 13 01:51:32.856214 waagent[1590]: 2024-12-13T01:51:32.856165Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 01:51:32.856435 waagent[1590]: 2024-12-13T01:51:32.856385Z INFO EnvHandler ExtHandler Routes:None
Dec 13 01:51:32.857436 waagent[1590]: 2024-12-13T01:51:32.857376Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 01:51:32.857575 waagent[1590]: 2024-12-13T01:51:32.857510Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 01:51:32.860329 waagent[1590]: 2024-12-13T01:51:32.860266Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 01:51:32.870112 waagent[1590]: 2024-12-13T01:51:32.870055Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Dec 13 01:51:32.870858 waagent[1590]: 2024-12-13T01:51:32.870808Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 01:51:32.871952 waagent[1590]: 2024-12-13T01:51:32.871895Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Dec 13 01:51:32.911816 waagent[1590]: 2024-12-13T01:51:32.911734Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Dec 13 01:51:32.923143 waagent[1590]: 2024-12-13T01:51:32.923073Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1581'
Dec 13 01:51:33.028237 waagent[1590]: 2024-12-13T01:51:33.028047Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 01:51:33.028237 waagent[1590]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 01:51:33.028237 waagent[1590]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 01:51:33.028237 waagent[1590]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:77:05:8e brd ff:ff:ff:ff:ff:ff
Dec 13 01:51:33.028237 waagent[1590]: 3: enP9294s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:77:05:8e brd ff:ff:ff:ff:ff:ff\ altname enP9294p0s2
Dec 13 01:51:33.028237 waagent[1590]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 01:51:33.028237 waagent[1590]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 01:51:33.028237 waagent[1590]: 2: eth0 inet 10.200.8.15/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 01:51:33.028237 waagent[1590]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 01:51:33.028237 waagent[1590]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 01:51:33.028237 waagent[1590]: 2: eth0 inet6 fe80::7e1e:52ff:fe77:58e/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 01:51:33.194528 waagent[1590]: 2024-12-13T01:51:33.194453Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Dec 13 01:51:33.792518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:51:33.792838 systemd[1]: Stopped kubelet.service.
Dec 13 01:51:33.792900 systemd[1]: kubelet.service: Consumed 1.030s CPU time.
Dec 13 01:51:33.794916 systemd[1]: Starting kubelet.service...
Dec 13 01:51:33.877513 systemd[1]: Started kubelet.service.
Dec 13 01:51:34.228168 waagent[1522]: 2024-12-13T01:51:34.227805Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Dec 13 01:51:34.234424 waagent[1522]: 2024-12-13T01:51:34.234352Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent
Dec 13 01:51:34.577242 kubelet[1623]: E1213 01:51:34.577198 1623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:51:34.581775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:51:34.581945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:51:35.533001 waagent[1629]: 2024-12-13T01:51:35.532891Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Dec 13 01:51:35.533735 waagent[1629]: 2024-12-13T01:51:35.533669Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6
Dec 13 01:51:35.533882 waagent[1629]: 2024-12-13T01:51:35.533828Z INFO ExtHandler ExtHandler Python: 3.9.16
Dec 13 01:51:35.534025 waagent[1629]: 2024-12-13T01:51:35.533979Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Dec 13 01:51:35.543609 waagent[1629]: 2024-12-13T01:51:35.543510Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 01:51:35.543986 waagent[1629]: 2024-12-13T01:51:35.543932Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:51:35.544146 waagent[1629]: 2024-12-13T01:51:35.544098Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:51:35.556428 waagent[1629]: 2024-12-13T01:51:35.556355Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 01:51:35.569131 waagent[1629]: 2024-12-13T01:51:35.569071Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Dec 13 01:51:35.570043 waagent[1629]: 2024-12-13T01:51:35.569984Z INFO ExtHandler
Dec 13 01:51:35.570187 waagent[1629]: 2024-12-13T01:51:35.570136Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: da98c7d3-0462-4c5a-a7e2-8b3ec802a83c eTag: 12895761831151721602 source: Fabric]
Dec 13 01:51:35.570877 waagent[1629]: 2024-12-13T01:51:35.570821Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 13 01:51:35.571945 waagent[1629]: 2024-12-13T01:51:35.571886Z INFO ExtHandler
Dec 13 01:51:35.572076 waagent[1629]: 2024-12-13T01:51:35.572028Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 13 01:51:35.578756 waagent[1629]: 2024-12-13T01:51:35.578706Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 13 01:51:35.579163 waagent[1629]: 2024-12-13T01:51:35.579116Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 01:51:35.597878 waagent[1629]: 2024-12-13T01:51:35.597806Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Dec 13 01:51:35.660562 waagent[1629]: 2024-12-13T01:51:35.660435Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D203053D485B9BD630DC27DE832E7216DAC19F4B', 'hasPrivateKey': True}
Dec 13 01:51:35.661519 waagent[1629]: 2024-12-13T01:51:35.661449Z INFO ExtHandler Downloaded certificate {'thumbprint': '3DC860EE7D8D1DD56EB97D81E8EF52EFF532426D', 'hasPrivateKey': False}
Dec 13 01:51:35.662495 waagent[1629]: 2024-12-13T01:51:35.662434Z INFO ExtHandler Fetch goal state completed
Dec 13 01:51:35.684666 waagent[1629]: 2024-12-13T01:51:35.684552Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Dec 13 01:51:35.696740 waagent[1629]: 2024-12-13T01:51:35.696659Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1629
Dec 13 01:51:35.699725 waagent[1629]: 2024-12-13T01:51:35.699664Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 01:51:35.700653 waagent[1629]: 2024-12-13T01:51:35.700595Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Dec 13 01:51:35.700924 waagent[1629]: 2024-12-13T01:51:35.700868Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Dec 13 01:51:35.702856 waagent[1629]: 2024-12-13T01:51:35.702800Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 01:51:35.707293 waagent[1629]: 2024-12-13T01:51:35.707239Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 01:51:35.707686 waagent[1629]: 2024-12-13T01:51:35.707630Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 01:51:35.715440 waagent[1629]: 2024-12-13T01:51:35.715385Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 01:51:35.715872 waagent[1629]: 2024-12-13T01:51:35.715817Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 01:51:35.721511 waagent[1629]: 2024-12-13T01:51:35.721422Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Dec 13 01:51:35.722506 waagent[1629]: 2024-12-13T01:51:35.722444Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Dec 13 01:51:35.723867 waagent[1629]: 2024-12-13T01:51:35.723808Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 01:51:35.724261 waagent[1629]: 2024-12-13T01:51:35.724207Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:51:35.724578 waagent[1629]: 2024-12-13T01:51:35.724523Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:51:35.725117 waagent[1629]: 2024-12-13T01:51:35.725061Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 01:51:35.725747 waagent[1629]: 2024-12-13T01:51:35.725689Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 01:51:35.725747 waagent[1629]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 01:51:35.725747 waagent[1629]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 01:51:35.725747 waagent[1629]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 01:51:35.725747 waagent[1629]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:51:35.725747 waagent[1629]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:51:35.725747 waagent[1629]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:51:35.726042 waagent[1629]: 2024-12-13T01:51:35.725986Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 01:51:35.728581 waagent[1629]: 2024-12-13T01:51:35.728503Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:51:35.728975 waagent[1629]: 2024-12-13T01:51:35.728918Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:51:35.729493 waagent[1629]: 2024-12-13T01:51:35.729436Z INFO EnvHandler ExtHandler Configure routes
Dec 13 01:51:35.729746 waagent[1629]: 2024-12-13T01:51:35.729673Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 01:51:35.729961 waagent[1629]: 2024-12-13T01:51:35.729895Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 01:51:35.730030 waagent[1629]: 2024-12-13T01:51:35.729983Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 01:51:35.730537 waagent[1629]: 2024-12-13T01:51:35.730479Z INFO EnvHandler ExtHandler Routes:None
Dec 13 01:51:35.731303 waagent[1629]: 2024-12-13T01:51:35.731175Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 01:51:35.731711 waagent[1629]: 2024-12-13T01:51:35.731656Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 01:51:35.733467 waagent[1629]: 2024-12-13T01:51:35.733414Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 01:51:35.747320 waagent[1629]: 2024-12-13T01:51:35.747204Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 01:51:35.747320 waagent[1629]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 01:51:35.747320 waagent[1629]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 01:51:35.747320 waagent[1629]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:77:05:8e brd ff:ff:ff:ff:ff:ff
Dec 13 01:51:35.747320 waagent[1629]: 3: enP9294s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:77:05:8e brd ff:ff:ff:ff:ff:ff\ altname enP9294p0s2
Dec 13 01:51:35.747320 waagent[1629]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 01:51:35.747320 waagent[1629]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 01:51:35.747320 waagent[1629]: 2: eth0 inet 10.200.8.15/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 01:51:35.747320 waagent[1629]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 01:51:35.747320 waagent[1629]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 01:51:35.747320 waagent[1629]: 2: eth0 inet6 fe80::7e1e:52ff:fe77:58e/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 01:51:35.756364 waagent[1629]: 2024-12-13T01:51:35.756266Z INFO ExtHandler ExtHandler Downloading agent manifest
Dec 13 01:51:35.775905 waagent[1629]: 2024-12-13T01:51:35.775843Z INFO ExtHandler ExtHandler
Dec 13 01:51:35.790578 waagent[1629]: 2024-12-13T01:51:35.790325Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: aa1f6afd-2858-48ad-b59a-45a1287c526d correlation 6aba95eb-4ee1-46fc-bf97-dfc4fcd294b1 created: 2024-12-13T01:50:02.690086Z]
Dec 13 01:51:35.791809 waagent[1629]: 2024-12-13T01:51:35.791740Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 13 01:51:35.794831 waagent[1629]: 2024-12-13T01:51:35.794774Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 18 ms]
Dec 13 01:51:35.823221 waagent[1629]: 2024-12-13T01:51:35.823165Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Dec 13 01:51:35.835452 waagent[1629]: 2024-12-13T01:51:35.835395Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 03125CAF-2C3C-40BC-9E03-3FECC4482487;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Dec 13 01:51:35.905170 waagent[1629]: 2024-12-13T01:51:35.905055Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Dec 13 01:51:35.905170 waagent[1629]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:51:35.905170 waagent[1629]: pkts bytes target prot opt in out source destination
Dec 13 01:51:35.905170 waagent[1629]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:51:35.905170 waagent[1629]: pkts bytes target prot opt in out source destination
Dec 13 01:51:35.905170 waagent[1629]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:51:35.905170 waagent[1629]: pkts bytes target prot opt in out source destination
Dec 13 01:51:35.905170 waagent[1629]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 01:51:35.905170 waagent[1629]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 01:51:35.905170 waagent[1629]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 01:51:35.912200 waagent[1629]: 2024-12-13T01:51:35.912103Z INFO EnvHandler ExtHandler Current Firewall rules:
Dec 13 01:51:35.912200 waagent[1629]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:51:35.912200 waagent[1629]: pkts bytes target prot opt in out source destination
Dec 13 01:51:35.912200 waagent[1629]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:51:35.912200 waagent[1629]: pkts bytes target prot opt in out source destination
Dec 13 01:51:35.912200 waagent[1629]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:51:35.912200 waagent[1629]: pkts bytes target prot opt in out source destination
Dec 13 01:51:35.912200 waagent[1629]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 01:51:35.912200 waagent[1629]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 01:51:35.912200 waagent[1629]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 01:51:35.912762 waagent[1629]: 2024-12-13T01:51:35.912711Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Dec 13 01:51:44.792504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:51:44.792822 systemd[1]: Stopped kubelet.service.
Dec 13 01:51:44.794797 systemd[1]: Starting kubelet.service...
Dec 13 01:51:44.910867 systemd[1]: Started kubelet.service.
Dec 13 01:51:45.478356 kubelet[1684]: E1213 01:51:45.478296 1684 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:51:45.480164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:51:45.480340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:51:55.542426 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 01:51:55.542727 systemd[1]: Stopped kubelet.service.
Dec 13 01:51:55.544819 systemd[1]: Starting kubelet.service...
Dec 13 01:51:55.625454 systemd[1]: Started kubelet.service.
Dec 13 01:51:56.233608 kubelet[1694]: E1213 01:51:56.233555 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:51:56.235340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:51:56.235499 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:52:01.514482 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Dec 13 01:52:06.292496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 01:52:06.292818 systemd[1]: Stopped kubelet.service.
Dec 13 01:52:06.294815 systemd[1]: Starting kubelet.service...
Dec 13 01:52:06.375650 systemd[1]: Started kubelet.service.
Dec 13 01:52:06.412603 kubelet[1704]: E1213 01:52:06.412554 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:52:06.414430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:52:06.414587 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:52:08.260542 update_engine[1429]: I1213 01:52:08.260461 1429 update_attempter.cc:509] Updating boot flags...
Dec 13 01:52:10.083066 systemd[1]: Created slice system-sshd.slice.
Dec 13 01:52:10.085138 systemd[1]: Started sshd@0-10.200.8.15:22-10.200.16.10:58494.service.
Dec 13 01:52:11.263330 sshd[1751]: Accepted publickey for core from 10.200.16.10 port 58494 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:52:11.265003 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:52:11.269365 systemd-logind[1424]: New session 3 of user core.
Dec 13 01:52:11.269923 systemd[1]: Started session-3.scope.
Dec 13 01:52:11.805565 systemd[1]: Started sshd@1-10.200.8.15:22-10.200.16.10:58496.service.
Dec 13 01:52:12.426849 sshd[1756]: Accepted publickey for core from 10.200.16.10 port 58496 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:52:12.428559 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:52:12.433396 systemd[1]: Started session-4.scope.
Dec 13 01:52:12.433830 systemd-logind[1424]: New session 4 of user core.
Dec 13 01:52:12.874133 sshd[1756]: pam_unix(sshd:session): session closed for user core
Dec 13 01:52:12.877441 systemd[1]: sshd@1-10.200.8.15:22-10.200.16.10:58496.service: Deactivated successfully.
Dec 13 01:52:12.878332 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:52:12.878942 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:52:12.879681 systemd-logind[1424]: Removed session 4.
Dec 13 01:52:12.978639 systemd[1]: Started sshd@2-10.200.8.15:22-10.200.16.10:58510.service.
Dec 13 01:52:13.604148 sshd[1762]: Accepted publickey for core from 10.200.16.10 port 58510 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:52:13.605850 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:52:13.611495 systemd[1]: Started session-5.scope.
Dec 13 01:52:13.612362 systemd-logind[1424]: New session 5 of user core.
Dec 13 01:52:14.044472 sshd[1762]: pam_unix(sshd:session): session closed for user core
Dec 13 01:52:14.047705 systemd[1]: sshd@2-10.200.8.15:22-10.200.16.10:58510.service: Deactivated successfully.
Dec 13 01:52:14.048663 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:52:14.049445 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:52:14.050361 systemd-logind[1424]: Removed session 5.
Dec 13 01:52:14.148501 systemd[1]: Started sshd@3-10.200.8.15:22-10.200.16.10:58516.service.
Dec 13 01:52:14.773717 sshd[1771]: Accepted publickey for core from 10.200.16.10 port 58516 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:52:14.775409 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:52:14.780458 systemd[1]: Started session-6.scope.
Dec 13 01:52:14.781137 systemd-logind[1424]: New session 6 of user core.
Dec 13 01:52:15.219450 sshd[1771]: pam_unix(sshd:session): session closed for user core
Dec 13 01:52:15.222798 systemd[1]: sshd@3-10.200.8.15:22-10.200.16.10:58516.service: Deactivated successfully.
Dec 13 01:52:15.223745 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:52:15.224540 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:52:15.225486 systemd-logind[1424]: Removed session 6.
Dec 13 01:52:15.324104 systemd[1]: Started sshd@4-10.200.8.15:22-10.200.16.10:58520.service.
Dec 13 01:52:15.949139 sshd[1777]: Accepted publickey for core from 10.200.16.10 port 58520 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:52:15.950811 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:52:15.955864 systemd[1]: Started session-7.scope.
Dec 13 01:52:15.956325 systemd-logind[1424]: New session 7 of user core.
Dec 13 01:52:16.542755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 01:52:16.543053 systemd[1]: Stopped kubelet.service.
Dec 13 01:52:16.545055 systemd[1]: Starting kubelet.service...
Dec 13 01:52:17.273472 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:52:17.273769 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 01:52:17.286558 systemd[1]: Starting coreos-metadata.service...
Dec 13 01:52:17.299361 systemd[1]: Started kubelet.service.
Dec 13 01:52:17.354225 kubelet[1791]: E1213 01:52:17.354176 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:52:17.355910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:52:17.356026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:52:17.531530 coreos-metadata[1786]: Dec 13 01:52:17.531 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:52:17.534268 coreos-metadata[1786]: Dec 13 01:52:17.534 INFO Fetch successful
Dec 13 01:52:17.534467 coreos-metadata[1786]: Dec 13 01:52:17.534 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 13 01:52:17.536068 coreos-metadata[1786]: Dec 13 01:52:17.536 INFO Fetch successful
Dec 13 01:52:17.536484 coreos-metadata[1786]: Dec 13 01:52:17.536 INFO Fetching http://168.63.129.16/machine/872619b4-7764-4a21-bbfc-46f3f18ce99a/1f8e2283%2D8c3a%2D4890%2D9105%2D996000ce9860.%5Fci%2D3510.3.6%2Da%2D002183bad1?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 13 01:52:17.537979 coreos-metadata[1786]: Dec 13 01:52:17.537 INFO Fetch successful
Dec 13 01:52:17.570732 coreos-metadata[1786]: Dec 13 01:52:17.570 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:52:17.582638 coreos-metadata[1786]: Dec 13 01:52:17.582 INFO Fetch successful
Dec 13 01:52:17.591208 systemd[1]: Finished coreos-metadata.service.
Dec 13 01:52:23.887349 systemd[1]: Stopped kubelet.service.
Dec 13 01:52:23.890149 systemd[1]: Starting kubelet.service...
Dec 13 01:52:23.912883 systemd[1]: Reloading.
Dec 13 01:52:24.016933 /usr/lib/systemd/system-generators/torcx-generator[1853]: time="2024-12-13T01:52:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:52:24.016974 /usr/lib/systemd/system-generators/torcx-generator[1853]: time="2024-12-13T01:52:24Z" level=info msg="torcx already run"
Dec 13 01:52:24.124819 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:52:24.125043 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:52:24.144143 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:52:24.248621 systemd[1]: Started kubelet.service.
Dec 13 01:52:24.250273 systemd[1]: Stopping kubelet.service...
Dec 13 01:52:24.250536 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:52:24.250694 systemd[1]: Stopped kubelet.service.
Dec 13 01:52:24.253450 systemd[1]: Starting kubelet.service...
Dec 13 01:52:24.547772 systemd[1]: Started kubelet.service.
Dec 13 01:52:24.589498 kubelet[1922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:52:24.589498 kubelet[1922]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:52:24.589498 kubelet[1922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:52:25.200847 kubelet[1922]: I1213 01:52:25.200727 1922 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:52:25.613841 kubelet[1922]: I1213 01:52:25.613804 1922 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 01:52:25.613841 kubelet[1922]: I1213 01:52:25.613831 1922 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:52:25.614296 kubelet[1922]: I1213 01:52:25.614103 1922 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 01:52:25.643677 kubelet[1922]: I1213 01:52:25.643549 1922 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:52:25.658831 kubelet[1922]: I1213 01:52:25.658806 1922 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:52:25.660972 kubelet[1922]: I1213 01:52:25.660921 1922 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:52:25.661202 kubelet[1922]: I1213 01:52:25.660974 1922 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.15","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:52:25.661725 kubelet[1922]: I1213 01:52:25.661701 1922 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:52:25.661725 kubelet[1922]: I1213 01:52:25.661726 1922 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:52:25.661875 kubelet[1922]: I1213 01:52:25.661858 1922 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:52:25.662773 kubelet[1922]: I1213 01:52:25.662754 1922 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 01:52:25.662773 kubelet[1922]: I1213 01:52:25.662775 1922 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:52:25.662902 kubelet[1922]: I1213 01:52:25.662799 1922 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:52:25.662902 kubelet[1922]: I1213 01:52:25.662818 1922 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:52:25.663256 kubelet[1922]: E1213 01:52:25.663220 1922 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:52:25.663410 kubelet[1922]: E1213 01:52:25.663396 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:52:25.666438 kubelet[1922]: I1213 01:52:25.666422 1922 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 01:52:25.668026 kubelet[1922]: I1213 01:52:25.668007 1922 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:52:25.668158 kubelet[1922]: W1213 01:52:25.668137 1922 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.8.15" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 01:52:25.668220 kubelet[1922]: E1213 01:52:25.668176 1922 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.15" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 01:52:25.668298 kubelet[1922]: W1213 01:52:25.668286 1922 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:52:25.668399 kubelet[1922]: W1213 01:52:25.668380 1922 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:52:25.668448 kubelet[1922]: E1213 01:52:25.668409 1922 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:52:25.668986 kubelet[1922]: I1213 01:52:25.668971 1922 server.go:1264] "Started kubelet" Dec 13 01:52:25.681128 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 01:52:25.681948 kubelet[1922]: I1213 01:52:25.681303 1922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:52:25.684479 kubelet[1922]: E1213 01:52:25.684319 1922 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.8.15.1810999a90ad713b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.8.15,UID:10.200.8.15,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.8.15,},FirstTimestamp:2024-12-13 01:52:25.668940091 +0000 UTC m=+1.116710619,LastTimestamp:2024-12-13 01:52:25.668940091 +0000 UTC m=+1.116710619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.8.15,}" Dec 13 01:52:25.684951 kubelet[1922]: I1213 01:52:25.684923 1922 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:52:25.686098 kubelet[1922]: I1213 01:52:25.686078 1922 
server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:52:25.687110 kubelet[1922]: I1213 01:52:25.687061 1922 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:52:25.687269 kubelet[1922]: I1213 01:52:25.687250 1922 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:52:25.688554 kubelet[1922]: I1213 01:52:25.688535 1922 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:52:25.688980 kubelet[1922]: I1213 01:52:25.688957 1922 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:52:25.689055 kubelet[1922]: I1213 01:52:25.689020 1922 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:52:25.690245 kubelet[1922]: E1213 01:52:25.690226 1922 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:52:25.690886 kubelet[1922]: I1213 01:52:25.690858 1922 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:52:25.690968 kubelet[1922]: I1213 01:52:25.690933 1922 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:52:25.692430 kubelet[1922]: I1213 01:52:25.692413 1922 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:52:25.694407 kubelet[1922]: E1213 01:52:25.694385 1922 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.15\" not found" node="10.200.8.15" Dec 13 01:52:25.704840 kubelet[1922]: I1213 01:52:25.704826 1922 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:52:25.704967 kubelet[1922]: I1213 01:52:25.704955 1922 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 
01:52:25.705094 kubelet[1922]: I1213 01:52:25.705083 1922 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:52:25.711711 kubelet[1922]: I1213 01:52:25.711698 1922 policy_none.go:49] "None policy: Start" Dec 13 01:52:25.712294 kubelet[1922]: I1213 01:52:25.712274 1922 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:52:25.712423 kubelet[1922]: I1213 01:52:25.712414 1922 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:52:25.719671 systemd[1]: Created slice kubepods.slice. Dec 13 01:52:25.724281 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 01:52:25.730575 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 01:52:25.732483 kubelet[1922]: I1213 01:52:25.732463 1922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:52:25.733779 kubelet[1922]: I1213 01:52:25.733765 1922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:52:25.733883 kubelet[1922]: I1213 01:52:25.733872 1922 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:52:25.733956 kubelet[1922]: I1213 01:52:25.733946 1922 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:52:25.734105 kubelet[1922]: E1213 01:52:25.734080 1922 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:52:25.738893 kubelet[1922]: I1213 01:52:25.738861 1922 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:52:25.745471 kubelet[1922]: I1213 01:52:25.745420 1922 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:52:25.745663 kubelet[1922]: I1213 01:52:25.745652 1922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:52:25.746558 kubelet[1922]: E1213 01:52:25.746545 1922 
eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.15\" not found" Dec 13 01:52:25.789757 kubelet[1922]: I1213 01:52:25.789729 1922 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.15" Dec 13 01:52:25.794488 kubelet[1922]: I1213 01:52:25.794467 1922 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.15" Dec 13 01:52:25.806256 kubelet[1922]: E1213 01:52:25.806235 1922 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Dec 13 01:52:25.907367 kubelet[1922]: E1213 01:52:25.907214 1922 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Dec 13 01:52:26.008000 kubelet[1922]: E1213 01:52:26.007948 1922 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Dec 13 01:52:26.068811 sudo[1780]: pam_unix(sudo:session): session closed for user root Dec 13 01:52:26.108862 kubelet[1922]: E1213 01:52:26.108808 1922 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Dec 13 01:52:26.188740 sshd[1777]: pam_unix(sshd:session): session closed for user core Dec 13 01:52:26.192296 systemd[1]: sshd@4-10.200.8.15:22-10.200.16.10:58520.service: Deactivated successfully. Dec 13 01:52:26.193112 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:52:26.193792 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:52:26.194633 systemd-logind[1424]: Removed session 7. 
Dec 13 01:52:26.209811 kubelet[1922]: E1213 01:52:26.209780 1922 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Dec 13 01:52:26.310599 kubelet[1922]: E1213 01:52:26.310558 1922 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Dec 13 01:52:26.411407 kubelet[1922]: E1213 01:52:26.411355 1922 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.15\" not found" Dec 13 01:52:26.512490 kubelet[1922]: I1213 01:52:26.512372 1922 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:52:26.512909 env[1438]: time="2024-12-13T01:52:26.512743792Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:52:26.513387 kubelet[1922]: I1213 01:52:26.513364 1922 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:52:26.615596 kubelet[1922]: I1213 01:52:26.615562 1922 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:52:26.618996 kubelet[1922]: W1213 01:52:26.618969 1922 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:52:26.619212 kubelet[1922]: W1213 01:52:26.619190 1922 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:52:26.619278 kubelet[1922]: W1213 01:52:26.619233 1922 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: 
k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:52:26.664411 kubelet[1922]: E1213 01:52:26.664373 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:26.664411 kubelet[1922]: I1213 01:52:26.664389 1922 apiserver.go:52] "Watching apiserver" Dec 13 01:52:26.668114 kubelet[1922]: I1213 01:52:26.668067 1922 topology_manager.go:215] "Topology Admit Handler" podUID="b76ec320-9f69-4b63-93ef-d07657991968" podNamespace="kube-system" podName="kube-proxy-mkb6b" Dec 13 01:52:26.668293 kubelet[1922]: I1213 01:52:26.668270 1922 topology_manager.go:215] "Topology Admit Handler" podUID="e436b850-5445-46fa-84e8-98bdf9565446" podNamespace="kube-system" podName="cilium-jxlnm" Dec 13 01:52:26.674137 systemd[1]: Created slice kubepods-burstable-pode436b850_5445_46fa_84e8_98bdf9565446.slice. Dec 13 01:52:26.684213 systemd[1]: Created slice kubepods-besteffort-podb76ec320_9f69_4b63_93ef_d07657991968.slice. 
Dec 13 01:52:26.689554 kubelet[1922]: I1213 01:52:26.689531 1922 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:52:26.695466 kubelet[1922]: I1213 01:52:26.695444 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-host-proc-sys-net\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695564 kubelet[1922]: I1213 01:52:26.695476 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cilium-run\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695564 kubelet[1922]: I1213 01:52:26.695499 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-etc-cni-netd\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695564 kubelet[1922]: I1213 01:52:26.695521 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-xtables-lock\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695564 kubelet[1922]: I1213 01:52:26.695542 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cni-path\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " 
pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695564 kubelet[1922]: I1213 01:52:26.695563 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-lib-modules\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695775 kubelet[1922]: I1213 01:52:26.695586 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48njx\" (UniqueName: \"kubernetes.io/projected/e436b850-5445-46fa-84e8-98bdf9565446-kube-api-access-48njx\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695775 kubelet[1922]: I1213 01:52:26.695608 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b76ec320-9f69-4b63-93ef-d07657991968-lib-modules\") pod \"kube-proxy-mkb6b\" (UID: \"b76ec320-9f69-4b63-93ef-d07657991968\") " pod="kube-system/kube-proxy-mkb6b" Dec 13 01:52:26.695775 kubelet[1922]: I1213 01:52:26.695629 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-hostproc\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695775 kubelet[1922]: I1213 01:52:26.695650 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cilium-cgroup\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695775 kubelet[1922]: I1213 01:52:26.695673 1922 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b76ec320-9f69-4b63-93ef-d07657991968-xtables-lock\") pod \"kube-proxy-mkb6b\" (UID: \"b76ec320-9f69-4b63-93ef-d07657991968\") " pod="kube-system/kube-proxy-mkb6b" Dec 13 01:52:26.695968 kubelet[1922]: I1213 01:52:26.695705 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn4k9\" (UniqueName: \"kubernetes.io/projected/b76ec320-9f69-4b63-93ef-d07657991968-kube-api-access-hn4k9\") pod \"kube-proxy-mkb6b\" (UID: \"b76ec320-9f69-4b63-93ef-d07657991968\") " pod="kube-system/kube-proxy-mkb6b" Dec 13 01:52:26.695968 kubelet[1922]: I1213 01:52:26.695736 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e436b850-5445-46fa-84e8-98bdf9565446-cilium-config-path\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695968 kubelet[1922]: I1213 01:52:26.695762 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-host-proc-sys-kernel\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695968 kubelet[1922]: I1213 01:52:26.695784 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e436b850-5445-46fa-84e8-98bdf9565446-hubble-tls\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.695968 kubelet[1922]: I1213 01:52:26.695807 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" 
(UniqueName: \"kubernetes.io/configmap/b76ec320-9f69-4b63-93ef-d07657991968-kube-proxy\") pod \"kube-proxy-mkb6b\" (UID: \"b76ec320-9f69-4b63-93ef-d07657991968\") " pod="kube-system/kube-proxy-mkb6b" Dec 13 01:52:26.696094 kubelet[1922]: I1213 01:52:26.695827 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-bpf-maps\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.696094 kubelet[1922]: I1213 01:52:26.695850 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e436b850-5445-46fa-84e8-98bdf9565446-clustermesh-secrets\") pod \"cilium-jxlnm\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") " pod="kube-system/cilium-jxlnm" Dec 13 01:52:26.984359 env[1438]: time="2024-12-13T01:52:26.984283715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxlnm,Uid:e436b850-5445-46fa-84e8-98bdf9565446,Namespace:kube-system,Attempt:0,}" Dec 13 01:52:26.991750 env[1438]: time="2024-12-13T01:52:26.991703942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mkb6b,Uid:b76ec320-9f69-4b63-93ef-d07657991968,Namespace:kube-system,Attempt:0,}" Dec 13 01:52:27.664809 kubelet[1922]: E1213 01:52:27.664770 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:28.665919 kubelet[1922]: E1213 01:52:28.665870 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:29.666531 kubelet[1922]: E1213 01:52:29.666479 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:30.667574 kubelet[1922]: E1213 01:52:30.667505 1922 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:30.971477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295235375.mount: Deactivated successfully. Dec 13 01:52:30.998131 env[1438]: time="2024-12-13T01:52:30.998080823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:31.001757 env[1438]: time="2024-12-13T01:52:31.001718123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:31.013478 env[1438]: time="2024-12-13T01:52:31.013433935Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:31.018595 env[1438]: time="2024-12-13T01:52:31.018562272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:31.028601 env[1438]: time="2024-12-13T01:52:31.028563039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:31.034511 env[1438]: time="2024-12-13T01:52:31.034475096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:31.039161 env[1438]: time="2024-12-13T01:52:31.039127321Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:31.043230 env[1438]: time="2024-12-13T01:52:31.043195329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:31.122361 env[1438]: time="2024-12-13T01:52:31.122282338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:52:31.122516 env[1438]: time="2024-12-13T01:52:31.122374041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:52:31.122516 env[1438]: time="2024-12-13T01:52:31.122402242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:52:31.122625 env[1438]: time="2024-12-13T01:52:31.122560246Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e pid=1966 runtime=io.containerd.runc.v2 Dec 13 01:52:31.130383 env[1438]: time="2024-12-13T01:52:31.130232350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:52:31.130591 env[1438]: time="2024-12-13T01:52:31.130551159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:52:31.130733 env[1438]: time="2024-12-13T01:52:31.130707663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:52:31.131032 env[1438]: time="2024-12-13T01:52:31.130993071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d6cacb0d75f4aedddfbcf35d90b28b30dfe5d505d4e70d36807ecf27b24ffa8 pid=1981 runtime=io.containerd.runc.v2 Dec 13 01:52:31.144865 systemd[1]: Started cri-containerd-d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e.scope. Dec 13 01:52:31.157817 systemd[1]: Started cri-containerd-9d6cacb0d75f4aedddfbcf35d90b28b30dfe5d505d4e70d36807ecf27b24ffa8.scope. Dec 13 01:52:31.196431 env[1438]: time="2024-12-13T01:52:31.196376615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mkb6b,Uid:b76ec320-9f69-4b63-93ef-d07657991968,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d6cacb0d75f4aedddfbcf35d90b28b30dfe5d505d4e70d36807ecf27b24ffa8\"" Dec 13 01:52:31.197696 env[1438]: time="2024-12-13T01:52:31.197663449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jxlnm,Uid:e436b850-5445-46fa-84e8-98bdf9565446,Namespace:kube-system,Attempt:0,} returns sandbox id \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\"" Dec 13 01:52:31.199073 env[1438]: time="2024-12-13T01:52:31.198993784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:52:31.667779 kubelet[1922]: E1213 01:52:31.667693 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:32.367490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463472114.mount: Deactivated successfully. 
Dec 13 01:52:32.669055 kubelet[1922]: E1213 01:52:32.668703 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:32.944592 env[1438]: time="2024-12-13T01:52:32.944468669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:32.950380 env[1438]: time="2024-12-13T01:52:32.950345121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:32.956552 env[1438]: time="2024-12-13T01:52:32.956514681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:32.958921 env[1438]: time="2024-12-13T01:52:32.958886743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:32.959326 env[1438]: time="2024-12-13T01:52:32.959285253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 01:52:32.960863 env[1438]: time="2024-12-13T01:52:32.960447284Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:52:32.961906 env[1438]: time="2024-12-13T01:52:32.961875721Z" level=info msg="CreateContainer within sandbox \"9d6cacb0d75f4aedddfbcf35d90b28b30dfe5d505d4e70d36807ecf27b24ffa8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:52:32.988977 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2654741237.mount: Deactivated successfully. Dec 13 01:52:32.996106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345498292.mount: Deactivated successfully. Dec 13 01:52:33.015368 env[1438]: time="2024-12-13T01:52:33.015319899Z" level=info msg="CreateContainer within sandbox \"9d6cacb0d75f4aedddfbcf35d90b28b30dfe5d505d4e70d36807ecf27b24ffa8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bc6312ca572251ba804c630fbd0fca469132c91e31f9dbaaa5ffc50bd06093c\"" Dec 13 01:52:33.016154 env[1438]: time="2024-12-13T01:52:33.016120719Z" level=info msg="StartContainer for \"7bc6312ca572251ba804c630fbd0fca469132c91e31f9dbaaa5ffc50bd06093c\"" Dec 13 01:52:33.035545 systemd[1]: Started cri-containerd-7bc6312ca572251ba804c630fbd0fca469132c91e31f9dbaaa5ffc50bd06093c.scope. Dec 13 01:52:33.073138 env[1438]: time="2024-12-13T01:52:33.073091259Z" level=info msg="StartContainer for \"7bc6312ca572251ba804c630fbd0fca469132c91e31f9dbaaa5ffc50bd06093c\" returns successfully" Dec 13 01:52:33.669143 kubelet[1922]: E1213 01:52:33.669084 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:33.765006 kubelet[1922]: I1213 01:52:33.764942 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mkb6b" podStartSLOduration=7.003162835 podStartE2EDuration="8.764908242s" podCreationTimestamp="2024-12-13 01:52:25 +0000 UTC" firstStartedPulling="2024-12-13 01:52:31.198505571 +0000 UTC m=+6.646276099" lastFinishedPulling="2024-12-13 01:52:32.960250978 +0000 UTC m=+8.408021506" observedRunningTime="2024-12-13 01:52:33.764702937 +0000 UTC m=+9.212473465" watchObservedRunningTime="2024-12-13 01:52:33.764908242 +0000 UTC m=+9.212678870" Dec 13 01:52:34.669519 kubelet[1922]: E1213 01:52:34.669433 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 01:52:35.670054 kubelet[1922]: E1213 01:52:35.670014 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:36.670944 kubelet[1922]: E1213 01:52:36.670870 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:37.671893 kubelet[1922]: E1213 01:52:37.671826 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:38.672199 kubelet[1922]: E1213 01:52:38.672151 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:38.720548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040207442.mount: Deactivated successfully. Dec 13 01:52:39.673002 kubelet[1922]: E1213 01:52:39.672939 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:40.674088 kubelet[1922]: E1213 01:52:40.674025 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:41.334838 env[1438]: time="2024-12-13T01:52:41.334784601Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:41.340251 env[1438]: time="2024-12-13T01:52:41.340214212Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:41.344393 env[1438]: time="2024-12-13T01:52:41.344359097Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:52:41.344875 env[1438]: time="2024-12-13T01:52:41.344844907Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:52:41.347486 env[1438]: time="2024-12-13T01:52:41.347453661Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:52:41.367400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295885430.mount: Deactivated successfully. Dec 13 01:52:41.380651 env[1438]: time="2024-12-13T01:52:41.380615140Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\"" Dec 13 01:52:41.381144 env[1438]: time="2024-12-13T01:52:41.381106250Z" level=info msg="StartContainer for \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\"" Dec 13 01:52:41.404038 systemd[1]: Started cri-containerd-5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1.scope. Dec 13 01:52:41.436484 env[1438]: time="2024-12-13T01:52:41.436439783Z" level=info msg="StartContainer for \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\" returns successfully" Dec 13 01:52:41.439999 systemd[1]: cri-containerd-5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1.scope: Deactivated successfully. 
Dec 13 01:52:42.102887 kubelet[1922]: E1213 01:52:41.674358 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:42.365812 systemd[1]: run-containerd-runc-k8s.io-5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1-runc.VQ6mTh.mount: Deactivated successfully. Dec 13 01:52:42.365937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1-rootfs.mount: Deactivated successfully. Dec 13 01:52:42.675080 kubelet[1922]: E1213 01:52:42.674943 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:43.675374 kubelet[1922]: E1213 01:52:43.675321 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:44.675800 kubelet[1922]: E1213 01:52:44.675751 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:45.663342 kubelet[1922]: E1213 01:52:45.663292 1922 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:45.676784 kubelet[1922]: E1213 01:52:45.676734 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:45.858848 env[1438]: time="2024-12-13T01:52:45.858771518Z" level=info msg="shim disconnected" id=5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1 Dec 13 01:52:45.858848 env[1438]: time="2024-12-13T01:52:45.858831319Z" level=warning msg="cleaning up after shim disconnected" id=5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1 namespace=k8s.io Dec 13 01:52:45.858848 env[1438]: time="2024-12-13T01:52:45.858847619Z" level=info msg="cleaning up dead shim" Dec 13 01:52:45.868947 env[1438]: 
time="2024-12-13T01:52:45.868902405Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:52:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2248 runtime=io.containerd.runc.v2\n" Dec 13 01:52:46.677131 kubelet[1922]: E1213 01:52:46.676944 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:46.779895 env[1438]: time="2024-12-13T01:52:46.779843008Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:52:46.843370 env[1438]: time="2024-12-13T01:52:46.843295053Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\"" Dec 13 01:52:46.844043 env[1438]: time="2024-12-13T01:52:46.844005866Z" level=info msg="StartContainer for \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\"" Dec 13 01:52:46.868256 systemd[1]: Started cri-containerd-2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5.scope. Dec 13 01:52:46.896507 env[1438]: time="2024-12-13T01:52:46.896466813Z" level=info msg="StartContainer for \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\" returns successfully" Dec 13 01:52:46.902927 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:52:46.903470 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:52:46.903674 systemd[1]: Stopping systemd-sysctl.service... Dec 13 01:52:46.905708 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:52:46.914261 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Dec 13 01:52:46.915252 systemd[1]: cri-containerd-2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5.scope: Deactivated successfully. Dec 13 01:52:46.920295 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:52:46.952033 env[1438]: time="2024-12-13T01:52:46.951904914Z" level=info msg="shim disconnected" id=2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5 Dec 13 01:52:46.952033 env[1438]: time="2024-12-13T01:52:46.951958215Z" level=warning msg="cleaning up after shim disconnected" id=2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5 namespace=k8s.io Dec 13 01:52:46.952033 env[1438]: time="2024-12-13T01:52:46.951969915Z" level=info msg="cleaning up dead shim" Dec 13 01:52:46.959780 env[1438]: time="2024-12-13T01:52:46.959743055Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:52:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2313 runtime=io.containerd.runc.v2\n" Dec 13 01:52:47.677224 kubelet[1922]: E1213 01:52:47.677173 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:47.784099 env[1438]: time="2024-12-13T01:52:47.784039592Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:52:47.813364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5-rootfs.mount: Deactivated successfully. 
Dec 13 01:52:47.818715 env[1438]: time="2024-12-13T01:52:47.818673302Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\"" Dec 13 01:52:47.819223 env[1438]: time="2024-12-13T01:52:47.819190611Z" level=info msg="StartContainer for \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\"" Dec 13 01:52:47.845088 systemd[1]: Started cri-containerd-6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839.scope. Dec 13 01:52:47.877749 systemd[1]: cri-containerd-6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839.scope: Deactivated successfully. Dec 13 01:52:47.878661 env[1438]: time="2024-12-13T01:52:47.878620157Z" level=info msg="StartContainer for \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\" returns successfully" Dec 13 01:52:47.896809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839-rootfs.mount: Deactivated successfully. 
Dec 13 01:52:47.914562 env[1438]: time="2024-12-13T01:52:47.914509989Z" level=info msg="shim disconnected" id=6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839 Dec 13 01:52:47.914562 env[1438]: time="2024-12-13T01:52:47.914558790Z" level=warning msg="cleaning up after shim disconnected" id=6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839 namespace=k8s.io Dec 13 01:52:47.915067 env[1438]: time="2024-12-13T01:52:47.914569890Z" level=info msg="cleaning up dead shim" Dec 13 01:52:47.921976 env[1438]: time="2024-12-13T01:52:47.921941820Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:52:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2369 runtime=io.containerd.runc.v2\n" Dec 13 01:52:48.678269 kubelet[1922]: E1213 01:52:48.678209 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:48.787758 env[1438]: time="2024-12-13T01:52:48.787720525Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:52:48.813099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586868237.mount: Deactivated successfully. Dec 13 01:52:48.832493 env[1438]: time="2024-12-13T01:52:48.832449094Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\"" Dec 13 01:52:48.833025 env[1438]: time="2024-12-13T01:52:48.832993003Z" level=info msg="StartContainer for \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\"" Dec 13 01:52:48.855238 systemd[1]: Started cri-containerd-595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8.scope. 
Dec 13 01:52:48.878089 systemd[1]: cri-containerd-595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8.scope: Deactivated successfully. Dec 13 01:52:48.879404 env[1438]: time="2024-12-13T01:52:48.879329399Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode436b850_5445_46fa_84e8_98bdf9565446.slice/cri-containerd-595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8.scope/memory.events\": no such file or directory" Dec 13 01:52:48.885714 env[1438]: time="2024-12-13T01:52:48.885679508Z" level=info msg="StartContainer for \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\" returns successfully" Dec 13 01:52:48.900493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8-rootfs.mount: Deactivated successfully. Dec 13 01:52:48.915836 env[1438]: time="2024-12-13T01:52:48.915785725Z" level=info msg="shim disconnected" id=595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8 Dec 13 01:52:48.916209 env[1438]: time="2024-12-13T01:52:48.915835926Z" level=warning msg="cleaning up after shim disconnected" id=595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8 namespace=k8s.io Dec 13 01:52:48.916209 env[1438]: time="2024-12-13T01:52:48.915848926Z" level=info msg="cleaning up dead shim" Dec 13 01:52:48.923711 env[1438]: time="2024-12-13T01:52:48.923679761Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:52:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2425 runtime=io.containerd.runc.v2\n" Dec 13 01:52:49.678406 kubelet[1922]: E1213 01:52:49.678345 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:49.793409 env[1438]: time="2024-12-13T01:52:49.793363879Z" level=info msg="CreateContainer within sandbox 
\"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:52:49.841610 env[1438]: time="2024-12-13T01:52:49.841569487Z" level=info msg="CreateContainer within sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\"" Dec 13 01:52:49.842047 env[1438]: time="2024-12-13T01:52:49.842017895Z" level=info msg="StartContainer for \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\"" Dec 13 01:52:49.861429 systemd[1]: Started cri-containerd-311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb.scope. Dec 13 01:52:49.902362 env[1438]: time="2024-12-13T01:52:49.902295305Z" level=info msg="StartContainer for \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\" returns successfully" Dec 13 01:52:50.047701 kubelet[1922]: I1213 01:52:50.046262 1922 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:52:50.523345 kernel: Initializing XFRM netlink socket Dec 13 01:52:50.679379 kubelet[1922]: E1213 01:52:50.679303 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:51.680279 kubelet[1922]: E1213 01:52:51.680220 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:52.185078 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 01:52:52.186134 systemd-networkd[1581]: cilium_host: Link UP Dec 13 01:52:52.186325 systemd-networkd[1581]: cilium_net: Link UP Dec 13 01:52:52.186329 systemd-networkd[1581]: cilium_net: Gained carrier Dec 13 01:52:52.186576 systemd-networkd[1581]: cilium_host: Gained carrier Dec 13 01:52:52.186789 systemd-networkd[1581]: cilium_host: Gained IPv6LL Dec 13 
01:52:52.359491 systemd-networkd[1581]: cilium_vxlan: Link UP Dec 13 01:52:52.359689 systemd-networkd[1581]: cilium_vxlan: Gained carrier Dec 13 01:52:52.580420 systemd-networkd[1581]: cilium_net: Gained IPv6LL Dec 13 01:52:52.598340 kernel: NET: Registered PF_ALG protocol family Dec 13 01:52:52.681424 kubelet[1922]: E1213 01:52:52.681372 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:53.301522 systemd-networkd[1581]: lxc_health: Link UP Dec 13 01:52:53.316335 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 01:52:53.317029 systemd-networkd[1581]: lxc_health: Gained carrier Dec 13 01:52:53.532338 kubelet[1922]: I1213 01:52:53.532244 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jxlnm" podStartSLOduration=18.385790663 podStartE2EDuration="28.532201494s" podCreationTimestamp="2024-12-13 01:52:25 +0000 UTC" firstStartedPulling="2024-12-13 01:52:31.1995883 +0000 UTC m=+6.647358828" lastFinishedPulling="2024-12-13 01:52:41.345999131 +0000 UTC m=+16.793769659" observedRunningTime="2024-12-13 01:52:50.813145547 +0000 UTC m=+26.260916075" watchObservedRunningTime="2024-12-13 01:52:53.532201494 +0000 UTC m=+28.979972022" Dec 13 01:52:53.532764 kubelet[1922]: I1213 01:52:53.532732 1922 topology_manager.go:215] "Topology Admit Handler" podUID="00725c65-6c28-4235-af0d-6700ebdfc50d" podNamespace="default" podName="nginx-deployment-85f456d6dd-dls8z" Dec 13 01:52:53.539287 systemd[1]: Created slice kubepods-besteffort-pod00725c65_6c28_4235_af0d_6700ebdfc50d.slice. 
Dec 13 01:52:53.583473 kubelet[1922]: I1213 01:52:53.583364 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dsst\" (UniqueName: \"kubernetes.io/projected/00725c65-6c28-4235-af0d-6700ebdfc50d-kube-api-access-2dsst\") pod \"nginx-deployment-85f456d6dd-dls8z\" (UID: \"00725c65-6c28-4235-af0d-6700ebdfc50d\") " pod="default/nginx-deployment-85f456d6dd-dls8z" Dec 13 01:52:53.682323 kubelet[1922]: E1213 01:52:53.682220 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:53.843817 env[1438]: time="2024-12-13T01:52:53.843470935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-dls8z,Uid:00725c65-6c28-4235-af0d-6700ebdfc50d,Namespace:default,Attempt:0,}" Dec 13 01:52:53.940823 systemd-networkd[1581]: lxc0135ffa5979d: Link UP Dec 13 01:52:53.950399 kernel: eth0: renamed from tmp54b63 Dec 13 01:52:53.962379 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0135ffa5979d: link becomes ready Dec 13 01:52:53.962210 systemd-networkd[1581]: lxc0135ffa5979d: Gained carrier Dec 13 01:52:54.004488 systemd-networkd[1581]: cilium_vxlan: Gained IPv6LL Dec 13 01:52:54.682713 kubelet[1922]: E1213 01:52:54.682667 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:54.964599 systemd-networkd[1581]: lxc_health: Gained IPv6LL Dec 13 01:52:55.684376 kubelet[1922]: E1213 01:52:55.684324 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:55.796579 systemd-networkd[1581]: lxc0135ffa5979d: Gained IPv6LL Dec 13 01:52:56.684633 kubelet[1922]: E1213 01:52:56.684578 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:57.294574 env[1438]: time="2024-12-13T01:52:57.294472618Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:52:57.295046 env[1438]: time="2024-12-13T01:52:57.294535319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:52:57.295046 env[1438]: time="2024-12-13T01:52:57.294548819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:52:57.295046 env[1438]: time="2024-12-13T01:52:57.294818823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54b633bc819695a27270743ac6b0b04ec7a145502ca71ba7b076b1691eb651b0 pid=2959 runtime=io.containerd.runc.v2 Dec 13 01:52:57.319397 systemd[1]: run-containerd-runc-k8s.io-54b633bc819695a27270743ac6b0b04ec7a145502ca71ba7b076b1691eb651b0-runc.cyqLp1.mount: Deactivated successfully. Dec 13 01:52:57.324559 systemd[1]: Started cri-containerd-54b633bc819695a27270743ac6b0b04ec7a145502ca71ba7b076b1691eb651b0.scope. 
Dec 13 01:52:57.360950 env[1438]: time="2024-12-13T01:52:57.360905440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-dls8z,Uid:00725c65-6c28-4235-af0d-6700ebdfc50d,Namespace:default,Attempt:0,} returns sandbox id \"54b633bc819695a27270743ac6b0b04ec7a145502ca71ba7b076b1691eb651b0\"" Dec 13 01:52:57.362801 env[1438]: time="2024-12-13T01:52:57.362696565Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:52:57.686405 kubelet[1922]: E1213 01:52:57.686247 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:58.686942 kubelet[1922]: E1213 01:52:58.686885 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:52:59.687943 kubelet[1922]: E1213 01:52:59.687883 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:00.223375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423133895.mount: Deactivated successfully. 
Dec 13 01:53:00.688906 kubelet[1922]: E1213 01:53:00.688840 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:01.689242 kubelet[1922]: E1213 01:53:01.689174 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:01.756527 env[1438]: time="2024-12-13T01:53:01.756479123Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:01.766379 env[1438]: time="2024-12-13T01:53:01.766338648Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:01.772406 env[1438]: time="2024-12-13T01:53:01.772374625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:01.778204 env[1438]: time="2024-12-13T01:53:01.778175598Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:01.778778 env[1438]: time="2024-12-13T01:53:01.778746506Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:53:01.781244 env[1438]: time="2024-12-13T01:53:01.781214737Z" level=info msg="CreateContainer within sandbox \"54b633bc819695a27270743ac6b0b04ec7a145502ca71ba7b076b1691eb651b0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:53:01.843494 env[1438]: time="2024-12-13T01:53:01.843444227Z" level=info msg="CreateContainer 
within sandbox \"54b633bc819695a27270743ac6b0b04ec7a145502ca71ba7b076b1691eb651b0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"21be8bf875abd8096a57d9cccc2375075593f6640f04aef087754fdc1ef9d5a4\"" Dec 13 01:53:01.843935 env[1438]: time="2024-12-13T01:53:01.843906933Z" level=info msg="StartContainer for \"21be8bf875abd8096a57d9cccc2375075593f6640f04aef087754fdc1ef9d5a4\"" Dec 13 01:53:01.870169 systemd[1]: Started cri-containerd-21be8bf875abd8096a57d9cccc2375075593f6640f04aef087754fdc1ef9d5a4.scope. Dec 13 01:53:01.900463 env[1438]: time="2024-12-13T01:53:01.900420450Z" level=info msg="StartContainer for \"21be8bf875abd8096a57d9cccc2375075593f6640f04aef087754fdc1ef9d5a4\" returns successfully" Dec 13 01:53:02.689474 kubelet[1922]: E1213 01:53:02.689417 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:02.828449 systemd[1]: run-containerd-runc-k8s.io-21be8bf875abd8096a57d9cccc2375075593f6640f04aef087754fdc1ef9d5a4-runc.OsfXzC.mount: Deactivated successfully. 
Dec 13 01:53:02.829871 kubelet[1922]: I1213 01:53:02.829734 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-dls8z" podStartSLOduration=5.412209258 podStartE2EDuration="9.829714917s" podCreationTimestamp="2024-12-13 01:52:53 +0000 UTC" firstStartedPulling="2024-12-13 01:52:57.362405661 +0000 UTC m=+32.810176189" lastFinishedPulling="2024-12-13 01:53:01.77991132 +0000 UTC m=+37.227681848" observedRunningTime="2024-12-13 01:53:02.829457913 +0000 UTC m=+38.277228441" watchObservedRunningTime="2024-12-13 01:53:02.829714917 +0000 UTC m=+38.277485445" Dec 13 01:53:03.690481 kubelet[1922]: E1213 01:53:03.690413 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:04.690703 kubelet[1922]: E1213 01:53:04.690650 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:05.663184 kubelet[1922]: E1213 01:53:05.663123 1922 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:05.691569 kubelet[1922]: E1213 01:53:05.691529 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:06.692243 kubelet[1922]: E1213 01:53:06.692184 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:07.693269 kubelet[1922]: E1213 01:53:07.693210 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:08.694161 kubelet[1922]: E1213 01:53:08.694101 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:09.694568 kubelet[1922]: E1213 01:53:09.694506 1922 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:10.695591 kubelet[1922]: E1213 01:53:10.695531 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:11.189590 kubelet[1922]: I1213 01:53:11.189552 1922 topology_manager.go:215] "Topology Admit Handler" podUID="202ff5ea-fda4-44ff-a6e2-e64af5f76563" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 01:53:11.195652 systemd[1]: Created slice kubepods-besteffort-pod202ff5ea_fda4_44ff_a6e2_e64af5f76563.slice. Dec 13 01:53:11.290770 kubelet[1922]: I1213 01:53:11.290713 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/202ff5ea-fda4-44ff-a6e2-e64af5f76563-data\") pod \"nfs-server-provisioner-0\" (UID: \"202ff5ea-fda4-44ff-a6e2-e64af5f76563\") " pod="default/nfs-server-provisioner-0" Dec 13 01:53:11.290770 kubelet[1922]: I1213 01:53:11.290766 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mltdd\" (UniqueName: \"kubernetes.io/projected/202ff5ea-fda4-44ff-a6e2-e64af5f76563-kube-api-access-mltdd\") pod \"nfs-server-provisioner-0\" (UID: \"202ff5ea-fda4-44ff-a6e2-e64af5f76563\") " pod="default/nfs-server-provisioner-0" Dec 13 01:53:11.501694 env[1438]: time="2024-12-13T01:53:11.501562565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:202ff5ea-fda4-44ff-a6e2-e64af5f76563,Namespace:default,Attempt:0,}" Dec 13 01:53:11.564613 systemd-networkd[1581]: lxc4ebaad37ff30: Link UP Dec 13 01:53:11.570336 kernel: eth0: renamed from tmp74868 Dec 13 01:53:11.581437 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:53:11.581520 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4ebaad37ff30: link becomes ready Dec 13 01:53:11.581866 systemd-networkd[1581]: lxc4ebaad37ff30: Gained carrier Dec 13 
01:53:11.696453 kubelet[1922]: E1213 01:53:11.696381 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:11.756281 env[1438]: time="2024-12-13T01:53:11.756002686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:11.756281 env[1438]: time="2024-12-13T01:53:11.756043687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:11.756281 env[1438]: time="2024-12-13T01:53:11.756057287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:11.756739 env[1438]: time="2024-12-13T01:53:11.756261489Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/748681dc12faa7e8a4982e276bd7e44a4efda52935f03c6cbc91e509554b4485 pid=3089 runtime=io.containerd.runc.v2 Dec 13 01:53:11.774560 systemd[1]: Started cri-containerd-748681dc12faa7e8a4982e276bd7e44a4efda52935f03c6cbc91e509554b4485.scope. 
Dec 13 01:53:11.812728 env[1438]: time="2024-12-13T01:53:11.812689670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:202ff5ea-fda4-44ff-a6e2-e64af5f76563,Namespace:default,Attempt:0,} returns sandbox id \"748681dc12faa7e8a4982e276bd7e44a4efda52935f03c6cbc91e509554b4485\"" Dec 13 01:53:11.814647 env[1438]: time="2024-12-13T01:53:11.814621490Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:53:12.696889 kubelet[1922]: E1213 01:53:12.696838 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:13.397711 systemd-networkd[1581]: lxc4ebaad37ff30: Gained IPv6LL Dec 13 01:53:13.698285 kubelet[1922]: E1213 01:53:13.697911 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:14.559952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734675184.mount: Deactivated successfully. 
Dec 13 01:53:14.698711 kubelet[1922]: E1213 01:53:14.698665 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:15.699402 kubelet[1922]: E1213 01:53:15.699323 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:16.633638 env[1438]: time="2024-12-13T01:53:16.633586407Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:16.641661 env[1438]: time="2024-12-13T01:53:16.641618082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:16.647856 env[1438]: time="2024-12-13T01:53:16.647774439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:16.653619 env[1438]: time="2024-12-13T01:53:16.653536993Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:16.654611 env[1438]: time="2024-12-13T01:53:16.654574503Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 01:53:16.657185 env[1438]: time="2024-12-13T01:53:16.657147427Z" level=info msg="CreateContainer within sandbox \"748681dc12faa7e8a4982e276bd7e44a4efda52935f03c6cbc91e509554b4485\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 
13 01:53:16.699705 kubelet[1922]: E1213 01:53:16.699675 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:16.712798 env[1438]: time="2024-12-13T01:53:16.712754748Z" level=info msg="CreateContainer within sandbox \"748681dc12faa7e8a4982e276bd7e44a4efda52935f03c6cbc91e509554b4485\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e3146466601dd81a608f9630e01e413c0e2e053d0a074e938b18821465992fcf\"" Dec 13 01:53:16.713404 env[1438]: time="2024-12-13T01:53:16.713376454Z" level=info msg="StartContainer for \"e3146466601dd81a608f9630e01e413c0e2e053d0a074e938b18821465992fcf\"" Dec 13 01:53:16.737822 systemd[1]: Started cri-containerd-e3146466601dd81a608f9630e01e413c0e2e053d0a074e938b18821465992fcf.scope. Dec 13 01:53:16.764892 env[1438]: time="2024-12-13T01:53:16.764845836Z" level=info msg="StartContainer for \"e3146466601dd81a608f9630e01e413c0e2e053d0a074e938b18821465992fcf\" returns successfully" Dec 13 01:53:16.866294 kubelet[1922]: I1213 01:53:16.866230 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.024593256 podStartE2EDuration="5.866214185s" podCreationTimestamp="2024-12-13 01:53:11 +0000 UTC" firstStartedPulling="2024-12-13 01:53:11.814082085 +0000 UTC m=+47.261852613" lastFinishedPulling="2024-12-13 01:53:16.655703014 +0000 UTC m=+52.103473542" observedRunningTime="2024-12-13 01:53:16.86567858 +0000 UTC m=+52.313449208" watchObservedRunningTime="2024-12-13 01:53:16.866214185 +0000 UTC m=+52.313984813" Dec 13 01:53:17.700757 kubelet[1922]: E1213 01:53:17.700702 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:18.701105 kubelet[1922]: E1213 01:53:18.701050 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 
13 01:53:19.702220 kubelet[1922]: E1213 01:53:19.702163 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:20.702970 kubelet[1922]: E1213 01:53:20.702907 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:21.703128 kubelet[1922]: E1213 01:53:21.703059 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:22.704182 kubelet[1922]: E1213 01:53:22.704128 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:23.705073 kubelet[1922]: E1213 01:53:23.705012 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:24.705460 kubelet[1922]: E1213 01:53:24.705396 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:25.663957 kubelet[1922]: E1213 01:53:25.663892 1922 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:25.706284 kubelet[1922]: E1213 01:53:25.706241 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:26.686040 kubelet[1922]: I1213 01:53:26.685991 1922 topology_manager.go:215] "Topology Admit Handler" podUID="3d662a1f-38a5-403a-99d0-3e030db477d8" podNamespace="default" podName="test-pod-1" Dec 13 01:53:26.692402 systemd[1]: Created slice kubepods-besteffort-pod3d662a1f_38a5_403a_99d0_3e030db477d8.slice. 
Dec 13 01:53:26.706652 kubelet[1922]: E1213 01:53:26.706616 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:26.890463 kubelet[1922]: I1213 01:53:26.890419 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-844p9\" (UniqueName: \"kubernetes.io/projected/3d662a1f-38a5-403a-99d0-3e030db477d8-kube-api-access-844p9\") pod \"test-pod-1\" (UID: \"3d662a1f-38a5-403a-99d0-3e030db477d8\") " pod="default/test-pod-1" Dec 13 01:53:26.890725 kubelet[1922]: I1213 01:53:26.890698 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d4db190-1095-4256-b048-f0f7b1856460\" (UniqueName: \"kubernetes.io/nfs/3d662a1f-38a5-403a-99d0-3e030db477d8-pvc-6d4db190-1095-4256-b048-f0f7b1856460\") pod \"test-pod-1\" (UID: \"3d662a1f-38a5-403a-99d0-3e030db477d8\") " pod="default/test-pod-1" Dec 13 01:53:27.178394 kernel: FS-Cache: Loaded Dec 13 01:53:27.275006 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:53:27.275143 kernel: RPC: Registered udp transport module. Dec 13 01:53:27.275171 kernel: RPC: Registered tcp transport module. Dec 13 01:53:27.280700 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 01:53:27.485349 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 01:53:27.707714 kubelet[1922]: E1213 01:53:27.707621 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:27.714673 kernel: NFS: Registering the id_resolver key type
Dec 13 01:53:27.714778 kernel: Key type id_resolver registered
Dec 13 01:53:27.714804 kernel: Key type id_legacy registered
Dec 13 01:53:28.068822 nfsidmap[3210]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-002183bad1'
Dec 13 01:53:28.085696 nfsidmap[3211]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-002183bad1'
Dec 13 01:53:28.196743 env[1438]: time="2024-12-13T01:53:28.196694098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3d662a1f-38a5-403a-99d0-3e030db477d8,Namespace:default,Attempt:0,}"
Dec 13 01:53:28.264731 systemd-networkd[1581]: lxc5c18d41a36e0: Link UP
Dec 13 01:53:28.274911 kernel: eth0: renamed from tmp4a02a
Dec 13 01:53:28.287135 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 01:53:28.287211 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5c18d41a36e0: link becomes ready
Dec 13 01:53:28.287403 systemd-networkd[1581]: lxc5c18d41a36e0: Gained carrier
Dec 13 01:53:28.428777 env[1438]: time="2024-12-13T01:53:28.428651569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:53:28.428777 env[1438]: time="2024-12-13T01:53:28.428687069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:53:28.428777 env[1438]: time="2024-12-13T01:53:28.428701069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:53:28.429429 env[1438]: time="2024-12-13T01:53:28.429350474Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a02afc4260a22920e39bea5828657635d6cc4b05b7cd2407ed5eb4f8e2036a0 pid=3238 runtime=io.containerd.runc.v2
Dec 13 01:53:28.442414 systemd[1]: Started cri-containerd-4a02afc4260a22920e39bea5828657635d6cc4b05b7cd2407ed5eb4f8e2036a0.scope.
Dec 13 01:53:28.482336 env[1438]: time="2024-12-13T01:53:28.481636073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3d662a1f-38a5-403a-99d0-3e030db477d8,Namespace:default,Attempt:0,} returns sandbox id \"4a02afc4260a22920e39bea5828657635d6cc4b05b7cd2407ed5eb4f8e2036a0\""
Dec 13 01:53:28.483696 env[1438]: time="2024-12-13T01:53:28.483665889Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 01:53:28.708743 kubelet[1922]: E1213 01:53:28.708612 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:28.787405 env[1438]: time="2024-12-13T01:53:28.787362408Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:53:28.799746 env[1438]: time="2024-12-13T01:53:28.799697002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:53:28.803681 env[1438]: time="2024-12-13T01:53:28.803642732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:53:28.809239 env[1438]: time="2024-12-13T01:53:28.809204374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:53:28.810079 env[1438]: time="2024-12-13T01:53:28.810035581Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 01:53:28.813140 env[1438]: time="2024-12-13T01:53:28.813113704Z" level=info msg="CreateContainer within sandbox \"4a02afc4260a22920e39bea5828657635d6cc4b05b7cd2407ed5eb4f8e2036a0\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 01:53:28.850062 env[1438]: time="2024-12-13T01:53:28.850027286Z" level=info msg="CreateContainer within sandbox \"4a02afc4260a22920e39bea5828657635d6cc4b05b7cd2407ed5eb4f8e2036a0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"be5e93384bb2cc3094d7c94192ceeeb750135c77807acb6d4a1c3376aabd004a\""
Dec 13 01:53:28.850546 env[1438]: time="2024-12-13T01:53:28.850506690Z" level=info msg="StartContainer for \"be5e93384bb2cc3094d7c94192ceeeb750135c77807acb6d4a1c3376aabd004a\""
Dec 13 01:53:28.866047 systemd[1]: Started cri-containerd-be5e93384bb2cc3094d7c94192ceeeb750135c77807acb6d4a1c3376aabd004a.scope.
Dec 13 01:53:28.897866 env[1438]: time="2024-12-13T01:53:28.897825651Z" level=info msg="StartContainer for \"be5e93384bb2cc3094d7c94192ceeeb750135c77807acb6d4a1c3376aabd004a\" returns successfully"
Dec 13 01:53:29.709621 kubelet[1922]: E1213 01:53:29.709548 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:29.844606 systemd-networkd[1581]: lxc5c18d41a36e0: Gained IPv6LL
Dec 13 01:53:29.895000 kubelet[1922]: I1213 01:53:29.894940 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.566349351 podStartE2EDuration="17.89492056s" podCreationTimestamp="2024-12-13 01:53:12 +0000 UTC" firstStartedPulling="2024-12-13 01:53:28.483027784 +0000 UTC m=+63.930798312" lastFinishedPulling="2024-12-13 01:53:28.811598993 +0000 UTC m=+64.259369521" observedRunningTime="2024-12-13 01:53:29.89484306 +0000 UTC m=+65.342613688" watchObservedRunningTime="2024-12-13 01:53:29.89492056 +0000 UTC m=+65.342691088"
Dec 13 01:53:30.710455 kubelet[1922]: E1213 01:53:30.710395 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:31.711356 kubelet[1922]: E1213 01:53:31.711284 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:32.712515 kubelet[1922]: E1213 01:53:32.712456 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:33.713139 kubelet[1922]: E1213 01:53:33.713031 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:34.713538 kubelet[1922]: E1213 01:53:34.713475 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:34.940881 env[1438]: time="2024-12-13T01:53:34.940808711Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:53:34.946430 env[1438]: time="2024-12-13T01:53:34.946392550Z" level=info msg="StopContainer for \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\" with timeout 2 (s)"
Dec 13 01:53:34.946685 env[1438]: time="2024-12-13T01:53:34.946645551Z" level=info msg="Stop container \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\" with signal terminated"
Dec 13 01:53:34.954397 systemd-networkd[1581]: lxc_health: Link DOWN
Dec 13 01:53:34.954407 systemd-networkd[1581]: lxc_health: Lost carrier
Dec 13 01:53:34.975653 systemd[1]: cri-containerd-311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb.scope: Deactivated successfully.
Dec 13 01:53:34.975941 systemd[1]: cri-containerd-311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb.scope: Consumed 6.336s CPU time.
Dec 13 01:53:34.995443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb-rootfs.mount: Deactivated successfully.
Dec 13 01:53:35.714433 kubelet[1922]: E1213 01:53:35.714378 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:35.763342 kubelet[1922]: E1213 01:53:35.763277 1922 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:53:36.715148 kubelet[1922]: E1213 01:53:36.715095 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:36.954108 env[1438]: time="2024-12-13T01:53:36.954038204Z" level=info msg="Kill container \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\""
Dec 13 01:53:37.594886 kubelet[1922]: I1213 01:53:37.594634 1922 setters.go:580] "Node became not ready" node="10.200.8.15" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:53:37Z","lastTransitionTime":"2024-12-13T01:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:53:37.649001 env[1438]: time="2024-12-13T01:53:37.648915871Z" level=info msg="shim disconnected" id=311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb
Dec 13 01:53:37.649001 env[1438]: time="2024-12-13T01:53:37.648995771Z" level=warning msg="cleaning up after shim disconnected" id=311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb namespace=k8s.io
Dec 13 01:53:37.649273 env[1438]: time="2024-12-13T01:53:37.649011171Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:37.659743 env[1438]: time="2024-12-13T01:53:37.659700543Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3375 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:37.670283 env[1438]: time="2024-12-13T01:53:37.670242914Z" level=info msg="StopContainer for \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\" returns successfully"
Dec 13 01:53:37.670835 env[1438]: time="2024-12-13T01:53:37.670803518Z" level=info msg="StopPodSandbox for \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\""
Dec 13 01:53:37.670945 env[1438]: time="2024-12-13T01:53:37.670863618Z" level=info msg="Container to stop \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:37.670945 env[1438]: time="2024-12-13T01:53:37.670883618Z" level=info msg="Container to stop \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:37.670945 env[1438]: time="2024-12-13T01:53:37.670903018Z" level=info msg="Container to stop \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:37.670945 env[1438]: time="2024-12-13T01:53:37.670917118Z" level=info msg="Container to stop \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:37.670945 env[1438]: time="2024-12-13T01:53:37.670931718Z" level=info msg="Container to stop \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:53:37.673417 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e-shm.mount: Deactivated successfully.
Dec 13 01:53:37.680715 systemd[1]: cri-containerd-d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e.scope: Deactivated successfully.
Dec 13 01:53:37.701646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e-rootfs.mount: Deactivated successfully.
Dec 13 01:53:37.716487 kubelet[1922]: E1213 01:53:37.716303 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:37.721003 env[1438]: time="2024-12-13T01:53:37.720958054Z" level=info msg="shim disconnected" id=d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e
Dec 13 01:53:37.721160 env[1438]: time="2024-12-13T01:53:37.721141055Z" level=warning msg="cleaning up after shim disconnected" id=d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e namespace=k8s.io
Dec 13 01:53:37.721243 env[1438]: time="2024-12-13T01:53:37.721230256Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:37.728521 env[1438]: time="2024-12-13T01:53:37.728489105Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3406 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:37.728802 env[1438]: time="2024-12-13T01:53:37.728771907Z" level=info msg="TearDown network for sandbox \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" successfully"
Dec 13 01:53:37.728881 env[1438]: time="2024-12-13T01:53:37.728801207Z" level=info msg="StopPodSandbox for \"d39d42e9810203d69801027c826026e080fa983f710907944fa403f1cc26e68e\" returns successfully"
Dec 13 01:53:37.861417 kubelet[1922]: I1213 01:53:37.861264 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-hostproc\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.861417 kubelet[1922]: I1213 01:53:37.861340 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e436b850-5445-46fa-84e8-98bdf9565446-hubble-tls\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.861417 kubelet[1922]: I1213 01:53:37.861373 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-host-proc-sys-net\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862549 kubelet[1922]: I1213 01:53:37.862507 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-hostproc" (OuterVolumeSpecName: "hostproc") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.862720 kubelet[1922]: I1213 01:53:37.862689 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.862819 kubelet[1922]: I1213 01:53:37.862632 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cilium-run\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862819 kubelet[1922]: I1213 01:53:37.862761 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-etc-cni-netd\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862819 kubelet[1922]: I1213 01:53:37.862789 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-xtables-lock\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862997 kubelet[1922]: I1213 01:53:37.862822 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48njx\" (UniqueName: \"kubernetes.io/projected/e436b850-5445-46fa-84e8-98bdf9565446-kube-api-access-48njx\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862997 kubelet[1922]: I1213 01:53:37.862850 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-bpf-maps\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862997 kubelet[1922]: I1213 01:53:37.862876 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cilium-cgroup\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862997 kubelet[1922]: I1213 01:53:37.862907 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e436b850-5445-46fa-84e8-98bdf9565446-cilium-config-path\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862997 kubelet[1922]: I1213 01:53:37.862934 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-host-proc-sys-kernel\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.862997 kubelet[1922]: I1213 01:53:37.862964 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e436b850-5445-46fa-84e8-98bdf9565446-clustermesh-secrets\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.863329 kubelet[1922]: I1213 01:53:37.862995 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cni-path\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.863329 kubelet[1922]: I1213 01:53:37.863019 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-lib-modules\") pod \"e436b850-5445-46fa-84e8-98bdf9565446\" (UID: \"e436b850-5445-46fa-84e8-98bdf9565446\") "
Dec 13 01:53:37.863329 kubelet[1922]: I1213 01:53:37.863064 1922 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-hostproc\") on node \"10.200.8.15\" DevicePath \"\""
Dec 13 01:53:37.863329 kubelet[1922]: I1213 01:53:37.863081 1922 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cilium-run\") on node \"10.200.8.15\" DevicePath \"\""
Dec 13 01:53:37.863329 kubelet[1922]: I1213 01:53:37.863110 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.863329 kubelet[1922]: I1213 01:53:37.863140 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.863669 kubelet[1922]: I1213 01:53:37.863163 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.863669 kubelet[1922]: I1213 01:53:37.863185 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.864436 kubelet[1922]: I1213 01:53:37.864393 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.870422 systemd[1]: var-lib-kubelet-pods-e436b850\x2d5445\x2d46fa\x2d84e8\x2d98bdf9565446-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48njx.mount: Deactivated successfully.
Dec 13 01:53:37.873563 kubelet[1922]: I1213 01:53:37.871513 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e436b850-5445-46fa-84e8-98bdf9565446-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:53:37.873563 kubelet[1922]: I1213 01:53:37.871565 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cni-path" (OuterVolumeSpecName: "cni-path") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.873563 kubelet[1922]: I1213 01:53:37.871631 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e436b850-5445-46fa-84e8-98bdf9565446-kube-api-access-48njx" (OuterVolumeSpecName: "kube-api-access-48njx") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "kube-api-access-48njx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:53:37.873563 kubelet[1922]: I1213 01:53:37.871661 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.873563 kubelet[1922]: I1213 01:53:37.871681 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:53:37.874080 kubelet[1922]: I1213 01:53:37.874040 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e436b850-5445-46fa-84e8-98bdf9565446-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:53:37.874749 systemd[1]: var-lib-kubelet-pods-e436b850\x2d5445\x2d46fa\x2d84e8\x2d98bdf9565446-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:53:37.879238 systemd[1]: var-lib-kubelet-pods-e436b850\x2d5445\x2d46fa\x2d84e8\x2d98bdf9565446-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:53:37.880230 kubelet[1922]: I1213 01:53:37.880064 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e436b850-5445-46fa-84e8-98bdf9565446-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e436b850-5445-46fa-84e8-98bdf9565446" (UID: "e436b850-5445-46fa-84e8-98bdf9565446"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:53:37.901420 kubelet[1922]: I1213 01:53:37.901399 1922 scope.go:117] "RemoveContainer" containerID="311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb"
Dec 13 01:53:37.904782 systemd[1]: Removed slice kubepods-burstable-pode436b850_5445_46fa_84e8_98bdf9565446.slice.
Dec 13 01:53:37.905497 env[1438]: time="2024-12-13T01:53:37.905022589Z" level=info msg="RemoveContainer for \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\""
Dec 13 01:53:37.904904 systemd[1]: kubepods-burstable-pode436b850_5445_46fa_84e8_98bdf9565446.slice: Consumed 6.429s CPU time.
Dec 13 01:53:37.911962 env[1438]: time="2024-12-13T01:53:37.911930035Z" level=info msg="RemoveContainer for \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\" returns successfully"
Dec 13 01:53:37.912233 kubelet[1922]: I1213 01:53:37.912214 1922 scope.go:117] "RemoveContainer" containerID="595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8"
Dec 13 01:53:37.913173 env[1438]: time="2024-12-13T01:53:37.913134943Z" level=info msg="RemoveContainer for \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\""
Dec 13 01:53:37.922204 env[1438]: time="2024-12-13T01:53:37.922171004Z" level=info msg="RemoveContainer for \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\" returns successfully"
Dec 13 01:53:37.922391 kubelet[1922]: I1213 01:53:37.922357 1922 scope.go:117] "RemoveContainer" containerID="6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839"
Dec 13 01:53:37.923390 env[1438]: time="2024-12-13T01:53:37.923363412Z" level=info msg="RemoveContainer for \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\""
Dec 13 01:53:37.931030 env[1438]: time="2024-12-13T01:53:37.930994063Z" level=info msg="RemoveContainer for \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\" returns successfully"
Dec 13 01:53:37.931186 kubelet[1922]: I1213 01:53:37.931167 1922 scope.go:117] "RemoveContainer" containerID="2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5"
Dec 13 01:53:37.932225 env[1438]: time="2024-12-13T01:53:37.932198971Z" level=info msg="RemoveContainer for \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\""
Dec 13 01:53:37.938857 env[1438]: time="2024-12-13T01:53:37.938822916Z" level=info msg="RemoveContainer for \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\" returns successfully"
Dec 13 01:53:37.938995 kubelet[1922]: I1213 01:53:37.938974 1922 scope.go:117] "RemoveContainer" containerID="5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1"
Dec 13 01:53:37.939908 env[1438]: time="2024-12-13T01:53:37.939882423Z" level=info msg="RemoveContainer for \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\""
Dec 13 01:53:37.947561 env[1438]: time="2024-12-13T01:53:37.947527374Z" level=info msg="RemoveContainer for \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\" returns successfully"
Dec 13 01:53:37.947696 kubelet[1922]: I1213 01:53:37.947676 1922 scope.go:117] "RemoveContainer" containerID="311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb"
Dec 13 01:53:37.947997 env[1438]: time="2024-12-13T01:53:37.947931977Z" level=error msg="ContainerStatus for \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\": not found"
Dec 13 01:53:37.948180 kubelet[1922]: E1213 01:53:37.948158 1922 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\": not found" containerID="311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb"
Dec 13 01:53:37.948282 kubelet[1922]: I1213 01:53:37.948189 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb"} err="failed to get container status \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"311b8c352a4c615f17640da979565c1f6d07172c62d0b13520781d3cddec97bb\": not found"
Dec 13 01:53:37.948367 kubelet[1922]: I1213 01:53:37.948285 1922 scope.go:117] "RemoveContainer" containerID="595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8"
Dec 13 01:53:37.948526 env[1438]: time="2024-12-13T01:53:37.948471280Z" level=error msg="ContainerStatus for \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\": not found"
Dec 13 01:53:37.948621 kubelet[1922]: E1213 01:53:37.948599 1922 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\": not found" containerID="595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8"
Dec 13 01:53:37.948694 kubelet[1922]: I1213 01:53:37.948630 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8"} err="failed to get container status \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\": rpc error: code = NotFound desc = an error occurred when try to find container \"595e888129540f8dc470e5255eb4bf8198b7d3355cca9cd4c7072dbee299eee8\": not found"
Dec 13 01:53:37.948694 kubelet[1922]: I1213 01:53:37.948655 1922 scope.go:117] "RemoveContainer" containerID="6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839"
Dec 13 01:53:37.948930 env[1438]: time="2024-12-13T01:53:37.948883783Z" level=error msg="ContainerStatus for \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\": not found"
Dec 13 01:53:37.949053 kubelet[1922]: E1213 01:53:37.949032 1922 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\": not found" containerID="6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839"
Dec 13 01:53:37.949120 kubelet[1922]: I1213 01:53:37.949064 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839"} err="failed to get container status \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f8db22100ccb22651083b25eb45ea96e04f3b3d1c0f6697924f7c8df6813839\": not found"
Dec 13 01:53:37.949120 kubelet[1922]: I1213 01:53:37.949084 1922 scope.go:117] "RemoveContainer" containerID="2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5"
Dec 13 01:53:37.949292 env[1438]: time="2024-12-13T01:53:37.949246986Z" level=error msg="ContainerStatus for \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\": not found"
Dec 13 01:53:37.949426 kubelet[1922]: E1213 01:53:37.949406 1922 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\": not found" containerID="2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5"
Dec 13 01:53:37.949485 kubelet[1922]: I1213 01:53:37.949438 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5"} err="failed to get container status \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"2481f1294b60ff02e75bc10c143a2f76eaf0855e83edf33e6b44467679497bd5\": not found" Dec 13 01:53:37.949485 kubelet[1922]: I1213 01:53:37.949458 1922 scope.go:117] "RemoveContainer" containerID="5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1" Dec 13 01:53:37.949652 env[1438]: time="2024-12-13T01:53:37.949608688Z" level=error msg="ContainerStatus for \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\": not found" Dec 13 01:53:37.949765 kubelet[1922]: E1213 01:53:37.949746 1922 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\": not found" containerID="5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1" Dec 13 01:53:37.949831 kubelet[1922]: I1213 01:53:37.949776 1922 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1"} err="failed to get container status \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b4b673cfec9e1e86ea7e5618d00e6cec65bbe1615113957344aa77b202227f1\": not found" Dec 13 01:53:37.964077 kubelet[1922]: I1213 01:53:37.964042 1922 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-host-proc-sys-net\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964077 kubelet[1922]: I1213 01:53:37.964076 1922 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-etc-cni-netd\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964321 kubelet[1922]: I1213 01:53:37.964092 1922 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-xtables-lock\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964321 kubelet[1922]: I1213 01:53:37.964105 1922 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e436b850-5445-46fa-84e8-98bdf9565446-hubble-tls\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964321 kubelet[1922]: I1213 01:53:37.964121 1922 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-48njx\" (UniqueName: \"kubernetes.io/projected/e436b850-5445-46fa-84e8-98bdf9565446-kube-api-access-48njx\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964321 kubelet[1922]: I1213 01:53:37.964150 1922 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cilium-cgroup\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964321 kubelet[1922]: I1213 01:53:37.964167 1922 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e436b850-5445-46fa-84e8-98bdf9565446-cilium-config-path\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964321 kubelet[1922]: I1213 01:53:37.964180 1922 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-host-proc-sys-kernel\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964321 kubelet[1922]: I1213 01:53:37.964193 1922 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-bpf-maps\") 
on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964321 kubelet[1922]: I1213 01:53:37.964207 1922 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e436b850-5445-46fa-84e8-98bdf9565446-clustermesh-secrets\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964580 kubelet[1922]: I1213 01:53:37.964218 1922 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-cni-path\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:37.964580 kubelet[1922]: I1213 01:53:37.964232 1922 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e436b850-5445-46fa-84e8-98bdf9565446-lib-modules\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:38.589691 kubelet[1922]: I1213 01:53:38.589636 1922 topology_manager.go:215] "Topology Admit Handler" podUID="9d6dd9b7-68b1-4720-86e3-591bb20c5b8f" podNamespace="kube-system" podName="cilium-operator-599987898-8qdxg" Dec 13 01:53:38.589932 kubelet[1922]: E1213 01:53:38.589897 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e436b850-5445-46fa-84e8-98bdf9565446" containerName="apply-sysctl-overwrites" Dec 13 01:53:38.589932 kubelet[1922]: E1213 01:53:38.589918 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e436b850-5445-46fa-84e8-98bdf9565446" containerName="clean-cilium-state" Dec 13 01:53:38.589932 kubelet[1922]: E1213 01:53:38.589930 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e436b850-5445-46fa-84e8-98bdf9565446" containerName="cilium-agent" Dec 13 01:53:38.590158 kubelet[1922]: E1213 01:53:38.589942 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e436b850-5445-46fa-84e8-98bdf9565446" containerName="mount-cgroup" Dec 13 01:53:38.590158 kubelet[1922]: E1213 01:53:38.589952 1922 cpu_manager.go:395] "RemoveStaleState: 
removing container" podUID="e436b850-5445-46fa-84e8-98bdf9565446" containerName="mount-bpf-fs" Dec 13 01:53:38.590158 kubelet[1922]: I1213 01:53:38.589978 1922 memory_manager.go:354] "RemoveStaleState removing state" podUID="e436b850-5445-46fa-84e8-98bdf9565446" containerName="cilium-agent" Dec 13 01:53:38.595858 systemd[1]: Created slice kubepods-besteffort-pod9d6dd9b7_68b1_4720_86e3_591bb20c5b8f.slice. Dec 13 01:53:38.603132 kubelet[1922]: I1213 01:53:38.603102 1922 topology_manager.go:215] "Topology Admit Handler" podUID="b22f97a0-507d-4bc7-ac36-8a96b7dacf41" podNamespace="kube-system" podName="cilium-9rgjg" Dec 13 01:53:38.607939 systemd[1]: Created slice kubepods-burstable-podb22f97a0_507d_4bc7_ac36_8a96b7dacf41.slice. Dec 13 01:53:38.717393 kubelet[1922]: E1213 01:53:38.717335 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:38.768983 kubelet[1922]: I1213 01:53:38.768922 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d6dd9b7-68b1-4720-86e3-591bb20c5b8f-cilium-config-path\") pod \"cilium-operator-599987898-8qdxg\" (UID: \"9d6dd9b7-68b1-4720-86e3-591bb20c5b8f\") " pod="kube-system/cilium-operator-599987898-8qdxg" Dec 13 01:53:38.768983 kubelet[1922]: I1213 01:53:38.768979 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-hostproc\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769283 kubelet[1922]: I1213 01:53:38.769010 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-host-proc-sys-net\") pod \"cilium-9rgjg\" 
(UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769283 kubelet[1922]: I1213 01:53:38.769038 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt8n\" (UniqueName: \"kubernetes.io/projected/9d6dd9b7-68b1-4720-86e3-591bb20c5b8f-kube-api-access-zmt8n\") pod \"cilium-operator-599987898-8qdxg\" (UID: \"9d6dd9b7-68b1-4720-86e3-591bb20c5b8f\") " pod="kube-system/cilium-operator-599987898-8qdxg" Dec 13 01:53:38.769283 kubelet[1922]: I1213 01:53:38.769061 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-run\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769283 kubelet[1922]: I1213 01:53:38.769082 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-cgroup\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769283 kubelet[1922]: I1213 01:53:38.769105 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cni-path\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769593 kubelet[1922]: I1213 01:53:38.769135 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-ipsec-secrets\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 
13 01:53:38.769593 kubelet[1922]: I1213 01:53:38.769159 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-etc-cni-netd\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769593 kubelet[1922]: I1213 01:53:38.769186 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-bpf-maps\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769593 kubelet[1922]: I1213 01:53:38.769210 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-lib-modules\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769593 kubelet[1922]: I1213 01:53:38.769237 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-xtables-lock\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769593 kubelet[1922]: I1213 01:53:38.769266 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-clustermesh-secrets\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769787 kubelet[1922]: I1213 01:53:38.769294 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-config-path\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769787 kubelet[1922]: I1213 01:53:38.769347 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-host-proc-sys-kernel\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769787 kubelet[1922]: I1213 01:53:38.769380 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2qxg\" (UniqueName: \"kubernetes.io/projected/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-kube-api-access-v2qxg\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.769787 kubelet[1922]: I1213 01:53:38.769406 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-hubble-tls\") pod \"cilium-9rgjg\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " pod="kube-system/cilium-9rgjg" Dec 13 01:53:38.915977 env[1438]: time="2024-12-13T01:53:38.915517890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9rgjg,Uid:b22f97a0-507d-4bc7-ac36-8a96b7dacf41,Namespace:kube-system,Attempt:0,}" Dec 13 01:53:38.946770 env[1438]: time="2024-12-13T01:53:38.946694896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:38.947010 env[1438]: time="2024-12-13T01:53:38.946951198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:38.947010 env[1438]: time="2024-12-13T01:53:38.946977198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:38.947401 env[1438]: time="2024-12-13T01:53:38.947300300Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab pid=3435 runtime=io.containerd.runc.v2 Dec 13 01:53:38.961143 systemd[1]: Started cri-containerd-30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab.scope. Dec 13 01:53:38.987378 env[1438]: time="2024-12-13T01:53:38.987335365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9rgjg,Uid:b22f97a0-507d-4bc7-ac36-8a96b7dacf41,Namespace:kube-system,Attempt:0,} returns sandbox id \"30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab\"" Dec 13 01:53:38.992586 env[1438]: time="2024-12-13T01:53:38.992546300Z" level=info msg="CreateContainer within sandbox \"30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:53:39.028649 env[1438]: time="2024-12-13T01:53:39.028603936Z" level=info msg="CreateContainer within sandbox \"30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a\"" Dec 13 01:53:39.029163 env[1438]: time="2024-12-13T01:53:39.029133540Z" level=info msg="StartContainer for \"3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a\"" Dec 13 01:53:39.045412 systemd[1]: Started cri-containerd-3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a.scope. 
Dec 13 01:53:39.056415 systemd[1]: cri-containerd-3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a.scope: Deactivated successfully. Dec 13 01:53:39.119044 env[1438]: time="2024-12-13T01:53:39.118982127Z" level=info msg="shim disconnected" id=3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a Dec 13 01:53:39.119044 env[1438]: time="2024-12-13T01:53:39.119041127Z" level=warning msg="cleaning up after shim disconnected" id=3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a namespace=k8s.io Dec 13 01:53:39.119044 env[1438]: time="2024-12-13T01:53:39.119052027Z" level=info msg="cleaning up dead shim" Dec 13 01:53:39.126944 env[1438]: time="2024-12-13T01:53:39.126902579Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3496 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T01:53:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 01:53:39.127250 env[1438]: time="2024-12-13T01:53:39.127154480Z" level=error msg="copy shim log" error="read /proc/self/fd/53: file already closed" Dec 13 01:53:39.127499 env[1438]: time="2024-12-13T01:53:39.127458382Z" level=error msg="Failed to pipe stderr of container \"3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a\"" error="reading from a closed fifo" Dec 13 01:53:39.127600 env[1438]: time="2024-12-13T01:53:39.127461482Z" level=error msg="Failed to pipe stdout of container \"3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a\"" error="reading from a closed fifo" Dec 13 01:53:39.131566 env[1438]: time="2024-12-13T01:53:39.131501509Z" level=error msg="StartContainer for \"3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a\" failed" error="failed to create containerd 
task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 01:53:39.131788 kubelet[1922]: E1213 01:53:39.131739 1922 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a" Dec 13 01:53:39.132235 kubelet[1922]: E1213 01:53:39.132163 1922 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 01:53:39.132235 kubelet[1922]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 01:53:39.132235 kubelet[1922]: rm /hostbin/cilium-mount Dec 13 01:53:39.132393 kubelet[1922]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2qxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9rgjg_kube-system(b22f97a0-507d-4bc7-ac36-8a96b7dacf41): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 01:53:39.132393 kubelet[1922]: E1213 01:53:39.132207 1922 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9rgjg" podUID="b22f97a0-507d-4bc7-ac36-8a96b7dacf41" Dec 13 01:53:39.199821 env[1438]: time="2024-12-13T01:53:39.199688654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8qdxg,Uid:9d6dd9b7-68b1-4720-86e3-591bb20c5b8f,Namespace:kube-system,Attempt:0,}" Dec 13 01:53:39.243029 env[1438]: time="2024-12-13T01:53:39.242941137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:39.243029 env[1438]: time="2024-12-13T01:53:39.242982337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:39.243299 env[1438]: time="2024-12-13T01:53:39.243243439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:39.243575 env[1438]: time="2024-12-13T01:53:39.243525941Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47657070dc8089004a853693e90b9f37408ec3e568ec1f2a0ef69ee17bc67d08 pid=3516 runtime=io.containerd.runc.v2 Dec 13 01:53:39.256734 systemd[1]: Started cri-containerd-47657070dc8089004a853693e90b9f37408ec3e568ec1f2a0ef69ee17bc67d08.scope. 
Dec 13 01:53:39.297544 env[1438]: time="2024-12-13T01:53:39.297491094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8qdxg,Uid:9d6dd9b7-68b1-4720-86e3-591bb20c5b8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"47657070dc8089004a853693e90b9f37408ec3e568ec1f2a0ef69ee17bc67d08\"" Dec 13 01:53:39.299297 env[1438]: time="2024-12-13T01:53:39.299262405Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:53:39.718562 kubelet[1922]: E1213 01:53:39.718504 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:39.737935 kubelet[1922]: I1213 01:53:39.737481 1922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e436b850-5445-46fa-84e8-98bdf9565446" path="/var/lib/kubelet/pods/e436b850-5445-46fa-84e8-98bdf9565446/volumes" Dec 13 01:53:39.911430 env[1438]: time="2024-12-13T01:53:39.911392607Z" level=info msg="StopPodSandbox for \"30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab\"" Dec 13 01:53:39.911642 env[1438]: time="2024-12-13T01:53:39.911615408Z" level=info msg="Container to stop \"3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:53:39.915684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab-shm.mount: Deactivated successfully. Dec 13 01:53:39.922479 systemd[1]: cri-containerd-30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab.scope: Deactivated successfully. Dec 13 01:53:39.952038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab-rootfs.mount: Deactivated successfully. 
Dec 13 01:53:39.974484 env[1438]: time="2024-12-13T01:53:39.974070516Z" level=info msg="shim disconnected" id=30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab Dec 13 01:53:39.974484 env[1438]: time="2024-12-13T01:53:39.974302518Z" level=warning msg="cleaning up after shim disconnected" id=30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab namespace=k8s.io Dec 13 01:53:39.974484 env[1438]: time="2024-12-13T01:53:39.974337818Z" level=info msg="cleaning up dead shim" Dec 13 01:53:39.982267 env[1438]: time="2024-12-13T01:53:39.982233870Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3568 runtime=io.containerd.runc.v2\n" Dec 13 01:53:39.982605 env[1438]: time="2024-12-13T01:53:39.982572872Z" level=info msg="TearDown network for sandbox \"30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab\" successfully" Dec 13 01:53:39.982708 env[1438]: time="2024-12-13T01:53:39.982606272Z" level=info msg="StopPodSandbox for \"30cf21d65a3c58e82439235d498582a5b3a3be0620e606b3b152c07ae56a74ab\" returns successfully" Dec 13 01:53:40.084619 kubelet[1922]: I1213 01:53:40.084563 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-config-path\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084619 kubelet[1922]: I1213 01:53:40.084618 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2qxg\" (UniqueName: \"kubernetes.io/projected/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-kube-api-access-v2qxg\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084653 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-hubble-tls\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084676 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-run\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084701 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-ipsec-secrets\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084728 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-clustermesh-secrets\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084752 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cni-path\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084777 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-etc-cni-netd\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 
01:53:40.084799 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-xtables-lock\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084823 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-host-proc-sys-kernel\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084849 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-cgroup\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084877 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-bpf-maps\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084904 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-hostproc\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.084922 kubelet[1922]: I1213 01:53:40.084929 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-host-proc-sys-net\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" 
(UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.085596 kubelet[1922]: I1213 01:53:40.084955 1922 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-lib-modules\") pod \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\" (UID: \"b22f97a0-507d-4bc7-ac36-8a96b7dacf41\") " Dec 13 01:53:40.085596 kubelet[1922]: I1213 01:53:40.085057 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.088067 kubelet[1922]: I1213 01:53:40.085760 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.088404 kubelet[1922]: I1213 01:53:40.088369 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:53:40.088525 kubelet[1922]: I1213 01:53:40.088436 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.088525 kubelet[1922]: I1213 01:53:40.088473 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.088525 kubelet[1922]: I1213 01:53:40.088502 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.088697 kubelet[1922]: I1213 01:53:40.088529 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.088697 kubelet[1922]: I1213 01:53:40.088556 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-hostproc" (OuterVolumeSpecName: "hostproc") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.088697 kubelet[1922]: I1213 01:53:40.088586 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.089837 kubelet[1922]: I1213 01:53:40.089808 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.095865 systemd[1]: var-lib-kubelet-pods-b22f97a0\x2d507d\x2d4bc7\x2dac36\x2d8a96b7dacf41-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv2qxg.mount: Deactivated successfully. Dec 13 01:53:40.100473 kubelet[1922]: I1213 01:53:40.100445 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:53:40.100666 kubelet[1922]: I1213 01:53:40.100646 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:53:40.100790 kubelet[1922]: I1213 01:53:40.100773 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cni-path" (OuterVolumeSpecName: "cni-path") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:53:40.101502 systemd[1]: var-lib-kubelet-pods-b22f97a0\x2d507d\x2d4bc7\x2dac36\x2d8a96b7dacf41-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 01:53:40.103002 kubelet[1922]: I1213 01:53:40.102973 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-kube-api-access-v2qxg" (OuterVolumeSpecName: "kube-api-access-v2qxg") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "kube-api-access-v2qxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:53:40.104497 kubelet[1922]: I1213 01:53:40.104455 1922 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b22f97a0-507d-4bc7-ac36-8a96b7dacf41" (UID: "b22f97a0-507d-4bc7-ac36-8a96b7dacf41"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:53:40.186205 kubelet[1922]: I1213 01:53:40.186146 1922 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-cgroup\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186205 kubelet[1922]: I1213 01:53:40.186192 1922 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-bpf-maps\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186205 kubelet[1922]: I1213 01:53:40.186209 1922 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-hostproc\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186223 1922 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-host-proc-sys-net\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186237 1922 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-lib-modules\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186254 1922 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-clustermesh-secrets\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186267 1922 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-config-path\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 
01:53:40.186280 1922 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v2qxg\" (UniqueName: \"kubernetes.io/projected/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-kube-api-access-v2qxg\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186291 1922 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-hubble-tls\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186333 1922 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-run\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186347 1922 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cilium-ipsec-secrets\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186360 1922 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-cni-path\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186372 1922 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-etc-cni-netd\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186384 1922 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-xtables-lock\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.186537 kubelet[1922]: I1213 01:53:40.186398 1922 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/b22f97a0-507d-4bc7-ac36-8a96b7dacf41-host-proc-sys-kernel\") on node \"10.200.8.15\" DevicePath \"\"" Dec 13 01:53:40.719493 kubelet[1922]: E1213 01:53:40.719407 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:40.764772 kubelet[1922]: E1213 01:53:40.764727 1922 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:53:40.877669 systemd[1]: var-lib-kubelet-pods-b22f97a0\x2d507d\x2d4bc7\x2dac36\x2d8a96b7dacf41-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:53:40.877789 systemd[1]: var-lib-kubelet-pods-b22f97a0\x2d507d\x2d4bc7\x2dac36\x2d8a96b7dacf41-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:53:40.917333 kubelet[1922]: I1213 01:53:40.915200 1922 scope.go:117] "RemoveContainer" containerID="3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a" Dec 13 01:53:40.920045 systemd[1]: Removed slice kubepods-burstable-podb22f97a0_507d_4bc7_ac36_8a96b7dacf41.slice. 
Dec 13 01:53:40.921429 env[1438]: time="2024-12-13T01:53:40.921397534Z" level=info msg="RemoveContainer for \"3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a\"" Dec 13 01:53:40.955203 env[1438]: time="2024-12-13T01:53:40.955156351Z" level=info msg="RemoveContainer for \"3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a\" returns successfully" Dec 13 01:53:40.969947 kubelet[1922]: I1213 01:53:40.969511 1922 topology_manager.go:215] "Topology Admit Handler" podUID="45591908-d42f-4295-afeb-f4e95db114ac" podNamespace="kube-system" podName="cilium-qf2c6" Dec 13 01:53:40.969947 kubelet[1922]: E1213 01:53:40.969569 1922 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b22f97a0-507d-4bc7-ac36-8a96b7dacf41" containerName="mount-cgroup" Dec 13 01:53:40.969947 kubelet[1922]: I1213 01:53:40.969600 1922 memory_manager.go:354] "RemoveStaleState removing state" podUID="b22f97a0-507d-4bc7-ac36-8a96b7dacf41" containerName="mount-cgroup" Dec 13 01:53:40.974961 systemd[1]: Created slice kubepods-burstable-pod45591908_d42f_4295_afeb_f4e95db114ac.slice. 
Dec 13 01:53:41.092475 kubelet[1922]: I1213 01:53:41.092428 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-cilium-run\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092475 kubelet[1922]: I1213 01:53:41.092477 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-cni-path\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092501 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45591908-d42f-4295-afeb-f4e95db114ac-hubble-tls\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092524 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-etc-cni-netd\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092546 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-xtables-lock\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092564 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45591908-d42f-4295-afeb-f4e95db114ac-clustermesh-secrets\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092584 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45591908-d42f-4295-afeb-f4e95db114ac-cilium-ipsec-secrets\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092604 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-host-proc-sys-net\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092622 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl852\" (UniqueName: \"kubernetes.io/projected/45591908-d42f-4295-afeb-f4e95db114ac-kube-api-access-nl852\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092647 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-bpf-maps\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092667 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-lib-modules\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092687 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-host-proc-sys-kernel\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092714 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-hostproc\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.092739 kubelet[1922]: I1213 01:53:41.092734 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45591908-d42f-4295-afeb-f4e95db114ac-cilium-cgroup\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.093180 kubelet[1922]: I1213 01:53:41.092761 1922 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45591908-d42f-4295-afeb-f4e95db114ac-cilium-config-path\") pod \"cilium-qf2c6\" (UID: \"45591908-d42f-4295-afeb-f4e95db114ac\") " pod="kube-system/cilium-qf2c6" Dec 13 01:53:41.283103 env[1438]: time="2024-12-13T01:53:41.282597643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qf2c6,Uid:45591908-d42f-4295-afeb-f4e95db114ac,Namespace:kube-system,Attempt:0,}" Dec 13 01:53:41.454281 env[1438]: time="2024-12-13T01:53:41.454177937Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:53:41.454465 env[1438]: time="2024-12-13T01:53:41.454301938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:53:41.454465 env[1438]: time="2024-12-13T01:53:41.454350638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:53:41.454706 env[1438]: time="2024-12-13T01:53:41.454647640Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103 pid=3597 runtime=io.containerd.runc.v2 Dec 13 01:53:41.468445 systemd[1]: Started cri-containerd-edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103.scope. Dec 13 01:53:41.496247 env[1438]: time="2024-12-13T01:53:41.495563201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qf2c6,Uid:45591908-d42f-4295-afeb-f4e95db114ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\"" Dec 13 01:53:41.498984 env[1438]: time="2024-12-13T01:53:41.498940122Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:53:41.551667 env[1438]: time="2024-12-13T01:53:41.551619858Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141\"" Dec 13 01:53:41.552446 env[1438]: time="2024-12-13T01:53:41.552414963Z" level=info msg="StartContainer for 
\"51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141\"" Dec 13 01:53:41.569002 systemd[1]: Started cri-containerd-51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141.scope. Dec 13 01:53:41.623468 env[1438]: time="2024-12-13T01:53:41.623403716Z" level=info msg="StartContainer for \"51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141\" returns successfully" Dec 13 01:53:41.628331 systemd[1]: cri-containerd-51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141.scope: Deactivated successfully. Dec 13 01:53:41.714229 env[1438]: time="2024-12-13T01:53:41.714178494Z" level=info msg="shim disconnected" id=51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141 Dec 13 01:53:41.714603 env[1438]: time="2024-12-13T01:53:41.714581597Z" level=warning msg="cleaning up after shim disconnected" id=51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141 namespace=k8s.io Dec 13 01:53:41.714699 env[1438]: time="2024-12-13T01:53:41.714685998Z" level=info msg="cleaning up dead shim" Dec 13 01:53:41.719820 kubelet[1922]: E1213 01:53:41.719749 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:41.738263 kubelet[1922]: I1213 01:53:41.737930 1922 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22f97a0-507d-4bc7-ac36-8a96b7dacf41" path="/var/lib/kubelet/pods/b22f97a0-507d-4bc7-ac36-8a96b7dacf41/volumes" Dec 13 01:53:41.739686 env[1438]: time="2024-12-13T01:53:41.739655257Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3683 runtime=io.containerd.runc.v2\n" Dec 13 01:53:41.922272 env[1438]: time="2024-12-13T01:53:41.921738418Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:53:41.948705 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713003324.mount: Deactivated successfully. Dec 13 01:53:41.960930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2060837406.mount: Deactivated successfully. Dec 13 01:53:41.972245 env[1438]: time="2024-12-13T01:53:41.972197539Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440\"" Dec 13 01:53:41.973247 env[1438]: time="2024-12-13T01:53:41.973214146Z" level=info msg="StartContainer for \"8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440\"" Dec 13 01:53:41.994280 systemd[1]: Started cri-containerd-8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440.scope. Dec 13 01:53:42.037221 systemd[1]: cri-containerd-8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440.scope: Deactivated successfully. 
Dec 13 01:53:42.037992 env[1438]: time="2024-12-13T01:53:42.037949556Z" level=info msg="StartContainer for \"8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440\" returns successfully" Dec 13 01:53:42.256433 kubelet[1922]: W1213 01:53:42.224768 1922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb22f97a0_507d_4bc7_ac36_8a96b7dacf41.slice/cri-containerd-3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a.scope WatchSource:0}: container "3d33e4bb1288e0fbe9617020e2fc5a514668fd15b1b388759e8814a6c11f364a" in namespace "k8s.io": not found Dec 13 01:53:42.281712 env[1438]: time="2024-12-13T01:53:42.281655790Z" level=info msg="shim disconnected" id=8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440 Dec 13 01:53:42.281712 env[1438]: time="2024-12-13T01:53:42.281711391Z" level=warning msg="cleaning up after shim disconnected" id=8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440 namespace=k8s.io Dec 13 01:53:42.281712 env[1438]: time="2024-12-13T01:53:42.281722791Z" level=info msg="cleaning up dead shim" Dec 13 01:53:42.291546 env[1438]: time="2024-12-13T01:53:42.291501252Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3747 runtime=io.containerd.runc.v2\n" Dec 13 01:53:42.720515 kubelet[1922]: E1213 01:53:42.720452 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:42.827417 env[1438]: time="2024-12-13T01:53:42.827274127Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:42.833865 env[1438]: time="2024-12-13T01:53:42.833773768Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:42.838113 env[1438]: time="2024-12-13T01:53:42.838077995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:53:42.838566 env[1438]: time="2024-12-13T01:53:42.838535398Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:53:42.840785 env[1438]: time="2024-12-13T01:53:42.840757912Z" level=info msg="CreateContainer within sandbox \"47657070dc8089004a853693e90b9f37408ec3e568ec1f2a0ef69ee17bc67d08\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:53:42.877851 env[1438]: time="2024-12-13T01:53:42.877810445Z" level=info msg="CreateContainer within sandbox \"47657070dc8089004a853693e90b9f37408ec3e568ec1f2a0ef69ee17bc67d08\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2a99ff28a8055ea3344e7ebd99a1f160599bde19b5e86a9190de4f010081e86f\"" Dec 13 01:53:42.878751 env[1438]: time="2024-12-13T01:53:42.878723251Z" level=info msg="StartContainer for \"2a99ff28a8055ea3344e7ebd99a1f160599bde19b5e86a9190de4f010081e86f\"" Dec 13 01:53:42.903943 systemd[1]: run-containerd-runc-k8s.io-2a99ff28a8055ea3344e7ebd99a1f160599bde19b5e86a9190de4f010081e86f-runc.hZXvbi.mount: Deactivated successfully. Dec 13 01:53:42.908773 systemd[1]: Started cri-containerd-2a99ff28a8055ea3344e7ebd99a1f160599bde19b5e86a9190de4f010081e86f.scope. 
Dec 13 01:53:42.930748 env[1438]: time="2024-12-13T01:53:42.930483277Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:53:42.952037 env[1438]: time="2024-12-13T01:53:42.951999212Z" level=info msg="StartContainer for \"2a99ff28a8055ea3344e7ebd99a1f160599bde19b5e86a9190de4f010081e86f\" returns successfully" Dec 13 01:53:42.986218 env[1438]: time="2024-12-13T01:53:42.986101427Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1\"" Dec 13 01:53:42.987091 env[1438]: time="2024-12-13T01:53:42.987051033Z" level=info msg="StartContainer for \"5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1\"" Dec 13 01:53:43.006256 systemd[1]: Started cri-containerd-5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1.scope. Dec 13 01:53:43.048612 env[1438]: time="2024-12-13T01:53:43.048569017Z" level=info msg="StartContainer for \"5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1\" returns successfully" Dec 13 01:53:43.053515 systemd[1]: cri-containerd-5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1.scope: Deactivated successfully. 
Dec 13 01:53:43.259533 env[1438]: time="2024-12-13T01:53:43.259387529Z" level=info msg="shim disconnected" id=5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1 Dec 13 01:53:43.259533 env[1438]: time="2024-12-13T01:53:43.259446029Z" level=warning msg="cleaning up after shim disconnected" id=5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1 namespace=k8s.io Dec 13 01:53:43.259533 env[1438]: time="2024-12-13T01:53:43.259458529Z" level=info msg="cleaning up dead shim" Dec 13 01:53:43.266920 env[1438]: time="2024-12-13T01:53:43.266879576Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3844 runtime=io.containerd.runc.v2\n" Dec 13 01:53:43.720736 kubelet[1922]: E1213 01:53:43.720695 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:53:43.932021 env[1438]: time="2024-12-13T01:53:43.931969815Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:53:43.961080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443231281.mount: Deactivated successfully. 
Dec 13 01:53:43.962891 kubelet[1922]: I1213 01:53:43.962841 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-8qdxg" podStartSLOduration=2.422109705 podStartE2EDuration="5.962825307s" podCreationTimestamp="2024-12-13 01:53:38 +0000 UTC" firstStartedPulling="2024-12-13 01:53:39.298761402 +0000 UTC m=+74.746531930" lastFinishedPulling="2024-12-13 01:53:42.839477004 +0000 UTC m=+78.287247532" observedRunningTime="2024-12-13 01:53:43.962633106 +0000 UTC m=+79.410403634" watchObservedRunningTime="2024-12-13 01:53:43.962825307 +0000 UTC m=+79.410595835" Dec 13 01:53:43.983559 env[1438]: time="2024-12-13T01:53:43.983029832Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb\"" Dec 13 01:53:43.983871 env[1438]: time="2024-12-13T01:53:43.983839938Z" level=info msg="StartContainer for \"5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb\"" Dec 13 01:53:44.000725 systemd[1]: Started cri-containerd-5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb.scope. Dec 13 01:53:44.024093 systemd[1]: cri-containerd-5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb.scope: Deactivated successfully. 
Dec 13 01:53:44.028175 env[1438]: time="2024-12-13T01:53:44.028135111Z" level=info msg="StartContainer for \"5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb\" returns successfully"
Dec 13 01:53:44.061765 env[1438]: time="2024-12-13T01:53:44.061705718Z" level=info msg="shim disconnected" id=5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb
Dec 13 01:53:44.061765 env[1438]: time="2024-12-13T01:53:44.061760718Z" level=warning msg="cleaning up after shim disconnected" id=5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb namespace=k8s.io
Dec 13 01:53:44.062061 env[1438]: time="2024-12-13T01:53:44.061772618Z" level=info msg="cleaning up dead shim"
Dec 13 01:53:44.069016 env[1438]: time="2024-12-13T01:53:44.068975462Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:53:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3898 runtime=io.containerd.runc.v2\n"
Dec 13 01:53:44.721646 kubelet[1922]: E1213 01:53:44.721583 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:44.939279 env[1438]: time="2024-12-13T01:53:44.939229515Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:53:44.965482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160332028.mount: Deactivated successfully.
Dec 13 01:53:44.982371 env[1438]: time="2024-12-13T01:53:44.982238580Z" level=info msg="CreateContainer within sandbox \"edc8bd329e1040eac4470399652a5144e43ec9f0b80541c074871740855a6103\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1781c8149478b27ccae6be1420dc788f2c812f3a65694972830bc609eb4fdec9\""
Dec 13 01:53:44.983186 env[1438]: time="2024-12-13T01:53:44.983137886Z" level=info msg="StartContainer for \"1781c8149478b27ccae6be1420dc788f2c812f3a65694972830bc609eb4fdec9\""
Dec 13 01:53:45.004300 systemd[1]: Started cri-containerd-1781c8149478b27ccae6be1420dc788f2c812f3a65694972830bc609eb4fdec9.scope.
Dec 13 01:53:45.043980 env[1438]: time="2024-12-13T01:53:45.043929456Z" level=info msg="StartContainer for \"1781c8149478b27ccae6be1420dc788f2c812f3a65694972830bc609eb4fdec9\" returns successfully"
Dec 13 01:53:45.337937 kubelet[1922]: W1213 01:53:45.337825 1922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45591908_d42f_4295_afeb_f4e95db114ac.slice/cri-containerd-51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141.scope WatchSource:0}: task 51b7c152347f9ac391e3693d3283d0dcb0a44888af67db113551d951c29b3141 not found: not found
Dec 13 01:53:45.372349 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:53:45.663299 kubelet[1922]: E1213 01:53:45.663168 1922 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:45.722723 kubelet[1922]: E1213 01:53:45.722664 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:45.965546 kubelet[1922]: I1213 01:53:45.965216 1922 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qf2c6" podStartSLOduration=5.965197059 podStartE2EDuration="5.965197059s" podCreationTimestamp="2024-12-13 01:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:53:45.965141658 +0000 UTC m=+81.412912286" watchObservedRunningTime="2024-12-13 01:53:45.965197059 +0000 UTC m=+81.412967687"
Dec 13 01:53:46.723842 kubelet[1922]: E1213 01:53:46.723781 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:47.424972 systemd[1]: run-containerd-runc-k8s.io-1781c8149478b27ccae6be1420dc788f2c812f3a65694972830bc609eb4fdec9-runc.ti9uw4.mount: Deactivated successfully.
Dec 13 01:53:47.724480 kubelet[1922]: E1213 01:53:47.724296 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:48.057722 systemd-networkd[1581]: lxc_health: Link UP
Dec 13 01:53:48.065263 systemd-networkd[1581]: lxc_health: Gained carrier
Dec 13 01:53:48.065480 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 01:53:48.448693 kubelet[1922]: W1213 01:53:48.446053 1922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45591908_d42f_4295_afeb_f4e95db114ac.slice/cri-containerd-8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440.scope WatchSource:0}: task 8690a7a41f4375ac2b38c113166d2bd18b5708ed9b97c005bffbbc9d0a18f440 not found: not found
Dec 13 01:53:48.725611 kubelet[1922]: E1213 01:53:48.725483 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:49.727343 kubelet[1922]: E1213 01:53:49.727283 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:49.812585 systemd-networkd[1581]: lxc_health: Gained IPv6LL
Dec 13 01:53:50.728209 kubelet[1922]: E1213 01:53:50.728161 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:51.573000 kubelet[1922]: W1213 01:53:51.572952 1922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45591908_d42f_4295_afeb_f4e95db114ac.slice/cri-containerd-5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1.scope WatchSource:0}: task 5ef90ebfbf2d5397cadbda87538a079f1325dccc4dbfdfdf928effe104b9b9d1 not found: not found
Dec 13 01:53:51.729812 kubelet[1922]: E1213 01:53:51.729763 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:52.731186 kubelet[1922]: E1213 01:53:52.731127 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:53.732066 kubelet[1922]: E1213 01:53:53.732019 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:54.684617 kubelet[1922]: W1213 01:53:54.684554 1922 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45591908_d42f_4295_afeb_f4e95db114ac.slice/cri-containerd-5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb.scope WatchSource:0}: task 5d7486970cbfd4952990d0beda85036cd7c3ceddbdfdf03d24b935446a7b58cb not found: not found
Dec 13 01:53:54.733150 kubelet[1922]: E1213 01:53:54.733096 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:55.733718 kubelet[1922]: E1213 01:53:55.733656 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:56.734124 kubelet[1922]: E1213 01:53:56.734064 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:57.734984 kubelet[1922]: E1213 01:53:57.734900 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:53:58.735937 kubelet[1922]: E1213 01:53:58.735896 1922 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"