Dec 13 14:34:43.034455 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:34:43.034476 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:34:43.034485 kernel: BIOS-provided physical RAM map: Dec 13 14:34:43.034491 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 14:34:43.034496 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 13 14:34:43.034501 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Dec 13 14:34:43.034510 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Dec 13 14:34:43.034516 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 13 14:34:43.034521 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 13 14:34:43.034529 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 13 14:34:43.034536 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 13 14:34:43.034542 kernel: printk: bootconsole [earlyser0] enabled Dec 13 14:34:43.034548 kernel: NX (Execute Disable) protection: active Dec 13 14:34:43.034556 kernel: efi: EFI v2.70 by Microsoft Dec 13 14:34:43.034566 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 Dec 13 14:34:43.034575 kernel: random: crng init done Dec 13 14:34:43.034582 kernel: SMBIOS 3.1.0 present. 
Dec 13 14:34:43.034591 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Dec 13 14:34:43.034597 kernel: Hypervisor detected: Microsoft Hyper-V Dec 13 14:34:43.034606 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Dec 13 14:34:43.034612 kernel: Hyper-V Host Build:20348-10.0-1-0.1633 Dec 13 14:34:43.034620 kernel: Hyper-V: Nested features: 0x1e0101 Dec 13 14:34:43.034629 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 13 14:34:43.034635 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 13 14:34:43.034642 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 14:34:43.034651 kernel: tsc: Marking TSC unstable due to running on Hyper-V Dec 13 14:34:43.034658 kernel: tsc: Detected 2593.908 MHz processor Dec 13 14:34:43.034664 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:34:43.034670 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:34:43.034677 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Dec 13 14:34:43.034683 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:34:43.034692 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Dec 13 14:34:43.034701 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Dec 13 14:34:43.034707 kernel: Using GB pages for direct mapping Dec 13 14:34:43.034714 kernel: Secure boot disabled Dec 13 14:34:43.034722 kernel: ACPI: Early table checksum verification disabled Dec 13 14:34:43.034728 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 13 14:34:43.034735 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034745 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034751 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Dec 13 14:34:43.034764 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 13 14:34:43.034772 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034779 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034786 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034796 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034802 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034813 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034821 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:34:43.034828 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 13 14:34:43.034835 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Dec 13 14:34:43.034844 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 13 14:34:43.034851 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 13 14:34:43.037699 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 13 14:34:43.037723 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 14:34:43.037742 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] 
Dec 13 14:34:43.037755 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 14:34:43.037768 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 14:34:43.037781 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 14:34:43.037794 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 14:34:43.037807 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 14:34:43.037819 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 14:34:43.037832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 14:34:43.037845 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 14:34:43.037871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 14:34:43.037884 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 14:34:43.037896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 14:34:43.037909 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 14:34:43.037922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 14:34:43.037934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 14:34:43.037947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 14:34:43.037959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 14:34:43.037971 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 14:34:43.037987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 14:34:43.038000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 14:34:43.038013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 14:34:43.038026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 14:34:43.038039 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 14:34:43.038052 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 14:34:43.038065 kernel: Zone ranges: Dec 13 14:34:43.038078 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:34:43.038091 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 14:34:43.038106 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 14:34:43.038118 kernel: Movable zone start for each node Dec 13 14:34:43.038131 kernel: Early memory node ranges Dec 13 14:34:43.038144 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 14:34:43.038156 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 14:34:43.038169 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 14:34:43.038181 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 14:34:43.038194 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 14:34:43.038207 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:34:43.038223 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 14:34:43.038236 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Dec 13 14:34:43.038248 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 13 14:34:43.038261 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 14:34:43.038274 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 
14:34:43.038287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:34:43.038300 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:34:43.038312 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 14:34:43.038325 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:34:43.038340 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 14:34:43.038353 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 14:34:43.038366 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:34:43.038380 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:34:43.038393 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:34:43.038405 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:34:43.038418 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:34:43.038431 kernel: Hyper-V: PV spinlocks enabled Dec 13 14:34:43.038443 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:34:43.038459 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 14:34:43.038472 kernel: Policy zone: Normal Dec 13 14:34:43.038486 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:34:43.038499 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:34:43.038512 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 14:34:43.038525 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:34:43.038538 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:34:43.038551 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 308056K reserved, 0K cma-reserved) Dec 13 14:34:43.038567 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:34:43.038581 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:34:43.038602 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:34:43.038618 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:34:43.038633 kernel: rcu: RCU event tracing is enabled. Dec 13 14:34:43.038647 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:34:43.038661 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:34:43.038674 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:34:43.038688 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:34:43.038701 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:34:43.038714 kernel: Using NULL legacy PIC Dec 13 14:34:43.038731 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 14:34:43.038745 kernel: Console: colour dummy device 80x25 Dec 13 14:34:43.038758 kernel: printk: console [tty1] enabled Dec 13 14:34:43.038772 kernel: printk: console [ttyS0] enabled Dec 13 14:34:43.038785 kernel: printk: bootconsole [earlyser0] disabled Dec 13 14:34:43.038801 kernel: ACPI: Core revision 20210730 Dec 13 14:34:43.038815 kernel: Failed to register legacy timer interrupt Dec 13 14:34:43.038829 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:34:43.038842 kernel: Hyper-V: Using IPI hypercalls Dec 13 14:34:43.038856 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Dec 13 14:34:43.038881 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 14:34:43.038890 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 14:34:43.038899 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:34:43.038909 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:34:43.038918 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:34:43.038930 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:34:43.038941 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 14:34:43.038948 kernel: RETBleed: Vulnerable Dec 13 14:34:43.038958 kernel: Speculative Store Bypass: Vulnerable Dec 13 14:34:43.038966 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:34:43.038974 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:34:43.038984 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 14:34:43.038992 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:34:43.039001 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:34:43.039010 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:34:43.039021 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 14:34:43.039031 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 14:34:43.039039 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 14:34:43.039049 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:34:43.039058 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 14:34:43.039068 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 14:34:43.039075 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 14:34:43.039082 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 14:34:43.039091 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:34:43.039098 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:34:43.039104 kernel: LSM: Security Framework initializing Dec 13 14:34:43.039111 kernel: SELinux: Initializing. 
Dec 13 14:34:43.039121 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:34:43.039128 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:34:43.039135 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 14:34:43.039142 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 14:34:43.039149 kernel: signal: max sigframe size: 3632 Dec 13 14:34:43.039156 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:34:43.039164 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 14:34:43.039171 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:34:43.039178 kernel: x86: Booting SMP configuration: Dec 13 14:34:43.039196 kernel: .... node #0, CPUs: #1 Dec 13 14:34:43.039206 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 14:34:43.039217 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 14:34:43.039225 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:34:43.039235 kernel: smpboot: Max logical packages: 1 Dec 13 14:34:43.039243 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Dec 13 14:34:43.039252 kernel: devtmpfs: initialized Dec 13 14:34:43.039261 kernel: x86/mm: Memory block size: 128MB Dec 13 14:34:43.039269 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 14:34:43.039283 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:34:43.039294 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:34:43.039304 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:34:43.039317 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:34:43.039330 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:34:43.039341 kernel: audit: type=2000 audit(1734100481.024:1): state=initialized audit_enabled=0 res=1 Dec 13 14:34:43.039352 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:34:43.039364 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:34:43.039371 kernel: cpuidle: using governor menu Dec 13 14:34:43.039383 kernel: ACPI: bus type PCI registered Dec 13 14:34:43.039393 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:34:43.039401 kernel: dca service started, version 1.12.1 Dec 13 14:34:43.039412 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:34:43.039419 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:34:43.039426 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:34:43.039436 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:34:43.039445 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:34:43.039454 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:34:43.039463 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:34:43.039473 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:34:43.039481 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:34:43.039491 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:34:43.039498 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:34:43.039506 kernel: ACPI: Interpreter enabled Dec 13 14:34:43.039516 kernel: ACPI: PM: (supports S0 S5) Dec 13 14:34:43.039524 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:34:43.039533 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:34:43.039542 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 14:34:43.039553 kernel: iommu: Default domain type: Translated Dec 13 14:34:43.039560 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:34:43.039571 kernel: vgaarb: loaded Dec 13 14:34:43.039578 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:34:43.039585 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:34:43.039595 kernel: PTP clock support registered Dec 13 14:34:43.039603 kernel: Registered efivars operations Dec 13 14:34:43.039612 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:34:43.039619 kernel: PCI: System does not support PCI Dec 13 14:34:43.039631 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 14:34:43.039639 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:34:43.039649 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:34:43.039656 kernel: pnp: PnP ACPI init Dec 13 14:34:43.039664 kernel: pnp: PnP ACPI: found 3 devices Dec 13 14:34:43.039674 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:34:43.039683 kernel: NET: Registered PF_INET protocol family Dec 13 14:34:43.039692 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:34:43.039701 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 14:34:43.039712 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:34:43.039720 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:34:43.039730 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:34:43.039737 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 14:34:43.039746 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:34:43.039755 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:34:43.039763 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:34:43.039772 kernel: NET: Registered PF_XDP protocol family Dec 13 14:34:43.039781 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:34:43.039792 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:34:43.039799 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Dec 13 14:34:43.039809 kernel: 
RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:34:43.039816 kernel: Initialise system trusted keyrings Dec 13 14:34:43.039823 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 14:34:43.039833 kernel: Key type asymmetric registered Dec 13 14:34:43.039841 kernel: Asymmetric key parser 'x509' registered Dec 13 14:34:43.039850 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:34:43.039869 kernel: io scheduler mq-deadline registered Dec 13 14:34:43.039877 kernel: io scheduler kyber registered Dec 13 14:34:43.039887 kernel: io scheduler bfq registered Dec 13 14:34:43.039895 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:34:43.039902 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:34:43.039913 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:34:43.039920 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 14:34:43.039930 kernel: i8042: PNP: No PS/2 controller found. Dec 13 14:34:43.040060 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 14:34:43.040150 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T14:34:42 UTC (1734100482) Dec 13 14:34:43.040230 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 14:34:43.040241 kernel: fail to initialize ptp_kvm Dec 13 14:34:43.040252 kernel: intel_pstate: CPU model not supported Dec 13 14:34:43.040259 kernel: efifb: probing for efifb Dec 13 14:34:43.040266 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 14:34:43.040277 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 14:34:43.040285 kernel: efifb: scrolling: redraw Dec 13 14:34:43.040296 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:34:43.040303 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:34:43.040315 kernel: fb0: EFI VGA frame buffer device Dec 13 14:34:43.040322 kernel: pstore: Registered efi as persistent store backend Dec 13 14:34:43.040332 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:34:43.040339 kernel: Segment Routing with IPv6 Dec 13 14:34:43.040347 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:34:43.040357 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:34:43.040365 kernel: Key type dns_resolver registered Dec 13 14:34:43.040376 kernel: IPI shorthand broadcast: enabled Dec 13 14:34:43.040383 kernel: sched_clock: Marking stable (796813800, 26988400)->(1020871500, -197069300) Dec 13 14:34:43.040393 kernel: registered taskstats version 1 Dec 13 14:34:43.040401 kernel: Loading compiled-in X.509 certificates Dec 13 14:34:43.040412 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:34:43.040419 kernel: Key type .fscrypt registered Dec 13 14:34:43.040426 kernel: Key type fscrypt-provisioning registered Dec 13 14:34:43.040436 kernel: pstore: Using crash dump compression: deflate Dec 13 14:34:43.040448 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 14:34:43.040456 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:34:43.040463 kernel: ima: No architecture policies found Dec 13 14:34:43.040473 kernel: clk: Disabling unused clocks Dec 13 14:34:43.040481 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:34:43.040491 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:34:43.040498 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:34:43.040506 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:34:43.040516 kernel: Run /init as init process Dec 13 14:34:43.040524 kernel: with arguments: Dec 13 14:34:43.040535 kernel: /init Dec 13 14:34:43.040542 kernel: with environment: Dec 13 14:34:43.040551 kernel: HOME=/ Dec 13 14:34:43.040560 kernel: TERM=linux Dec 13 14:34:43.040569 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:34:43.040579 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:34:43.040590 systemd[1]: Detected virtualization microsoft. Dec 13 14:34:43.040602 systemd[1]: Detected architecture x86-64. Dec 13 14:34:43.040612 systemd[1]: Running in initrd. Dec 13 14:34:43.040619 systemd[1]: No hostname configured, using default hostname. Dec 13 14:34:43.040628 systemd[1]: Hostname set to . Dec 13 14:34:43.040638 systemd[1]: Initializing machine ID from random generator. Dec 13 14:34:43.040647 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:34:43.040656 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:34:43.040663 systemd[1]: Reached target cryptsetup.target. Dec 13 14:34:43.040675 systemd[1]: Reached target paths.target. Dec 13 14:34:43.040686 systemd[1]: Reached target slices.target. Dec 13 14:34:43.040698 systemd[1]: Reached target swap.target. Dec 13 14:34:43.040706 systemd[1]: Reached target timers.target. Dec 13 14:34:43.040717 systemd[1]: Listening on iscsid.socket. Dec 13 14:34:43.040725 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:34:43.040735 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:34:43.040742 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:34:43.040755 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:34:43.040763 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:34:43.040774 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:34:43.040781 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:34:43.040790 systemd[1]: Reached target sockets.target. Dec 13 14:34:43.040800 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:34:43.040810 systemd[1]: Finished network-cleanup.service. Dec 13 14:34:43.040818 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:34:43.040826 systemd[1]: Starting systemd-journald.service... Dec 13 14:34:43.040838 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:34:43.040848 systemd[1]: Starting systemd-resolved.service... Dec 13 14:34:43.040857 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:34:43.040876 systemd[1]: Finished kmod-static-nodes.service. 
Dec 13 14:34:43.040885 kernel: audit: type=1130 audit(1734100483.038:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.040899 systemd-journald[182]: Journal started Dec 13 14:34:43.040945 systemd-journald[182]: Runtime Journal (/run/log/journal/822b3538104b43efb8fbc31ed2520e14) is 8.0M, max 159.0M, 151.0M free. Dec 13 14:34:43.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.020895 systemd-modules-load[183]: Inserted module 'overlay' Dec 13 14:34:43.066268 systemd[1]: Started systemd-journald.service. Dec 13 14:34:43.069283 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:34:43.074898 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:34:43.088875 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:34:43.089317 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:34:43.094025 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:34:43.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.116713 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:34:43.119500 kernel: audit: type=1130 audit(1734100483.068:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.121249 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:34:43.128197 kernel: Bridge firewalling registered Dec 13 14:34:43.128246 systemd-modules-load[183]: Inserted module 'br_netfilter' Dec 13 14:34:43.133934 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:34:43.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.155031 systemd-resolved[184]: Positive Trust Anchors: Dec 13 14:34:43.216518 kernel: audit: type=1130 audit(1734100483.074:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.216540 kernel: audit: type=1130 audit(1734100483.077:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.216556 kernel: audit: type=1130 audit(1734100483.120:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.216574 kernel: SCSI subsystem initialized Dec 13 14:34:43.216587 kernel: audit: type=1130 audit(1734100483.132:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:43.216600 kernel: audit: type=1130 audit(1734100483.172:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.216777 dracut-cmdline[200]: dracut-dracut-053 Dec 13 14:34:43.216777 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:34:43.155190 systemd-resolved[184]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:34:43.155227 systemd-resolved[184]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:34:43.158089 systemd-resolved[184]: Defaulting to hostname 'linux'. Dec 13 14:34:43.159157 systemd[1]: Started systemd-resolved.service. Dec 13 14:34:43.172196 systemd[1]: Reached target nss-lookup.target. Dec 13 14:34:43.274896 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:34:43.274935 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:34:43.276128 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:34:43.284566 systemd-modules-load[183]: Inserted module 'dm_multipath' Dec 13 14:34:43.285271 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:34:43.309835 kernel: audit: type=1130 audit(1734100483.289:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:43.290980 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:34:43.314003 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:34:43.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.329878 kernel: audit: type=1130 audit(1734100483.313:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.346876 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:34:43.364880 kernel: iscsi: registered transport (tcp) Dec 13 14:34:43.392745 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:34:43.392788 kernel: QLogic iSCSI HBA Driver Dec 13 14:34:43.421312 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:34:43.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.424611 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:34:43.475877 kernel: raid6: avx512x4 gen() 18618 MB/s Dec 13 14:34:43.495879 kernel: raid6: avx512x4 xor() 8037 MB/s Dec 13 14:34:43.515871 kernel: raid6: avx512x2 gen() 18735 MB/s Dec 13 14:34:43.535883 kernel: raid6: avx512x2 xor() 29758 MB/s Dec 13 14:34:43.555874 kernel: raid6: avx512x1 gen() 18723 MB/s Dec 13 14:34:43.575872 kernel: raid6: avx512x1 xor() 26663 MB/s Dec 13 14:34:43.596876 kernel: raid6: avx2x4 gen() 18651 MB/s Dec 13 14:34:43.616874 kernel: raid6: avx2x4 xor() 7783 MB/s Dec 13 14:34:43.636873 kernel: raid6: avx2x2 gen() 18618 MB/s Dec 13 14:34:43.656875 kernel: raid6: avx2x2 xor() 22239 MB/s Dec 13 14:34:43.676874 kernel: raid6: avx2x1 gen() 14143 MB/s Dec 13 14:34:43.696871 kernel: raid6: avx2x1 xor() 19339 MB/s Dec 13 14:34:43.716875 kernel: raid6: sse2x4 gen() 11739 MB/s Dec 13 14:34:43.736873 kernel: raid6: sse2x4 xor() 7281 MB/s Dec 13 14:34:43.756871 kernel: raid6: sse2x2 gen() 12928 MB/s Dec 13 14:34:43.776877 kernel: raid6: sse2x2 xor() 7355 MB/s Dec 13 14:34:43.796872 kernel: raid6: sse2x1 gen() 11389 MB/s Dec 13 14:34:43.819356 kernel: raid6: sse2x1 xor() 5757 MB/s Dec 13 14:34:43.819380 kernel: raid6: using algorithm avx512x2 gen() 18735 MB/s Dec 13 14:34:43.819394 kernel: raid6: .... xor() 29758 MB/s, rmw enabled Dec 13 14:34:43.822457 kernel: raid6: using avx512x2 recovery algorithm Dec 13 14:34:43.841873 kernel: xor: automatically using best checksumming function avx Dec 13 14:34:43.936884 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:34:43.944631 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:34:43.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:43.948000 audit: BPF prog-id=7 op=LOAD Dec 13 14:34:43.948000 audit: BPF prog-id=8 op=LOAD Dec 13 14:34:43.949588 systemd[1]: Starting systemd-udevd.service... Dec 13 14:34:43.965087 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 14:34:43.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:43.971994 systemd[1]: Started systemd-udevd.service. Dec 13 14:34:43.975345 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:34:43.992888 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Dec 13 14:34:44.022221 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:34:44.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:44.027650 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:34:44.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:44.059753 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:34:44.104877 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:34:44.143460 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:34:44.143521 kernel: AES CTR mode by8 optimization enabled Dec 13 14:34:44.145885 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 14:34:44.168880 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 14:34:44.174884 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 14:34:44.182638 kernel: scsi host0: storvsc_host_t Dec 13 14:34:44.182831 kernel: scsi host1: storvsc_host_t Dec 13 14:34:44.182951 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 14:34:44.196378 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 14:34:44.201870 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 14:34:44.208886 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 14:34:44.213874 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:34:44.219873 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 14:34:44.236623 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 14:34:44.236679 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 14:34:44.251644 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 14:34:44.257679 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:34:44.257701 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 14:34:44.270374 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 14:34:44.290416 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 14:34:44.290537 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 14:34:44.290633 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 14:34:44.290732 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 14:34:44.290826 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:34:44.290837 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 14:34:44.351956 kernel: hv_netvsc 7c1e521f-c57f-7c1e-521f-c57f7c1e521f eth0: VF slot 1 added Dec 13 14:34:44.360876 kernel: hv_vmbus: registering driver hv_pci Dec 13 14:34:44.370661 kernel: hv_pci 6c22709e-34b5-4b26-bb67-9040dbfc3822: PCI VMBus probing: Using version 0x10004 Dec 13 14:34:44.445788 kernel: hv_pci 6c22709e-34b5-4b26-bb67-9040dbfc3822: PCI host bridge to bus 34b5:00 Dec 13 14:34:44.445968 kernel: pci_bus 34b5:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 14:34:44.446121 kernel: 
pci_bus 34b5:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 14:34:44.446262 kernel: pci 34b5:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 14:34:44.446421 kernel: pci 34b5:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 14:34:44.446570 kernel: pci 34b5:00:02.0: enabling Extended Tags Dec 13 14:34:44.446719 kernel: pci 34b5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 34b5:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 14:34:44.446880 kernel: pci_bus 34b5:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 14:34:44.447022 kernel: pci 34b5:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 14:34:44.537881 kernel: mlx5_core 34b5:00:02.0: firmware version: 14.30.5000 Dec 13 14:34:44.803360 kernel: mlx5_core 34b5:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 14:34:44.803488 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (440) Dec 13 14:34:44.803500 kernel: mlx5_core 34b5:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 14:34:44.803602 kernel: mlx5_core 34b5:00:02.0: mlx5e_tc_post_act_init:40:(pid 195): firmware level support is missing Dec 13 14:34:44.803699 kernel: hv_netvsc 7c1e521f-c57f-7c1e-521f-c57f7c1e521f eth0: VF registering: eth1 Dec 13 14:34:44.803790 kernel: mlx5_core 34b5:00:02.0 eth1: joined to eth0 Dec 13 14:34:44.715917 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:34:44.771773 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:34:44.816908 kernel: mlx5_core 34b5:00:02.0 enP13493s1: renamed from eth1 Dec 13 14:34:44.906244 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:34:45.039192 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:34:45.042080 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:34:45.049327 systemd[1]: Starting disk-uuid.service... Dec 13 14:34:45.063877 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:34:45.071884 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:34:46.080804 disk-uuid[565]: The operation has completed successfully. Dec 13 14:34:46.083566 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:34:46.144628 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:34:46.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.144729 systemd[1]: Finished disk-uuid.service. Dec 13 14:34:46.160148 systemd[1]: Starting verity-setup.service... Dec 13 14:34:46.194884 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:34:46.439502 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:34:46.444249 systemd[1]: Finished verity-setup.service. Dec 13 14:34:46.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.448984 systemd[1]: Mounting sysusr-usr.mount... 
Dec 13 14:34:46.521570 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:34:46.521487 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:34:46.525325 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:34:46.529624 systemd[1]: Starting ignition-setup.service... Dec 13 14:34:46.534543 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:34:46.552759 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:34:46.552795 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:34:46.552809 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:34:46.604929 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:34:46.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.610000 audit: BPF prog-id=9 op=LOAD Dec 13 14:34:46.612167 systemd[1]: Starting systemd-networkd.service... Dec 13 14:34:46.629353 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:34:46.640274 systemd-networkd[806]: lo: Link UP Dec 13 14:34:46.640283 systemd-networkd[806]: lo: Gained carrier Dec 13 14:34:46.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.641177 systemd-networkd[806]: Enumeration completed Dec 13 14:34:46.641240 systemd[1]: Started systemd-networkd.service. Dec 13 14:34:46.644503 systemd[1]: Reached target network.target. Dec 13 14:34:46.647901 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:34:46.650356 systemd[1]: Starting iscsiuio.service... Dec 13 14:34:46.667981 systemd[1]: Started iscsiuio.service. Dec 13 14:34:46.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.671869 systemd[1]: Starting iscsid.service... Dec 13 14:34:46.676888 iscsid[815]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:34:46.676888 iscsid[815]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 14:34:46.676888 iscsid[815]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:34:46.676888 iscsid[815]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:34:46.676888 iscsid[815]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:34:46.676888 iscsid[815]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:34:46.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.676669 systemd[1]: Started iscsid.service. 
Dec 13 14:34:46.715566 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:34:46.715880 kernel: mlx5_core 34b5:00:02.0 enP13493s1: Link up Dec 13 14:34:46.727232 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:34:46.731855 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:34:46.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.736434 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:34:46.738712 systemd[1]: Reached target remote-fs.target. Dec 13 14:34:46.754057 kernel: hv_netvsc 7c1e521f-c57f-7c1e-521f-c57f7c1e521f eth0: Data path switched to VF: enP13493s1 Dec 13 14:34:46.741522 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:34:46.755546 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:34:46.766107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:34:46.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.757772 systemd-networkd[806]: enP13493s1: Link UP Dec 13 14:34:46.757875 systemd-networkd[806]: eth0: Link UP Dec 13 14:34:46.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:46.763040 systemd-networkd[806]: eth0: Gained carrier Dec 13 14:34:46.764164 systemd[1]: Finished ignition-setup.service. Dec 13 14:34:46.768645 systemd-networkd[806]: enP13493s1: Gained carrier Dec 13 14:34:46.773538 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:34:46.803923 systemd-networkd[806]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:34:48.034088 systemd-networkd[806]: eth0: Gained IPv6LL Dec 13 14:34:50.237848 ignition[830]: Ignition 2.14.0 Dec 13 14:34:50.237900 ignition[830]: Stage: fetch-offline Dec 13 14:34:50.238002 ignition[830]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:34:50.238053 ignition[830]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:34:50.306446 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:34:50.306655 ignition[830]: parsed url from cmdline: "" Dec 13 14:34:50.306660 ignition[830]: no config URL provided Dec 13 14:34:50.306666 ignition[830]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:34:50.309607 ignition[830]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:34:50.309618 ignition[830]: failed to fetch config: resource requires networking Dec 13 14:34:50.319270 ignition[830]: Ignition finished successfully Dec 13 14:34:50.321478 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:34:50.331544 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 14:34:50.331585 kernel: audit: type=1130 audit(1734100490.326:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:50.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:50.327645 systemd[1]: Starting ignition-fetch.service... Dec 13 14:34:50.336108 ignition[836]: Ignition 2.14.0 Dec 13 14:34:50.336116 ignition[836]: Stage: fetch Dec 13 14:34:50.336214 ignition[836]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:34:50.336238 ignition[836]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:34:50.339586 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:34:50.341184 ignition[836]: parsed url from cmdline: "" Dec 13 14:34:50.341190 ignition[836]: no config URL provided Dec 13 14:34:50.341196 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:34:50.341208 ignition[836]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:34:50.341257 ignition[836]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 14:34:50.451037 ignition[836]: GET result: OK Dec 13 14:34:50.451132 ignition[836]: config has been read from IMDS userdata Dec 13 14:34:50.451153 ignition[836]: parsing config with SHA512: b3fc8e82e8060da93f435a23f51240d04bee9023c814c99622ecea10e22edfc4bd83e7a7c55fa9c86414d4ea02c2b9b80c457589ca1606fe1dae77b6167ffdad Dec 13 14:34:50.454887 unknown[836]: fetched base config from "system" Dec 13 14:34:50.455320 ignition[836]: fetch: fetch complete Dec 13 14:34:50.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:50.454895 unknown[836]: fetched base config from "system" Dec 13 14:34:50.477908 kernel: audit: type=1130 audit(1734100490.459:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:50.455326 ignition[836]: fetch: fetch passed Dec 13 14:34:50.454902 unknown[836]: fetched user config from "azure" Dec 13 14:34:50.455368 ignition[836]: Ignition finished successfully Dec 13 14:34:50.456752 systemd[1]: Finished ignition-fetch.service. Dec 13 14:34:50.461335 systemd[1]: Starting ignition-kargs.service... Dec 13 14:34:50.487144 ignition[842]: Ignition 2.14.0 Dec 13 14:34:50.487152 ignition[842]: Stage: kargs Dec 13 14:34:50.487271 ignition[842]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:34:50.487293 ignition[842]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:34:50.497164 systemd[1]: Finished ignition-kargs.service. Dec 13 14:34:50.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:50.492308 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:34:50.521702 kernel: audit: type=1130 audit(1734100490.498:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:50.500174 systemd[1]: Starting ignition-disks.service... Dec 13 14:34:50.493776 ignition[842]: kargs: kargs passed Dec 13 14:34:50.493824 ignition[842]: Ignition finished successfully Dec 13 14:34:50.533765 ignition[848]: Ignition 2.14.0 Dec 13 14:34:50.533775 ignition[848]: Stage: disks Dec 13 14:34:50.533920 ignition[848]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:34:50.533952 ignition[848]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:34:50.544430 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:34:50.548086 ignition[848]: disks: disks passed Dec 13 14:34:50.548142 ignition[848]: Ignition finished successfully Dec 13 14:34:50.552361 systemd[1]: Finished ignition-disks.service. Dec 13 14:34:50.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:50.554579 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:34:50.573956 kernel: audit: type=1130 audit(1734100490.553:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:50.569792 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:34:50.573981 systemd[1]: Reached target local-fs.target. Dec 13 14:34:50.576029 systemd[1]: Reached target sysinit.target. Dec 13 14:34:50.580160 systemd[1]: Reached target basic.target. Dec 13 14:34:50.583028 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:34:50.636720 systemd-fsck[856]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 14:34:50.652814 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:34:50.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:50.658382 systemd[1]: Mounting sysroot.mount... Dec 13 14:34:50.674144 kernel: audit: type=1130 audit(1734100490.656:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:50.682879 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:34:50.683187 systemd[1]: Mounted sysroot.mount. Dec 13 14:34:50.685320 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:34:50.716489 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:34:50.726581 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 14:34:50.731580 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:34:50.731701 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:34:50.742158 systemd[1]: Mounted sysroot-usr.mount. 
Dec 13 14:34:50.827153 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:34:50.835085 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:34:50.855441 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (867) Dec 13 14:34:50.855498 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:34:50.855514 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:34:50.855525 initrd-setup-root[872]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:34:50.866304 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:34:50.869227 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:34:50.875690 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:34:50.895692 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:34:50.902822 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:34:51.364842 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:34:51.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:51.370107 systemd[1]: Starting ignition-mount.service... Dec 13 14:34:51.394026 kernel: audit: type=1130 audit(1734100491.369:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:51.386149 systemd[1]: Starting sysroot-boot.service... Dec 13 14:34:51.400015 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:34:51.400158 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:34:51.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:51.413425 systemd[1]: Finished sysroot-boot.service. Dec 13 14:34:51.430501 kernel: audit: type=1130 audit(1734100491.415:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:51.434048 ignition[935]: INFO : Ignition 2.14.0 Dec 13 14:34:51.434048 ignition[935]: INFO : Stage: mount Dec 13 14:34:51.438121 ignition[935]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:34:51.438121 ignition[935]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:34:51.451678 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:34:51.454925 ignition[935]: INFO : mount: mount passed Dec 13 14:34:51.454925 ignition[935]: INFO : Ignition finished successfully Dec 13 14:34:51.459662 systemd[1]: Finished ignition-mount.service. Dec 13 14:34:51.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:51.475874 kernel: audit: type=1130 audit(1734100491.462:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:52.191745 coreos-metadata[866]: Dec 13 14:34:52.191 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:34:52.208571 coreos-metadata[866]: Dec 13 14:34:52.208 INFO Fetch successful Dec 13 14:34:52.243991 coreos-metadata[866]: Dec 13 14:34:52.243 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:34:52.260631 coreos-metadata[866]: Dec 13 14:34:52.260 INFO Fetch successful Dec 13 14:34:52.277348 coreos-metadata[866]: Dec 13 14:34:52.277 INFO wrote hostname ci-3510.3.6-a-19c473d9c1 to /sysroot/etc/hostname Dec 13 14:34:52.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:52.279415 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:34:52.302357 kernel: audit: type=1130 audit(1734100492.284:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:52.285957 systemd[1]: Starting ignition-files.service... Dec 13 14:34:52.305520 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:34:52.325482 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (945) Dec 13 14:34:52.325526 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:34:52.325541 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:34:52.332899 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:34:52.337580 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:34:52.351626 ignition[964]: INFO : Ignition 2.14.0 Dec 13 14:34:52.351626 ignition[964]: INFO : Stage: files Dec 13 14:34:52.355719 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:34:52.355719 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:34:52.368196 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:34:52.389311 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:34:52.393184 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:34:52.393184 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:34:52.431611 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:34:52.435740 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:34:52.452487 unknown[964]: wrote ssh authorized keys file for user: core Dec 13 14:34:52.455358 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:34:52.475642 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:34:52.480623 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:34:52.480623 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:34:52.490307 
ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:34:52.494739 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:34:52.499285 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:34:52.503770 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:34:52.510479 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:34:52.517033 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:34:52.522199 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:34:52.531372 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem743479889" Dec 13 14:34:52.531372 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(7): op(8): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem743479889": device or resource busy Dec 13 14:34:52.531372 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(7): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem743479889", trying btrfs: device or resource busy Dec 13 14:34:52.531372 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem743479889" Dec 13 14:34:52.561133 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (967) Dec 13 14:34:52.561161 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem743479889" Dec 13 14:34:52.561161 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [started] unmounting "/mnt/oem743479889" Dec 13 14:34:52.561161 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [finished] unmounting "/mnt/oem743479889" Dec 13 14:34:52.561161 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:34:52.561161 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:34:52.561161 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:34:52.602326 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem176020050" Dec 13 14:34:52.607344 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem176020050": device or resource busy Dec 13 14:34:52.607344 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device 
"/dev/disk/by-label/OEM" at "/mnt/oem176020050", trying btrfs: device or resource busy Dec 13 14:34:52.607344 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem176020050" Dec 13 14:34:52.624464 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem176020050" Dec 13 14:34:52.624464 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem176020050" Dec 13 14:34:52.623101 systemd[1]: mnt-oem176020050.mount: Deactivated successfully. Dec 13 14:34:52.636230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem176020050" Dec 13 14:34:52.636230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:34:52.636230 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:34:52.650913 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:34:53.209509 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Dec 13 14:34:53.625926 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:34:53.625926 ignition[964]: INFO : files: op(10): [started] processing unit "waagent.service" Dec 13 14:34:53.625926 ignition[964]: INFO : files: op(10): [finished] processing unit "waagent.service" Dec 13 14:34:53.625926 ignition[964]: INFO : files: op(11): [started] processing unit "nvidia.service" Dec 13 14:34:53.625926 ignition[964]: INFO : files: op(11): [finished] processing unit "nvidia.service" Dec 13 14:34:53.625926 ignition[964]: INFO : files: op(12): [started] processing unit "containerd.service" Dec 13 14:34:53.670054 kernel: audit: type=1130 audit(1734100493.638:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:53.670163 ignition[964]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:34:53.670163 ignition[964]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:34:53.670163 ignition[964]: INFO : files: op(12): [finished] processing unit "containerd.service" Dec 13 14:34:53.670163 ignition[964]: INFO : files: op(14): [started] setting preset to enabled for "waagent.service" Dec 13 14:34:53.670163 ignition[964]: INFO : files: op(14): [finished] setting preset to enabled for "waagent.service" Dec 13 14:34:53.670163 ignition[964]: INFO : files: op(15): [started] setting preset to enabled for "nvidia.service" Dec 13 14:34:53.670163 ignition[964]: INFO : files: op(15): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:34:53.670163 ignition[964]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:34:53.670163 ignition[964]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:34:53.670163 ignition[964]: INFO : files: files passed Dec 13 14:34:53.670163 ignition[964]: INFO : Ignition finished successfully Dec 13 14:34:53.634021 systemd[1]: Finished ignition-files.service. Dec 13 14:34:53.640767 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:34:53.663331 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:34:53.664170 systemd[1]: Starting ignition-quench.service... Dec 13 14:34:53.724779 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:34:53.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.729718 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:34:53.724933 systemd[1]: Finished ignition-quench.service. Dec 13 14:34:53.729802 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:34:53.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.741354 systemd[1]: Reached target ignition-complete.target. Dec 13 14:34:53.744454 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:34:53.759440 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:34:53.759536 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:34:53.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:53.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.766224 systemd[1]: Reached target initrd-fs.target. Dec 13 14:34:53.770530 systemd[1]: Reached target initrd.target. Dec 13 14:34:53.772572 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:34:53.773337 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:34:53.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.787260 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:34:53.790290 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:34:53.804279 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:34:53.808760 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:34:53.813891 systemd[1]: Stopped target timers.target. Dec 13 14:34:53.818060 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:34:53.820639 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:34:53.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.825221 systemd[1]: Stopped target initrd.target. Dec 13 14:34:53.829253 systemd[1]: Stopped target basic.target. Dec 13 14:34:53.833388 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:34:53.838144 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:34:53.842905 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:34:53.847982 systemd[1]: Stopped target remote-fs.target. Dec 13 14:34:53.852298 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:34:53.856794 systemd[1]: Stopped target sysinit.target. Dec 13 14:34:53.861148 systemd[1]: Stopped target local-fs.target. Dec 13 14:34:53.865307 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:34:53.869733 systemd[1]: Stopped target swap.target. Dec 13 14:34:53.873778 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:34:53.876362 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:34:53.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.881707 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:34:53.886035 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:34:53.888629 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:34:53.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.893184 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:34:53.896106 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:34:53.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:53.901144 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:34:53.903673 systemd[1]: Stopped ignition-files.service. Dec 13 14:34:53.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.908182 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:34:53.911114 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:34:53.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.917006 systemd[1]: Stopping ignition-mount.service... Dec 13 14:34:53.929410 ignition[1002]: INFO : Ignition 2.14.0 Dec 13 14:34:53.929410 ignition[1002]: INFO : Stage: umount Dec 13 14:34:53.929410 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:34:53.929410 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:34:53.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.919433 systemd[1]: Stopping iscsiuio.service... Dec 13 14:34:53.947590 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:34:53.947590 ignition[1002]: INFO : umount: umount passed Dec 13 14:34:53.947590 ignition[1002]: INFO : Ignition finished successfully Dec 13 14:34:53.922422 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:34:53.924712 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:34:53.924916 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:34:53.929503 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:34:53.929652 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:34:53.969122 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:34:53.971740 systemd[1]: Stopped iscsiuio.service. Dec 13 14:34:53.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.975938 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:34:53.978469 systemd[1]: Stopped ignition-mount.service. Dec 13 14:34:53.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.983096 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:34:53.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:53.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.983217 systemd[1]: Stopped ignition-disks.service. Dec 13 14:34:53.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:53.985574 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:34:53.985621 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:34:53.991457 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:34:53.991510 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:34:53.995807 systemd[1]: Stopped target network.target. Dec 13 14:34:54.000442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:34:54.002476 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:34:54.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.015100 systemd[1]: Stopped target paths.target. Dec 13 14:34:54.017358 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:34:54.018904 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:34:54.021753 systemd[1]: Stopped target slices.target. Dec 13 14:34:54.023670 systemd[1]: Stopped target sockets.target. Dec 13 14:34:54.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.028641 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:34:54.028677 systemd[1]: Closed iscsid.socket. Dec 13 14:34:54.032838 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:34:54.032903 systemd[1]: Closed iscsiuio.socket. Dec 13 14:34:54.036763 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:34:54.036814 systemd[1]: Stopped ignition-setup.service. Dec 13 14:34:54.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.040694 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:34:54.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.044494 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:34:54.047922 systemd-networkd[806]: eth0: DHCPv6 lease lost Dec 13 14:34:54.052228 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:34:54.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.052770 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:34:54.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:34:54.052872 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:34:54.059110 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:34:54.059199 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:34:54.066044 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:34:54.066120 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:34:54.085000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:34:54.085000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:34:54.086463 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:34:54.086516 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:34:54.093604 systemd[1]: Stopping network-cleanup.service... Dec 13 14:34:54.097743 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:34:54.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.097805 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:34:54.104422 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:34:54.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.104475 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:34:54.111463 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:34:54.113318 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:34:54.120485 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:34:54.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.125481 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:34:54.127938 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:34:54.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.128079 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:34:54.133818 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:34:54.133936 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:34:54.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.138011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:34:54.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.138051 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:34:54.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.142983 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:34:54.143030 systemd[1]: Stopped dracut-pre-udev.service. 
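In the ignition files stage above, units shipped on the OEM partition (waagent.service, nvidia.service) are copied out by mounting /dev/disk/by-label/OEM on a temporary directory, first as ext4 and then, when that attempt reports "device or resource busy", as btrfs. The Python sketch below only illustrates that try-ext4-then-btrfs fallback by shelling out to mount(8); it is not Ignition's actual implementation, and the copy and unmount steps are left to the caller.

    # Sketch of the mount fallback seen in the ignition files stage above:
    # try ext4 first, then btrfs. Illustration only, not Ignition's real code.
    import subprocess
    import tempfile

    def mount_oem(device: str = "/dev/disk/by-label/OEM") -> str:
        mountpoint = tempfile.mkdtemp(prefix="oem")
        for fstype in ("ext4", "btrfs"):
            result = subprocess.run(["mount", "-t", fstype, device, mountpoint],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return mountpoint                  # caller unmounts when done
            print(f"mounting {device} as {fstype} failed: {result.stderr.strip()}")
        raise RuntimeError(f"could not mount {device} as ext4 or btrfs")

    # usage (needs root):
    #   mp = mount_oem()
    #   ... copy waagent.service / nvidia.service out of mp ...
    #   subprocess.run(["umount", mp], check=True)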
Dec 13 14:34:54.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.147439 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:34:54.182459 kernel: hv_netvsc 7c1e521f-c57f-7c1e-521f-c57f7c1e521f eth0: Data path switched from VF: enP13493s1 Dec 13 14:34:54.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.147487 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:34:54.152187 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:34:54.152233 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:34:54.157572 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:34:54.161636 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:34:54.161705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:34:54.166204 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:34:54.166245 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:34:54.168615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:34:54.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.168664 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:34:54.172152 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:34:54.172611 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:34:54.172692 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:34:54.203036 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:34:54.203132 systemd[1]: Stopped network-cleanup.service. Dec 13 14:34:54.419967 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:34:54.420101 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:34:54.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.427087 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:34:54.432088 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:34:54.432153 systemd[1]: Stopped initrd-setup-root.service. 
Dec 13 14:34:54.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:34:54.439704 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:34:54.479918 systemd[1]: Switching root. Dec 13 14:34:54.483000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:34:54.483000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:34:54.483000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:34:54.483000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:34:54.483000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:34:54.507315 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Dec 13 14:34:54.507375 iscsid[815]: iscsid shutting down. Dec 13 14:34:54.509263 systemd-journald[182]: Journal stopped Dec 13 14:35:09.207800 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:35:09.207824 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:35:09.207838 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:35:09.207846 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:35:09.207854 kernel: SELinux: policy capability open_perms=1 Dec 13 14:35:09.207873 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:35:09.207884 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:35:09.207894 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:35:09.207905 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:35:09.207913 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:35:09.207923 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:35:09.207933 kernel: kauditd_printk_skb: 48 callbacks suppressed Dec 13 14:35:09.207944 kernel: audit: type=1403 audit(1734100497.616:87): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:35:09.207963 systemd[1]: Successfully loaded SELinux policy in 345.153ms. Dec 13 14:35:09.207979 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 46.166ms. Dec 13 14:35:09.207990 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:35:09.208001 systemd[1]: Detected virtualization microsoft. Dec 13 14:35:09.208013 systemd[1]: Detected architecture x86-64. Dec 13 14:35:09.208022 systemd[1]: Detected first boot. Dec 13 14:35:09.208036 systemd[1]: Hostname set to . Dec 13 14:35:09.208045 systemd[1]: Initializing machine ID from random generator. Dec 13 14:35:09.208057 kernel: audit: type=1400 audit(1734100498.311:88): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:35:09.208066 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
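After the switch to the real root, systemd reports "Detected first boot." and "Initializing machine ID from random generator.": with no usable /etc/machine-id present, a fresh 128-bit ID is generated and the first-boot code paths (presets, flatcar-tmpfiles, and so on) run. The sketch below approximates that check; systemd's real rule has more cases, so treat the "missing, empty, or 'uninitialized'" test as an assumption.

    # Sketch: approximate the first-boot / machine-id logic logged above.
    # The exact conditions systemd checks are richer than this; assumption only.
    import uuid

    MACHINE_ID = "/etc/machine-id"

    def is_first_boot(path: str = MACHINE_ID) -> bool:
        try:
            content = open(path).read().strip()
        except FileNotFoundError:
            return True
        return content in ("", "uninitialized")

    def new_machine_id() -> str:
        return uuid.uuid4().hex   # 128 random bits as 32 lowercase hex digits

    if __name__ == "__main__":
        if is_first_boot():
            print("first boot, would write", new_machine_id())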
Dec 13 14:35:09.208078 kernel: audit: type=1400 audit(1734100499.983:89): avc: denied { associate } for pid=1055 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:35:09.208089 kernel: audit: type=1300 audit(1734100499.983:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014f672 a1=c0000d0af8 a2=c0000d8a00 a3=32 items=0 ppid=1038 pid=1055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:35:09.208102 kernel: audit: type=1327 audit(1734100499.983:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:35:09.208112 kernel: audit: type=1400 audit(1734100499.990:90): avc: denied { associate } for pid=1055 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:35:09.208123 kernel: audit: type=1300 audit(1734100499.990:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014f749 a2=1ed a3=0 items=2 ppid=1038 pid=1055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:35:09.208137 kernel: audit: type=1307 audit(1734100499.990:90): cwd="/" Dec 13 14:35:09.208147 kernel: audit: type=1302 audit(1734100499.990:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:09.208157 kernel: audit: type=1302 audit(1734100499.990:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:09.208170 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:35:09.208184 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:35:09.208193 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:35:09.208207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:35:09.208218 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:35:09.208228 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:35:09.208241 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:35:09.208251 systemd[1]: Created slice system-getty.slice. Dec 13 14:35:09.208263 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:35:09.208282 systemd[1]: Created slice system-serial\x2dgetty.slice. 
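The two locksmithd.service warnings above are cgroup v1 to v2 migration notes: CPUShares= (default 1024) is superseded by CPUWeight= (default 100, range 1..10000), and MemoryLimit= by MemoryMax=. The sketch below shows the proportional shares-to-weight conversion commonly described for this migration; the shares * 100 / 1024 formula and its clamping are assumptions to verify against systemd.resource-control(5) before applying them to a real unit.

    # Sketch: map a legacy CPUShares= value onto CPUWeight= for a unit drop-in.
    # The shares * 100 / 1024 mapping (clamped to 1..10000) is an assumption.
    def cpu_shares_to_weight(shares: int) -> int:
        return max(1, min(10000, round(shares * 100 / 1024)))

    if __name__ == "__main__":
        for shares in (256, 1024, 4096):
            print(f"CPUShares={shares} -> CPUWeight={cpu_shares_to_weight(shares)}")
        # the old default of 1024 lands on the new default weight of 100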
Dec 13 14:35:09.208294 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:35:09.208310 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:35:09.208321 systemd[1]: Created slice user.slice. Dec 13 14:35:09.208330 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:35:09.208342 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:35:09.208356 systemd[1]: Set up automount boot.automount. Dec 13 14:35:09.208366 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:35:09.208377 systemd[1]: Reached target integritysetup.target. Dec 13 14:35:09.208387 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:35:09.208399 systemd[1]: Reached target remote-fs.target. Dec 13 14:35:09.208408 systemd[1]: Reached target slices.target. Dec 13 14:35:09.208420 systemd[1]: Reached target swap.target. Dec 13 14:35:09.208432 systemd[1]: Reached target torcx.target. Dec 13 14:35:09.208443 systemd[1]: Reached target veritysetup.target. Dec 13 14:35:09.208455 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:35:09.208466 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:35:09.208479 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:35:09.208488 kernel: audit: type=1400 audit(1734100508.854:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:35:09.208498 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:35:09.208509 kernel: audit: type=1335 audit(1734100508.854:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:35:09.208521 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:35:09.208533 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:35:09.208545 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:35:09.208557 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:35:09.208567 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:35:09.208578 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:35:09.208592 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:35:09.208604 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:35:09.208617 systemd[1]: Mounting media.mount... Dec 13 14:35:09.208629 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:09.208640 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:35:09.208651 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:35:09.208661 systemd[1]: Mounting tmp.mount... Dec 13 14:35:09.208675 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:35:09.208687 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:35:09.208699 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:35:09.208711 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:35:09.208722 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:35:09.208733 systemd[1]: Starting modprobe@drm.service... Dec 13 14:35:09.208746 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:35:09.208759 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:35:09.208770 systemd[1]: Starting modprobe@loop.service... 
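The "Listening on ..." entries above are systemd socket units: PID 1 opens and holds the listening sockets (journald, networkd, udevd control, userdbd) and hands them to the matching service when it starts. Below is a generic sketch of how an activated service picks up such sockets via the $LISTEN_FDS convention, where inherited descriptors start at fd 3; it is boilerplate for the mechanism, not code from any of the units named above.

    # Sketch: receive sockets passed by systemd socket activation, the mechanism
    # behind the "Listening on ..." units above. Generic boilerplate only.
    import os
    import socket

    SD_LISTEN_FDS_START = 3   # first inherited fd, per the sd_listen_fds(3) convention

    def activated_sockets() -> list:
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []                                   # not socket-activated
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

    if __name__ == "__main__":
        print(f"inherited {len(activated_sockets())} socket(s) from systemd")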
Dec 13 14:35:09.208781 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:35:09.208791 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:35:09.208804 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:35:09.208817 systemd[1]: Starting systemd-journald.service... Dec 13 14:35:09.208826 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:35:09.208839 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:35:09.208849 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:35:09.208867 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:35:09.208880 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:09.208890 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:35:09.208904 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:35:09.208915 systemd[1]: Mounted media.mount. Dec 13 14:35:09.208927 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:35:09.208940 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:35:09.208949 kernel: loop: module loaded Dec 13 14:35:09.208961 systemd[1]: Mounted tmp.mount. Dec 13 14:35:09.208971 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:35:09.208983 kernel: audit: type=1305 audit(1734100509.191:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:35:09.208992 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:35:09.209007 kernel: audit: type=1130 audit(1734100509.201:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.209022 systemd-journald[1149]: Journal started Dec 13 14:35:09.209065 systemd-journald[1149]: Runtime Journal (/run/log/journal/4f35fb28c3c0455bb3d5e3930c02d095) is 8.0M, max 159.0M, 151.0M free. Dec 13 14:35:08.854000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:35:09.191000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:35:09.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:35:09.238930 kernel: audit: type=1300 audit(1734100509.191:93): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fffe603b110 a2=4000 a3=7fffe603b1ac items=0 ppid=1 pid=1149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:35:09.191000 audit[1149]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fffe603b110 a2=4000 a3=7fffe603b1ac items=0 ppid=1 pid=1149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:35:09.243728 systemd[1]: Started systemd-journald.service. Dec 13 14:35:09.243776 kernel: audit: type=1327 audit(1734100509.191:93): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:35:09.191000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:35:09.251568 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:35:09.251742 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:35:09.256632 kernel: fuse: init (API version 7.34) Dec 13 14:35:09.259484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:35:09.259696 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:35:09.262414 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:35:09.262628 systemd[1]: Finished modprobe@drm.service. Dec 13 14:35:09.277505 kernel: audit: type=1130 audit(1734100509.238:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.277238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:35:09.277434 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:35:09.280241 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:35:09.280449 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:35:09.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.299601 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:35:09.314986 kernel: audit: type=1130 audit(1734100509.250:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.315048 kernel: audit: type=1130 audit(1734100509.258:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:35:09.299854 systemd[1]: Finished modprobe@loop.service. Dec 13 14:35:09.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.334795 kernel: audit: type=1131 audit(1734100509.258:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.317638 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:35:09.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.335153 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:35:09.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:35:09.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.338179 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:35:09.340967 systemd[1]: Reached target network-pre.target. Dec 13 14:35:09.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.345197 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:35:09.352407 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:35:09.354633 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:35:09.378771 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:35:09.382413 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:35:09.385167 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:35:09.386417 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:35:09.388521 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:35:09.389775 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:35:09.393280 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:35:09.399293 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:35:09.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.402138 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:35:09.404818 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:35:09.409048 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:35:09.421339 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:35:09.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.424499 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:35:09.427688 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:35:09.443038 systemd-journald[1149]: Time spent on flushing to /var/log/journal/4f35fb28c3c0455bb3d5e3930c02d095 is 24.743ms for 1082 entries. Dec 13 14:35:09.443038 systemd-journald[1149]: System Journal (/var/log/journal/4f35fb28c3c0455bb3d5e3930c02d095) is 8.0M, max 2.6G, 2.6G free. Dec 13 14:35:09.530514 systemd-journald[1149]: Received client request to flush runtime journal. Dec 13 14:35:09.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.459976 systemd[1]: Finished systemd-sysctl.service. 
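The journal-flush line above doubles as a quick throughput figure: copying 1082 entries from the runtime journal to /var/log/journal took 24.743 ms, roughly 44,000 entries per second. A tiny sketch of that arithmetic, parsing the same phrasing (the regex is an assumption about the message layout):

    # Sketch: turn the flush statistics logged above into an entries-per-second rate.
    import re

    line = ("Time spent on flushing to /var/log/journal/"
            "4f35fb28c3c0455bb3d5e3930c02d095 is 24.743ms for 1082 entries.")

    m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
    if m:
        ms, entries = float(m.group(1)), int(m.group(2))
        print(f"{entries} entries in {ms} ms -> {entries / ms * 1000:.0f} entries/s")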
Dec 13 14:35:09.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:09.531562 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:35:09.996267 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:35:09.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:10.000435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:35:10.294126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:35:10.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:10.484697 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:35:10.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:10.489083 systemd[1]: Starting systemd-udevd.service... Dec 13 14:35:10.508445 systemd-udevd[1217]: Using default interface naming scheme 'v252'. Dec 13 14:35:10.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:10.684315 systemd[1]: Started systemd-udevd.service. Dec 13 14:35:10.689266 systemd[1]: Starting systemd-networkd.service... Dec 13 14:35:10.727213 systemd[1]: Found device dev-ttyS0.device. Dec 13 14:35:10.777588 systemd[1]: Starting systemd-userdbd.service... 
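The flush above moves the runtime journal into /var/log/journal/4f35fb28c3c0455bb3d5e3930c02d095, with the system journal at 8.0M against a 2.6G cap. As a rough sketch of how that usage could be checked and bounded on a host like this (the drop-in path is standard, but the 1G cap below is only an illustrative value, not one taken from this log):

    # Show how much disk the persistent journal uses, and flush the runtime journal by hand
    journalctl --disk-usage
    sudo journalctl --flush
    # Bound growth with a journald drop-in (SystemMaxUse value is illustrative)
    sudo mkdir -p /etc/systemd/journald.conf.d
    printf '[Journal]\nSystemMaxUse=1G\n' | sudo tee /etc/systemd/journald.conf.d/10-size.conf
    sudo systemctl restart systemd-journald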
Dec 13 14:35:10.798000 audit[1227]: AVC avc: denied { confidentiality } for pid=1227 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:35:10.844348 kernel: hv_vmbus: registering driver hv_balloon Dec 13 14:35:10.844461 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 14:35:10.844493 kernel: hv_vmbus: registering driver hv_utils Dec 13 14:35:10.848790 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:35:10.853878 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 14:35:10.853944 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 14:35:10.798000 audit[1227]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56136e9f0a50 a1=f884 a2=7fe1e851abc5 a3=5 items=12 ppid=1217 pid=1227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:35:10.798000 audit: CWD cwd="/" Dec 13 14:35:10.798000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=1 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=2 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=3 name=(null) inode=15869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=4 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=5 name=(null) inode=15870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=6 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=7 name=(null) inode=15871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=8 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=9 name=(null) inode=15872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=10 name=(null) inode=15868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PATH item=11 name=(null) inode=15873 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:35:10.798000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:35:10.880527 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 14:35:10.880598 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 14:35:10.901449 kernel: Console: switching to colour dummy device 80x25 Dec 13 14:35:10.902883 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:35:10.909227 systemd[1]: Started systemd-userdbd.service. Dec 13 14:35:10.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:10.928616 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 14:35:10.928689 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 14:35:10.928718 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 14:35:10.828928 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1236) Dec 13 14:35:10.890895 systemd-journald[1149]: Time jumped backwards, rotating. Dec 13 14:35:10.910818 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:35:11.081932 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Dec 13 14:35:11.095522 systemd-networkd[1229]: lo: Link UP Dec 13 14:35:11.095533 systemd-networkd[1229]: lo: Gained carrier Dec 13 14:35:11.096135 systemd-networkd[1229]: Enumeration completed Dec 13 14:35:11.096279 systemd[1]: Started systemd-networkd.service. Dec 13 14:35:11.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:11.100372 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:35:11.126441 systemd-networkd[1229]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:35:11.128301 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:35:11.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:11.132605 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:35:11.158918 kernel: mlx5_core 34b5:00:02.0 enP13493s1: Link up Dec 13 14:35:11.181936 kernel: hv_netvsc 7c1e521f-c57f-7c1e-521f-c57f7c1e521f eth0: Data path switched to VF: enP13493s1 Dec 13 14:35:11.182560 systemd-networkd[1229]: enP13493s1: Link UP Dec 13 14:35:11.182732 systemd-networkd[1229]: eth0: Link UP Dec 13 14:35:11.182739 systemd-networkd[1229]: eth0: Gained carrier Dec 13 14:35:11.188223 systemd-networkd[1229]: enP13493s1: Gained carrier Dec 13 14:35:11.247021 systemd-networkd[1229]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:35:11.480558 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:35:11.510002 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:35:11.513241 systemd[1]: Reached target cryptsetup.target. 
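Above, systemd-networkd matches eth0 with the stock /usr/lib/systemd/network/zz-default.network, brings up the enP13493s1 Mellanox VF as the accelerated data path, and obtains 10.200.8.11/24 over DHCP from 168.63.129.16. For orientation, a DHCP .network unit of that kind looks roughly like the sketch below; the file name and exact contents are assumptions, not read from this image:

    cat <<'EOF' | sudo tee /etc/systemd/network/10-dhcp.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    sudo systemctl restart systemd-networkd
    networkctl status eth0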
Dec 13 14:35:11.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:11.517251 systemd[1]: Starting lvm2-activation.service... Dec 13 14:35:11.522511 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:35:11.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:11.548118 systemd[1]: Finished lvm2-activation.service. Dec 13 14:35:11.551214 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:35:11.554257 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:35:11.554293 systemd[1]: Reached target local-fs.target. Dec 13 14:35:11.556787 systemd[1]: Reached target machines.target. Dec 13 14:35:11.560270 systemd[1]: Starting ldconfig.service... Dec 13 14:35:11.562697 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:35:11.562800 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:35:11.564141 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:35:11.567523 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:35:11.571759 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:35:11.575616 systemd[1]: Starting systemd-sysext.service... Dec 13 14:35:11.598239 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1301 (bootctl) Dec 13 14:35:11.599330 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:35:11.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:11.622977 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:35:11.628483 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:35:11.633359 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:35:11.633644 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:35:12.059924 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:35:12.111940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:35:12.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.120815 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:35:12.121667 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:35:12.131924 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:35:12.136730 (sd-sysext)[1317]: Using extensions 'kubernetes'. Dec 13 14:35:12.137187 (sd-sysext)[1317]: Merged extensions into '/usr'. Dec 13 14:35:12.159168 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 14:35:12.161457 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:35:12.164223 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.166240 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:35:12.170276 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:35:12.176123 systemd[1]: Starting modprobe@loop.service... Dec 13 14:35:12.178509 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.178822 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:35:12.179181 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:12.186557 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:35:12.189572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:35:12.189884 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:35:12.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.193387 systemd[1]: Finished systemd-sysext.service. Dec 13 14:35:12.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.196128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:35:12.196283 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:35:12.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.199181 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:35:12.199382 systemd[1]: Finished modprobe@loop.service. Dec 13 14:35:12.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.204205 systemd[1]: Starting ensure-sysext.service... Dec 13 14:35:12.207031 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:35:12.207206 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 14:35:12.208677 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:35:12.216615 systemd[1]: Reloading. Dec 13 14:35:12.225185 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:35:12.242835 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:35:12.258984 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:35:12.287208 /usr/lib/systemd/system-generators/torcx-generator[1350]: time="2024-12-13T14:35:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:35:12.287643 /usr/lib/systemd/system-generators/torcx-generator[1350]: time="2024-12-13T14:35:12Z" level=info msg="torcx already run" Dec 13 14:35:12.380193 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:35:12.380214 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:35:12.398271 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:35:12.475273 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:12.475572 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.476889 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:35:12.480820 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:35:12.484645 systemd[1]: Starting modprobe@loop.service... Dec 13 14:35:12.487053 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.487317 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:35:12.487612 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:12.489278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:35:12.489507 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:35:12.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.493350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:35:12.493556 systemd[1]: Finished modprobe@efi_pstore.service. 
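The reload above warns that locksmithd.service still uses the deprecated CPUShares= and MemoryLimit= directives and that docker.socket references the legacy /var/run/ path. A minimal sketch of a drop-in adopting the replacements the log asks for (the weight and memory values are illustrative assumptions, not settings from this host):

    sudo mkdir -p /etc/systemd/system/locksmithd.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/locksmithd.service.d/10-cgroup.conf
    [Service]
    # Clear the deprecated directives, then set their replacements
    CPUShares=
    CPUWeight=100
    MemoryLimit=
    MemoryMax=512M
    EOF
    sudo systemctl daemon-reload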
Dec 13 14:35:12.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.496917 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:35:12.497178 systemd[1]: Finished modprobe@loop.service. Dec 13 14:35:12.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.502370 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:12.502687 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.504002 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:35:12.507994 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:35:12.512360 systemd[1]: Starting modprobe@loop.service... Dec 13 14:35:12.514743 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.514993 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:35:12.515209 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:12.516792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:35:12.517007 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:35:12.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.520456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:35:12.520728 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:35:12.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.524072 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:35:12.524266 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:35:12.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.530417 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:12.530762 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.532120 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:35:12.536126 systemd[1]: Starting modprobe@drm.service... Dec 13 14:35:12.539727 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:35:12.543674 systemd[1]: Starting modprobe@loop.service... Dec 13 14:35:12.546178 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.546410 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:35:12.546682 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:35:12.548452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:35:12.548642 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:35:12.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:35:12.554523 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:35:12.554702 systemd[1]: Finished modprobe@drm.service. Dec 13 14:35:12.555228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:35:12.555355 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:35:12.555744 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:35:12.555874 systemd[1]: Finished modprobe@loop.service. Dec 13 14:35:12.556307 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:35:12.556395 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:35:12.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.558139 systemd[1]: Finished ensure-sysext.service. Dec 13 14:35:12.776832 systemd-fsck[1313]: fsck.fat 4.2 (2021-01-31) Dec 13 14:35:12.776832 systemd-fsck[1313]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:35:12.779112 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:35:12.784476 systemd[1]: Mounting boot.mount... Dec 13 14:35:12.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.801163 systemd[1]: Mounted boot.mount. Dec 13 14:35:12.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:12.814537 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:35:13.004059 systemd-networkd[1229]: eth0: Gained IPv6LL Dec 13 14:35:13.008862 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:35:13.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.663370 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:35:13.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.667622 systemd[1]: Starting audit-rules.service... Dec 13 14:35:13.671004 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:35:13.675210 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:35:13.679795 systemd[1]: Starting systemd-resolved.service... Dec 13 14:35:13.684227 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:35:13.689021 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:35:13.695092 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 14:35:13.701772 kernel: kauditd_printk_skb: 76 callbacks suppressed Dec 13 14:35:13.701826 kernel: audit: type=1130 audit(1734100513.696:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.698166 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:35:13.706219 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:35:13.701000 audit[1463]: SYSTEM_BOOT pid=1463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.745156 kernel: audit: type=1127 audit(1734100513.701:161): pid=1463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.745452 kernel: audit: type=1130 audit(1734100513.713:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.863700 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:35:13.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.866666 systemd[1]: Reached target time-set.target. Dec 13 14:35:13.880869 kernel: audit: type=1130 audit(1734100513.865:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.883314 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:35:13.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.900376 kernel: audit: type=1130 audit(1734100513.885:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:35:13.921929 systemd-resolved[1460]: Positive Trust Anchors: Dec 13 14:35:13.921944 systemd-resolved[1460]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:35:13.921985 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:35:13.965000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:35:13.966386 augenrules[1480]: No rules Dec 13 14:35:13.967460 systemd[1]: Finished audit-rules.service. Dec 13 14:35:13.965000 audit[1480]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc47907ec0 a2=420 a3=0 items=0 ppid=1455 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:35:13.994460 kernel: audit: type=1305 audit(1734100513.965:165): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:35:13.994531 kernel: audit: type=1300 audit(1734100513.965:165): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc47907ec0 a2=420 a3=0 items=0 ppid=1455 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:35:13.965000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:35:14.004539 kernel: audit: type=1327 audit(1734100513.965:165): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:35:14.008736 systemd-timesyncd[1461]: Contacted time server 193.1.8.106:123 (0.flatcar.pool.ntp.org). Dec 13 14:35:14.008802 systemd-timesyncd[1461]: Initial clock synchronization to Fri 2024-12-13 14:35:14.008376 UTC. Dec 13 14:35:14.036867 systemd-resolved[1460]: Using system hostname 'ci-3510.3.6-a-19c473d9c1'. Dec 13 14:35:14.038521 systemd[1]: Started systemd-resolved.service. Dec 13 14:35:14.040883 systemd[1]: Reached target network.target. Dec 13 14:35:14.043144 systemd[1]: Reached target network-online.target. Dec 13 14:35:14.045369 systemd[1]: Reached target nss-lookup.target. Dec 13 14:35:18.587246 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:35:18.597417 systemd[1]: Finished ldconfig.service. Dec 13 14:35:18.602225 systemd[1]: Starting systemd-update-done.service... Dec 13 14:35:18.622371 systemd[1]: Finished systemd-update-done.service. Dec 13 14:35:18.625091 systemd[1]: Reached target sysinit.target. Dec 13 14:35:18.627496 systemd[1]: Started motdgen.path. Dec 13 14:35:18.629436 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:35:18.632762 systemd[1]: Started logrotate.timer. Dec 13 14:35:18.634924 systemd[1]: Started mdadm.timer. Dec 13 14:35:18.636783 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:35:18.639171 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
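audit-rules.service above loads an empty ruleset (augenrules reports "No rules"), which is why only kernel-generated records such as the SERVICE_START/SERVICE_STOP and AVC events appear in this log. As a hedged illustration of how a rule would normally be added on such a system (the watched path and key name are examples, not taken from this host):

    # Watch a file for writes and attribute changes; augenrules merges /etc/audit/rules.d/*.rules
    echo '-w /etc/ssh/sshd_config -p wa -k sshd-config' | sudo tee /etc/audit/rules.d/90-sshd.rules
    sudo augenrules --load
    sudo auditctl -l    # list the rules that are now active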
Dec 13 14:35:18.639209 systemd[1]: Reached target paths.target. Dec 13 14:35:18.641316 systemd[1]: Reached target timers.target. Dec 13 14:35:18.643772 systemd[1]: Listening on dbus.socket. Dec 13 14:35:18.646958 systemd[1]: Starting docker.socket... Dec 13 14:35:18.665612 systemd[1]: Listening on sshd.socket. Dec 13 14:35:18.667878 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:35:18.668366 systemd[1]: Listening on docker.socket. Dec 13 14:35:18.670633 systemd[1]: Reached target sockets.target. Dec 13 14:35:18.672894 systemd[1]: Reached target basic.target. Dec 13 14:35:18.675152 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:35:18.675214 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:35:18.675249 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:35:18.676310 systemd[1]: Starting containerd.service... Dec 13 14:35:18.679872 systemd[1]: Starting dbus.service... Dec 13 14:35:18.683537 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:35:18.687416 systemd[1]: Starting extend-filesystems.service... Dec 13 14:35:18.690015 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:35:18.691631 systemd[1]: Starting kubelet.service... Dec 13 14:35:18.695326 systemd[1]: Starting motdgen.service... Dec 13 14:35:18.699113 systemd[1]: Started nvidia.service. Dec 13 14:35:18.702936 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:35:18.708394 systemd[1]: Starting sshd-keygen.service... Dec 13 14:35:18.715318 systemd[1]: Starting systemd-logind.service... Dec 13 14:35:18.717700 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:35:18.717792 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:35:18.719226 systemd[1]: Starting update-engine.service... Dec 13 14:35:18.722983 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:35:18.738009 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:35:18.738303 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:35:18.762098 jq[1494]: false Dec 13 14:35:18.762433 jq[1512]: true Dec 13 14:35:18.763072 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:35:18.763384 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:35:18.785265 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:35:18.785542 systemd[1]: Finished motdgen.service. 
Dec 13 14:35:18.800935 extend-filesystems[1495]: Found loop1 Dec 13 14:35:18.803478 extend-filesystems[1495]: Found sda Dec 13 14:35:18.803478 extend-filesystems[1495]: Found sda1 Dec 13 14:35:18.803478 extend-filesystems[1495]: Found sda2 Dec 13 14:35:18.803478 extend-filesystems[1495]: Found sda3 Dec 13 14:35:18.803478 extend-filesystems[1495]: Found usr Dec 13 14:35:18.803478 extend-filesystems[1495]: Found sda4 Dec 13 14:35:18.803478 extend-filesystems[1495]: Found sda6 Dec 13 14:35:18.803478 extend-filesystems[1495]: Found sda7 Dec 13 14:35:18.803478 extend-filesystems[1495]: Found sda9 Dec 13 14:35:18.803478 extend-filesystems[1495]: Checking size of /dev/sda9 Dec 13 14:35:18.831326 jq[1525]: true Dec 13 14:35:18.880579 env[1522]: time="2024-12-13T14:35:18.880481459Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:35:18.890170 extend-filesystems[1495]: Old size kept for /dev/sda9 Dec 13 14:35:18.890170 extend-filesystems[1495]: Found sr0 Dec 13 14:35:18.895596 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:35:18.895945 systemd[1]: Finished extend-filesystems.service. Dec 13 14:35:18.955441 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:35:18.955672 systemd-logind[1510]: New seat seat0. Dec 13 14:35:19.002518 bash[1551]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:35:19.002703 env[1522]: time="2024-12-13T14:35:19.001584023Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:35:19.002703 env[1522]: time="2024-12-13T14:35:19.001709421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:35:18.998804 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:35:19.016840 env[1522]: time="2024-12-13T14:35:19.016781718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:35:19.016840 env[1522]: time="2024-12-13T14:35:19.016836618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:35:19.017211 env[1522]: time="2024-12-13T14:35:19.017178013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:35:19.017300 env[1522]: time="2024-12-13T14:35:19.017214012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:35:19.017300 env[1522]: time="2024-12-13T14:35:19.017232912Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:35:19.017300 env[1522]: time="2024-12-13T14:35:19.017245912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:35:19.017414 env[1522]: time="2024-12-13T14:35:19.017341611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:35:19.017636 env[1522]: time="2024-12-13T14:35:19.017604907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:35:19.017885 env[1522]: time="2024-12-13T14:35:19.017854304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:35:19.018020 env[1522]: time="2024-12-13T14:35:19.017886403Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:35:19.018020 env[1522]: time="2024-12-13T14:35:19.017967402Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:35:19.018020 env[1522]: time="2024-12-13T14:35:19.017985502Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:35:19.029166 dbus-daemon[1492]: [system] SELinux support is enabled Dec 13 14:35:19.029366 systemd[1]: Started dbus.service. Dec 13 14:35:19.034527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:35:19.034557 systemd[1]: Reached target system-config.target. Dec 13 14:35:19.037468 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:35:19.037487 systemd[1]: Reached target user-config.target. Dec 13 14:35:19.045105 systemd[1]: Started systemd-logind.service. Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047138810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047210409Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047225509Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047280008Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047295008Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047309308Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047321808Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047377807Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047390507Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047404407Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047417406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047441706Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047565104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:35:19.054484 env[1522]: time="2024-12-13T14:35:19.047665703Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:35:19.050121 systemd[1]: Started containerd.service. Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048046698Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048073798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048088397Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048146497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048161796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048174896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048186396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048208796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048220896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048234295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048244295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048259295Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048401293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048416793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055052 env[1522]: time="2024-12-13T14:35:19.048439493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055531 env[1522]: time="2024-12-13T14:35:19.048450492Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 14:35:19.055531 env[1522]: time="2024-12-13T14:35:19.048467192Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:35:19.055531 env[1522]: time="2024-12-13T14:35:19.048482792Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:35:19.055531 env[1522]: time="2024-12-13T14:35:19.048498892Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:35:19.055531 env[1522]: time="2024-12-13T14:35:19.048542991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:35:19.055695 env[1522]: time="2024-12-13T14:35:19.048748488Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:35:19.055695 env[1522]: time="2024-12-13T14:35:19.048805688Z" level=info msg="Connect containerd service" Dec 13 14:35:19.055695 env[1522]: time="2024-12-13T14:35:19.048866087Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:35:19.055695 env[1522]: time="2024-12-13T14:35:19.049554778Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:35:19.055695 
env[1522]: time="2024-12-13T14:35:19.049891073Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:35:19.055695 env[1522]: time="2024-12-13T14:35:19.049964572Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:35:19.055695 env[1522]: time="2024-12-13T14:35:19.050026471Z" level=info msg="containerd successfully booted in 0.174013s" Dec 13 14:35:19.086148 env[1522]: time="2024-12-13T14:35:19.065372865Z" level=info msg="Start subscribing containerd event" Dec 13 14:35:19.086148 env[1522]: time="2024-12-13T14:35:19.065441964Z" level=info msg="Start recovering state" Dec 13 14:35:19.086148 env[1522]: time="2024-12-13T14:35:19.065533963Z" level=info msg="Start event monitor" Dec 13 14:35:19.086148 env[1522]: time="2024-12-13T14:35:19.065550363Z" level=info msg="Start snapshots syncer" Dec 13 14:35:19.086148 env[1522]: time="2024-12-13T14:35:19.065569462Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:35:19.086148 env[1522]: time="2024-12-13T14:35:19.065580762Z" level=info msg="Start streaming server" Dec 13 14:35:19.087776 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:35:19.662670 update_engine[1511]: I1213 14:35:19.662125 1511 main.cc:92] Flatcar Update Engine starting Dec 13 14:35:19.708334 systemd[1]: Started update-engine.service. Dec 13 14:35:19.716700 update_engine[1511]: I1213 14:35:19.708369 1511 update_check_scheduler.cc:74] Next update check in 4m0s Dec 13 14:35:19.713702 systemd[1]: Started locksmithd.service. Dec 13 14:35:19.785277 systemd[1]: Started kubelet.service. Dec 13 14:35:19.860285 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:35:19.889264 systemd[1]: Finished sshd-keygen.service. Dec 13 14:35:19.897147 systemd[1]: Starting issuegen.service... Dec 13 14:35:19.901087 systemd[1]: Started waagent.service. Dec 13 14:35:19.913109 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:35:19.913388 systemd[1]: Finished issuegen.service. Dec 13 14:35:19.917776 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:35:19.937313 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:35:19.942339 systemd[1]: Started getty@tty1.service. Dec 13 14:35:19.946794 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:35:19.949816 systemd[1]: Reached target getty.target. Dec 13 14:35:19.952249 systemd[1]: Reached target multi-user.target. Dec 13 14:35:19.958232 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:35:19.971818 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:35:19.972096 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:35:19.982609 systemd[1]: Startup finished in 817ms (firmware) + 27.216s (loader) + 15.260s (kernel) + 23.143s (userspace) = 1min 6.438s. Dec 13 14:35:20.300413 login[1633]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:35:20.302193 login[1637]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:35:20.331244 systemd[1]: Created slice user-500.slice. Dec 13 14:35:20.332707 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:35:20.338446 systemd-logind[1510]: New session 1 of user core. Dec 13 14:35:20.343261 systemd-logind[1510]: New session 2 of user core. Dec 13 14:35:20.374241 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:35:20.376145 systemd[1]: Starting user@500.service... 
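The CRI configuration dump above shows containerd 1.6.16 driving runc through io.containerd.runc.v2 with SystemdCgroup:false and the overlayfs snapshotter. If that option ever needed to be overridden, the setting lives in /etc/containerd/config.toml roughly as sketched below; the value shown is only an example, and whether it is appropriate depends on the host's cgroup setup (this machine reports a cgroupsv1 taint above):

    sudo mkdir -p /etc/containerd
    cat <<'EOF' | sudo tee /etc/containerd/config.toml
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF
    sudo systemctl restart containerd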
Dec 13 14:35:20.383727 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:20.465522 kubelet[1614]: E1213 14:35:20.465455 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:35:20.467146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:35:20.467348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:35:20.579361 systemd[1648]: Queued start job for default target default.target. Dec 13 14:35:20.579673 systemd[1648]: Reached target paths.target. Dec 13 14:35:20.579698 systemd[1648]: Reached target sockets.target. Dec 13 14:35:20.579715 systemd[1648]: Reached target timers.target. Dec 13 14:35:20.579731 systemd[1648]: Reached target basic.target. Dec 13 14:35:20.579883 systemd[1]: Started user@500.service. Dec 13 14:35:20.581109 systemd[1]: Started session-1.scope. Dec 13 14:35:20.581853 systemd[1]: Started session-2.scope. Dec 13 14:35:20.582294 systemd[1648]: Reached target default.target. Dec 13 14:35:20.582517 systemd[1648]: Startup finished in 185ms. Dec 13 14:35:21.153094 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:35:25.445123 waagent[1625]: 2024-12-13T14:35:25.444989Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 14:35:25.459556 waagent[1625]: 2024-12-13T14:35:25.447783Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 14:35:25.459556 waagent[1625]: 2024-12-13T14:35:25.448885Z INFO Daemon Daemon Python: 3.9.16 Dec 13 14:35:25.459556 waagent[1625]: 2024-12-13T14:35:25.450398Z INFO Daemon Daemon Run daemon Dec 13 14:35:25.459556 waagent[1625]: 2024-12-13T14:35:25.451415Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 14:35:25.463621 waagent[1625]: 2024-12-13T14:35:25.463502Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
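waagent's cloud-init probe above fails on Flatcar simply because the cloud-init units do not exist; the agent treats the non-zero exit as "cloud-init is enabled: False" and takes over provisioning itself. A sketch of the same probe, assuming systemctl is on PATH:

```python
#!/usr/bin/env python3
"""Sketch of the probe waagent logs above: ask systemd whether cloud-init
is enabled. On this image the unit is absent, the call exits non-zero, and
the agent concludes it must provision the VM itself."""
import subprocess

def cloud_init_enabled(unit: str = "cloud-init-local.service") -> bool:
    try:
        subprocess.run(["systemctl", "is-enabled", unit],
                       check=True, capture_output=True, text=True)
        return True
    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
        print(f"Unable to get cloud-init enabled status: {exc}")
        return False

if __name__ == "__main__":
    print("cloud-init is enabled:", cloud_init_enabled())
```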
Dec 13 14:35:25.470970 waagent[1625]: 2024-12-13T14:35:25.470851Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:35:25.476032 waagent[1625]: 2024-12-13T14:35:25.475972Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:35:25.478992 waagent[1625]: 2024-12-13T14:35:25.478935Z INFO Daemon Daemon Using waagent for provisioning Dec 13 14:35:25.482348 waagent[1625]: 2024-12-13T14:35:25.482291Z INFO Daemon Daemon Activate resource disk Dec 13 14:35:25.485007 waagent[1625]: 2024-12-13T14:35:25.484953Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 14:35:25.495370 waagent[1625]: 2024-12-13T14:35:25.495306Z INFO Daemon Daemon Found device: None Dec 13 14:35:25.497883 waagent[1625]: 2024-12-13T14:35:25.497822Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 14:35:25.502268 waagent[1625]: 2024-12-13T14:35:25.502208Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 14:35:25.508522 waagent[1625]: 2024-12-13T14:35:25.508461Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:35:25.511675 waagent[1625]: 2024-12-13T14:35:25.511617Z INFO Daemon Daemon Running default provisioning handler Dec 13 14:35:25.520848 waagent[1625]: 2024-12-13T14:35:25.520733Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:35:25.536192 waagent[1625]: 2024-12-13T14:35:25.524202Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:35:25.536192 waagent[1625]: 2024-12-13T14:35:25.525387Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:35:25.536192 waagent[1625]: 2024-12-13T14:35:25.526274Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 14:35:25.596427 waagent[1625]: 2024-12-13T14:35:25.596265Z INFO Daemon Daemon Successfully mounted dvd Dec 13 14:35:25.683993 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 14:35:25.703974 waagent[1625]: 2024-12-13T14:35:25.703795Z INFO Daemon Daemon Detect protocol endpoint Dec 13 14:35:25.706846 waagent[1625]: 2024-12-13T14:35:25.706776Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:35:25.710085 waagent[1625]: 2024-12-13T14:35:25.710027Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 14:35:25.713660 waagent[1625]: 2024-12-13T14:35:25.713602Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 14:35:25.716616 waagent[1625]: 2024-12-13T14:35:25.716549Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 14:35:25.719348 waagent[1625]: 2024-12-13T14:35:25.719292Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 14:35:25.831712 waagent[1625]: 2024-12-13T14:35:25.831641Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 14:35:25.835628 waagent[1625]: 2024-12-13T14:35:25.835577Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 14:35:25.838504 waagent[1625]: 2024-12-13T14:35:25.838442Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 14:35:26.390277 waagent[1625]: 2024-12-13T14:35:26.390133Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 14:35:26.402803 waagent[1625]: 2024-12-13T14:35:26.402730Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 14:35:26.406037 waagent[1625]: 2024-12-13T14:35:26.405974Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 14:35:26.491555 waagent[1625]: 2024-12-13T14:35:26.491432Z INFO Daemon Daemon Found private key matching thumbprint 1D5BD845AA48959EAA0040DF3A79356DD03F4780 Dec 13 14:35:26.502778 waagent[1625]: 2024-12-13T14:35:26.492957Z INFO Daemon Daemon Certificate with thumbprint F5318DFD02863D5EE03D7D9231D5A3A76BF7E3CC has no matching private key. Dec 13 14:35:26.502778 waagent[1625]: 2024-12-13T14:35:26.493991Z INFO Daemon Daemon Fetch goal state completed Dec 13 14:35:26.542394 waagent[1625]: 2024-12-13T14:35:26.542320Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 2a969b70-6a66-47b0-99cc-4aee145cb668 New eTag: 14448441207066590733] Dec 13 14:35:26.552013 waagent[1625]: 2024-12-13T14:35:26.544359Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:35:26.554767 waagent[1625]: 2024-12-13T14:35:26.554711Z INFO Daemon Daemon Starting provisioning Dec 13 14:35:26.561923 waagent[1625]: 2024-12-13T14:35:26.556143Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 14:35:26.561923 waagent[1625]: 2024-12-13T14:35:26.557063Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-19c473d9c1] Dec 13 14:35:26.586021 waagent[1625]: 2024-12-13T14:35:26.585871Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-19c473d9c1] Dec 13 14:35:26.594865 waagent[1625]: 2024-12-13T14:35:26.587661Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 14:35:26.594865 waagent[1625]: 2024-12-13T14:35:26.588929Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 14:35:26.602588 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 14:35:26.602897 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 14:35:26.602986 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 14:35:26.603256 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:35:26.607025 systemd-networkd[1229]: eth0: DHCPv6 lease lost Dec 13 14:35:26.608315 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:35:26.608520 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:35:26.611027 systemd[1]: Starting systemd-networkd.service... 
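The wire-protocol negotiation above ("Fabric preferred wire protocol version:2015-04-05", "Wire protocol version:2012-11-30") starts from a plain HTTP GET against the WireServer's ?comp=versions endpoint, the same URL coreos-metadata fetches later in this log. A minimal sketch, only meaningful from inside an Azure VM:

```python
#!/usr/bin/env python3
"""Minimal sketch of the first step of the wire-protocol negotiation logged
above: GET http://168.63.129.16/?comp=versions and print the version list
the fabric returns."""
import urllib.request

WIRESERVER = "168.63.129.16"   # fixed wire server address from the log

def fetch_versions(timeout: float = 5.0) -> str:
    url = f"http://{WIRESERVER}/?comp=versions"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch_versions())
```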
Dec 13 14:35:26.648262 systemd-networkd[1691]: enP13493s1: Link UP Dec 13 14:35:26.648272 systemd-networkd[1691]: enP13493s1: Gained carrier Dec 13 14:35:26.649549 systemd-networkd[1691]: eth0: Link UP Dec 13 14:35:26.649558 systemd-networkd[1691]: eth0: Gained carrier Dec 13 14:35:26.650072 systemd-networkd[1691]: lo: Link UP Dec 13 14:35:26.650081 systemd-networkd[1691]: lo: Gained carrier Dec 13 14:35:26.650385 systemd-networkd[1691]: eth0: Gained IPv6LL Dec 13 14:35:26.650640 systemd-networkd[1691]: Enumeration completed Dec 13 14:35:26.656091 waagent[1625]: 2024-12-13T14:35:26.652036Z INFO Daemon Daemon Create user account if not exists Dec 13 14:35:26.656091 waagent[1625]: 2024-12-13T14:35:26.653671Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 14:35:26.656091 waagent[1625]: 2024-12-13T14:35:26.654543Z INFO Daemon Daemon Configure sudoer Dec 13 14:35:26.650751 systemd[1]: Started systemd-networkd.service. Dec 13 14:35:26.656461 waagent[1625]: 2024-12-13T14:35:26.656408Z INFO Daemon Daemon Configure sshd Dec 13 14:35:26.657576 waagent[1625]: 2024-12-13T14:35:26.657529Z INFO Daemon Daemon Deploy ssh public key. Dec 13 14:35:26.664364 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:35:26.670715 systemd-networkd[1691]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:35:26.733996 systemd-networkd[1691]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:35:26.738358 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:35:27.795323 waagent[1625]: 2024-12-13T14:35:27.795220Z INFO Daemon Daemon Provisioning complete Dec 13 14:35:27.812730 waagent[1625]: 2024-12-13T14:35:27.812636Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 14:35:27.816390 waagent[1625]: 2024-12-13T14:35:27.816306Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 14:35:27.822287 waagent[1625]: 2024-12-13T14:35:27.822212Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 14:35:28.090532 waagent[1701]: 2024-12-13T14:35:28.090363Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 14:35:28.091277 waagent[1701]: 2024-12-13T14:35:28.091213Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:35:28.091416 waagent[1701]: 2024-12-13T14:35:28.091364Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:35:28.102325 waagent[1701]: 2024-12-13T14:35:28.102251Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 14:35:28.102480 waagent[1701]: 2024-12-13T14:35:28.102430Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 14:35:28.163546 waagent[1701]: 2024-12-13T14:35:28.163416Z INFO ExtHandler ExtHandler Found private key matching thumbprint 1D5BD845AA48959EAA0040DF3A79356DD03F4780 Dec 13 14:35:28.163779 waagent[1701]: 2024-12-13T14:35:28.163718Z INFO ExtHandler ExtHandler Certificate with thumbprint F5318DFD02863D5EE03D7D9231D5A3A76BF7E3CC has no matching private key. 
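The thumbprints the daemon matches above (1D5BD845... has a private key, F5318DFD... does not) are, as far as I can tell, the uppercase SHA-1 hex digest of each DER-encoded goal-state certificate. A sketch that computes the same value from a PEM file; the path is whatever certificate you point it at, for example one of the files waagent drops under /var/lib/waagent:

```python
#!/usr/bin/env python3
"""Sketch: compute the kind of certificate thumbprint waagent logs above.
Assumption: the thumbprint is the uppercase SHA-1 digest of the DER-encoded
certificate, which matches Azure's usual convention."""
import hashlib
import ssl
import sys

def thumbprint(pem_path: str) -> str:
    with open(pem_path) as f:
        der = ssl.PEM_cert_to_DER_cert(f.read())
    return hashlib.sha1(der).hexdigest().upper()

if __name__ == "__main__":
    print(thumbprint(sys.argv[1]))   # e.g. /var/lib/waagent/<name>.crt
```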
Dec 13 14:35:28.164046 waagent[1701]: 2024-12-13T14:35:28.163992Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 14:35:28.177861 waagent[1701]: 2024-12-13T14:35:28.177798Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 9624653a-2476-4cae-8003-575e1200ef35 New eTag: 14448441207066590733] Dec 13 14:35:28.178390 waagent[1701]: 2024-12-13T14:35:28.178332Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:35:28.233014 waagent[1701]: 2024-12-13T14:35:28.232844Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:35:28.255638 waagent[1701]: 2024-12-13T14:35:28.255544Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1701 Dec 13 14:35:28.259030 waagent[1701]: 2024-12-13T14:35:28.258963Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:35:28.260266 waagent[1701]: 2024-12-13T14:35:28.260209Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:35:28.379725 waagent[1701]: 2024-12-13T14:35:28.379588Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:35:28.380163 waagent[1701]: 2024-12-13T14:35:28.380092Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:35:28.388759 waagent[1701]: 2024-12-13T14:35:28.388703Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:35:28.389247 waagent[1701]: 2024-12-13T14:35:28.389188Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:35:28.390322 waagent[1701]: 2024-12-13T14:35:28.390259Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 14:35:28.391565 waagent[1701]: 2024-12-13T14:35:28.391508Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:35:28.392199 waagent[1701]: 2024-12-13T14:35:28.392141Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:35:28.392720 waagent[1701]: 2024-12-13T14:35:28.392665Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:35:28.393066 waagent[1701]: 2024-12-13T14:35:28.393016Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:35:28.393261 waagent[1701]: 2024-12-13T14:35:28.393207Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:35:28.393455 waagent[1701]: 2024-12-13T14:35:28.393407Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:35:28.393733 waagent[1701]: 2024-12-13T14:35:28.393682Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:35:28.394494 waagent[1701]: 2024-12-13T14:35:28.394441Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
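The "[Errno 30] Read-only file system" error above is a Flatcar artifact: /lib/systemd/system sits on the read-only /usr tree, so waagent cannot install its waagent-network-setup.service unit there (the writable drop-in location would be /etc/systemd/system). A small sketch that just reports whether a given path is on a read-only mount, assuming statvfs flags are a sufficient check:

```python
#!/usr/bin/env python3
"""Sketch explaining the '[Errno 30] Read-only file system' error above:
report whether a path lives on a read-only mount."""
import os

def on_readonly_fs(path: str) -> bool:
    st = os.statvfs(path)
    return bool(st.f_flag & os.ST_RDONLY)

if __name__ == "__main__":
    for p in ("/lib/systemd/system", "/etc/systemd/system", "/var/lib/waagent"):
        print(p, "read-only" if on_readonly_fs(p) else "writable")
```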
Dec 13 14:35:28.394751 waagent[1701]: 2024-12-13T14:35:28.394701Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:35:28.395560 waagent[1701]: 2024-12-13T14:35:28.395503Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:35:28.395831 waagent[1701]: 2024-12-13T14:35:28.395777Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:35:28.396324 waagent[1701]: 2024-12-13T14:35:28.396271Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:35:28.396454 waagent[1701]: 2024-12-13T14:35:28.396399Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:35:28.396454 waagent[1701]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:35:28.396454 waagent[1701]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:35:28.396454 waagent[1701]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:35:28.396454 waagent[1701]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:35:28.396454 waagent[1701]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:35:28.396454 waagent[1701]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:35:28.399122 waagent[1701]: 2024-12-13T14:35:28.398914Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:35:28.399726 waagent[1701]: 2024-12-13T14:35:28.399664Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:35:28.399870 waagent[1701]: 2024-12-13T14:35:28.399823Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:35:28.414079 waagent[1701]: 2024-12-13T14:35:28.414016Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 14:35:28.414847 waagent[1701]: 2024-12-13T14:35:28.414795Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:35:28.415661 waagent[1701]: 2024-12-13T14:35:28.415606Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Dec 13 14:35:28.448101 waagent[1701]: 2024-12-13T14:35:28.447991Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1691' Dec 13 14:35:28.468993 waagent[1701]: 2024-12-13T14:35:28.468923Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
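The routing table MonitorHandler dumps above comes straight from /proc/net/route, where addresses are little-endian hex. Decoded, 0108C80A is 10.200.8.1 (the DHCP gateway acquired earlier in this log), 10813FA8 is 168.63.129.16 (the WireServer), and FEA9FEA9 is 169.254.169.254 (the instance metadata service). A sketch of the decoding:

```python
#!/usr/bin/env python3
"""Sketch: decode the little-endian hex addresses in the /proc/net/route
dump above into dotted-quad form."""
import socket
import struct

def hex_to_ip(h: str) -> str:
    # /proc/net/route stores IPv4 addresses as host-order (little-endian) hex
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

if __name__ == "__main__":
    for h in ("00000000", "0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
        print(h, "->", hex_to_ip(h))
    # Expected: 0.0.0.0, 10.200.8.1, 10.200.8.0, 168.63.129.16, 169.254.169.254
```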
Dec 13 14:35:28.544069 waagent[1701]: 2024-12-13T14:35:28.543943Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:35:28.544069 waagent[1701]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:35:28.544069 waagent[1701]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:35:28.544069 waagent[1701]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:c5:7f brd ff:ff:ff:ff:ff:ff Dec 13 14:35:28.544069 waagent[1701]: 3: enP13493s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:c5:7f brd ff:ff:ff:ff:ff:ff\ altname enP13493p0s2 Dec 13 14:35:28.544069 waagent[1701]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:35:28.544069 waagent[1701]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:35:28.544069 waagent[1701]: 2: eth0 inet 10.200.8.11/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:35:28.544069 waagent[1701]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:35:28.544069 waagent[1701]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:35:28.544069 waagent[1701]: 2: eth0 inet6 fe80::7e1e:52ff:fe1f:c57f/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:35:28.703494 waagent[1701]: 2024-12-13T14:35:28.703379Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 14:35:28.825952 waagent[1625]: 2024-12-13T14:35:28.825767Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 14:35:28.832114 waagent[1625]: 2024-12-13T14:35:28.832054Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 14:35:29.892688 waagent[1729]: 2024-12-13T14:35:29.892575Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 14:35:29.893428 waagent[1729]: 2024-12-13T14:35:29.893363Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 14:35:29.893570 waagent[1729]: 2024-12-13T14:35:29.893518Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 14:35:29.893713 waagent[1729]: 2024-12-13T14:35:29.893667Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 13 14:35:29.903367 waagent[1729]: 2024-12-13T14:35:29.903260Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:35:29.903759 waagent[1729]: 2024-12-13T14:35:29.903703Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:35:29.903942 waagent[1729]: 2024-12-13T14:35:29.903871Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:35:29.915776 waagent[1729]: 2024-12-13T14:35:29.915694Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 14:35:29.924323 waagent[1729]: 2024-12-13T14:35:29.924261Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 14:35:29.925281 waagent[1729]: 2024-12-13T14:35:29.925222Z INFO ExtHandler Dec 13 14:35:29.925429 waagent[1729]: 2024-12-13T14:35:29.925379Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: dfa55371-9d3b-4c50-993f-e38faf733f2a eTag: 14448441207066590733 source: 
Fabric] Dec 13 14:35:29.926136 waagent[1729]: 2024-12-13T14:35:29.926079Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 14:35:29.927203 waagent[1729]: 2024-12-13T14:35:29.927144Z INFO ExtHandler Dec 13 14:35:29.927335 waagent[1729]: 2024-12-13T14:35:29.927288Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 14:35:29.934121 waagent[1729]: 2024-12-13T14:35:29.934070Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 14:35:29.934549 waagent[1729]: 2024-12-13T14:35:29.934502Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:35:29.955289 waagent[1729]: 2024-12-13T14:35:29.955216Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Dec 13 14:35:30.024934 waagent[1729]: 2024-12-13T14:35:30.024796Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F5318DFD02863D5EE03D7D9231D5A3A76BF7E3CC', 'hasPrivateKey': False} Dec 13 14:35:30.026008 waagent[1729]: 2024-12-13T14:35:30.025943Z INFO ExtHandler Downloaded certificate {'thumbprint': '1D5BD845AA48959EAA0040DF3A79356DD03F4780', 'hasPrivateKey': True} Dec 13 14:35:30.026993 waagent[1729]: 2024-12-13T14:35:30.026935Z INFO ExtHandler Fetch goal state completed Dec 13 14:35:30.047078 waagent[1729]: 2024-12-13T14:35:30.046966Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 14:35:30.059106 waagent[1729]: 2024-12-13T14:35:30.058999Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1729 Dec 13 14:35:30.062282 waagent[1729]: 2024-12-13T14:35:30.062205Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:35:30.063315 waagent[1729]: 2024-12-13T14:35:30.063252Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 14:35:30.063607 waagent[1729]: 2024-12-13T14:35:30.063554Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 14:35:30.065601 waagent[1729]: 2024-12-13T14:35:30.065537Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:35:30.070855 waagent[1729]: 2024-12-13T14:35:30.070796Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:35:30.071238 waagent[1729]: 2024-12-13T14:35:30.071181Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:35:30.079474 waagent[1729]: 2024-12-13T14:35:30.079420Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:35:30.079972 waagent[1729]: 2024-12-13T14:35:30.079883Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:35:30.086003 waagent[1729]: 2024-12-13T14:35:30.085892Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 14:35:30.087036 waagent[1729]: 2024-12-13T14:35:30.086971Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 14:35:30.088497 waagent[1729]: 2024-12-13T14:35:30.088439Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:35:30.088950 waagent[1729]: 2024-12-13T14:35:30.088881Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:35:30.089489 waagent[1729]: 2024-12-13T14:35:30.089439Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:35:30.089654 waagent[1729]: 2024-12-13T14:35:30.089607Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:35:30.090308 waagent[1729]: 2024-12-13T14:35:30.090255Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:35:30.091387 waagent[1729]: 2024-12-13T14:35:30.091274Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:35:30.091524 waagent[1729]: 2024-12-13T14:35:30.091473Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:35:30.091636 waagent[1729]: 2024-12-13T14:35:30.091568Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:35:30.091699 waagent[1729]: 2024-12-13T14:35:30.091653Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:35:30.091699 waagent[1729]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:35:30.091699 waagent[1729]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:35:30.091699 waagent[1729]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:35:30.091699 waagent[1729]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:35:30.091699 waagent[1729]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:35:30.091699 waagent[1729]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:35:30.094305 waagent[1729]: 2024-12-13T14:35:30.094160Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:35:30.095292 waagent[1729]: 2024-12-13T14:35:30.095227Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:35:30.096350 waagent[1729]: 2024-12-13T14:35:30.096293Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:35:30.098366 waagent[1729]: 2024-12-13T14:35:30.098248Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:35:30.098977 waagent[1729]: 2024-12-13T14:35:30.098916Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:35:30.099861 waagent[1729]: 2024-12-13T14:35:30.099799Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 14:35:30.102382 waagent[1729]: 2024-12-13T14:35:30.102203Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:35:30.108515 waagent[1729]: 2024-12-13T14:35:30.108434Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:35:30.108515 waagent[1729]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:35:30.108515 waagent[1729]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:35:30.108515 waagent[1729]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:c5:7f brd ff:ff:ff:ff:ff:ff Dec 13 14:35:30.108515 waagent[1729]: 3: enP13493s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1f:c5:7f brd ff:ff:ff:ff:ff:ff\ altname enP13493p0s2 Dec 13 14:35:30.108515 waagent[1729]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:35:30.108515 waagent[1729]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:35:30.108515 waagent[1729]: 2: eth0 inet 10.200.8.11/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:35:30.108515 waagent[1729]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:35:30.108515 waagent[1729]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:35:30.108515 waagent[1729]: 2: eth0 inet6 fe80::7e1e:52ff:fe1f:c57f/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:35:30.128414 waagent[1729]: 2024-12-13T14:35:30.128332Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:35:30.162506 waagent[1729]: 2024-12-13T14:35:30.162385Z INFO ExtHandler ExtHandler Dec 13 14:35:30.164344 waagent[1729]: 2024-12-13T14:35:30.164280Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ba007f60-22a1-4b09-9d15-f25589181bba correlation 4866314c-3c05-435e-a715-0000d0f8ccb5 created: 2024-12-13T14:34:02.944847Z] Dec 13 14:35:30.169955 waagent[1729]: 2024-12-13T14:35:30.169881Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 14:35:30.178470 waagent[1729]: 2024-12-13T14:35:30.178396Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 16 ms] Dec 13 14:35:30.204609 waagent[1729]: 2024-12-13T14:35:30.204543Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
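The interface snapshot above is just the output of the three ip(8) invocations MonitorHandler logs verbatim. A sketch that runs the same commands and prints what they return:

```python
#!/usr/bin/env python3
"""Sketch of the interface snapshot MonitorHandler prints above: run the
same ip(8) invocations it logs and show their output."""
import subprocess

COMMANDS = [
    ["ip", "-a", "-o", "link"],
    ["ip", "-4", "-a", "-o", "address"],
    ["ip", "-6", "-a", "-o", "address"],
]

if __name__ == "__main__":
    for cmd in COMMANDS:
        print("Executing", cmd, ":")
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```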
Dec 13 14:35:30.221630 waagent[1729]: 2024-12-13T14:35:30.221223Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9FFE07F8-B07A-49C8-97F9-8B6316543310;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:35:30.281170 waagent[1729]: 2024-12-13T14:35:30.281052Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 13 14:35:30.281170 waagent[1729]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:35:30.281170 waagent[1729]: pkts bytes target prot opt in out source destination Dec 13 14:35:30.281170 waagent[1729]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:35:30.281170 waagent[1729]: pkts bytes target prot opt in out source destination Dec 13 14:35:30.281170 waagent[1729]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:35:30.281170 waagent[1729]: pkts bytes target prot opt in out source destination Dec 13 14:35:30.281170 waagent[1729]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:35:30.281170 waagent[1729]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:35:30.281170 waagent[1729]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:35:30.288515 waagent[1729]: 2024-12-13T14:35:30.288408Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:35:30.288515 waagent[1729]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:35:30.288515 waagent[1729]: pkts bytes target prot opt in out source destination Dec 13 14:35:30.288515 waagent[1729]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:35:30.288515 waagent[1729]: pkts bytes target prot opt in out source destination Dec 13 14:35:30.288515 waagent[1729]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:35:30.288515 waagent[1729]: pkts bytes target prot opt in out source destination Dec 13 14:35:30.288515 waagent[1729]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:35:30.288515 waagent[1729]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:35:30.288515 waagent[1729]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:35:30.289105 waagent[1729]: 2024-12-13T14:35:30.289051Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 14:35:30.712799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:35:30.713166 systemd[1]: Stopped kubelet.service. Dec 13 14:35:30.715273 systemd[1]: Starting kubelet.service... Dec 13 14:35:30.801311 systemd[1]: Started kubelet.service. Dec 13 14:35:31.328846 kubelet[1791]: E1213 14:35:31.328790 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:35:31.332323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:35:31.332528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:35:41.462664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:35:41.463008 systemd[1]: Stopped kubelet.service. Dec 13 14:35:41.465264 systemd[1]: Starting kubelet.service... Dec 13 14:35:41.783538 systemd[1]: Started kubelet.service. 
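The three OUTPUT rules EnvHandler prints above allow DNS to the WireServer, allow root-owned (UID 0) traffic to it, and drop any other new connection to 168.63.129.16. A hedged reconstruction of equivalent iptables invocations follows; the exact options waagent passes may differ:

```python
#!/usr/bin/env python3
"""Hedged reconstruction of the three OUTPUT rules EnvHandler prints above.
Printing is the default; pass --apply (as root) to actually install them."""
import subprocess
import sys

WIRESERVER = "168.63.129.16"
RULES = [
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "--dport", "53", "-j", "ACCEPT"],
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["iptables", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

if __name__ == "__main__":
    apply_rules = "--apply" in sys.argv
    for rule in RULES:
        print(" ".join(rule))
        if apply_rules:
            subprocess.run(rule, check=True)
```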
Dec 13 14:35:42.075588 kubelet[1810]: E1213 14:35:42.075469 1810 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:35:42.077524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:35:42.077704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:35:52.212768 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:35:52.213109 systemd[1]: Stopped kubelet.service. Dec 13 14:35:52.215266 systemd[1]: Starting kubelet.service... Dec 13 14:35:52.558860 systemd[1]: Started kubelet.service. Dec 13 14:35:52.863394 kubelet[1825]: E1213 14:35:52.863279 1825 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:35:52.865184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:35:52.865384 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:35:58.815573 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 13 14:36:02.962779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:36:02.963122 systemd[1]: Stopped kubelet.service. Dec 13 14:36:02.965284 systemd[1]: Starting kubelet.service... Dec 13 14:36:03.174467 systemd[1]: Started kubelet.service. Dec 13 14:36:03.621494 kubelet[1840]: E1213 14:36:03.621439 1840 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:36:03.623346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:36:03.623480 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:36:04.617543 update_engine[1511]: I1213 14:36:04.617449 1511 update_attempter.cc:509] Updating boot flags... Dec 13 14:36:10.617778 systemd[1]: Created slice system-sshd.slice. Dec 13 14:36:10.619590 systemd[1]: Started sshd@0-10.200.8.11:22-10.200.16.10:53740.service. Dec 13 14:36:11.582864 sshd[1886]: Accepted publickey for core from 10.200.16.10 port 53740 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:11.584613 sshd[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:11.589493 systemd-logind[1510]: New session 3 of user core. Dec 13 14:36:11.590131 systemd[1]: Started session-3.scope. Dec 13 14:36:12.193849 systemd[1]: Started sshd@1-10.200.8.11:22-10.200.16.10:53746.service. Dec 13 14:36:12.906015 sshd[1894]: Accepted publickey for core from 10.200.16.10 port 53746 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:12.907676 sshd[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:12.913103 systemd[1]: Started session-4.scope. Dec 13 14:36:12.913809 systemd-logind[1510]: New session 4 of user core. 
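The kubelet crash loop above (restart counters 1 through 4 and counting) has a single cause repeated verbatim each time: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd schedules another restart until something writes the file. A trivial sketch of the same pre-flight check, with the path taken from the error message:

```python
#!/usr/bin/env python3
"""Tiny sketch mirroring the crash loop above: exit non-zero while the
kubelet config file is still missing."""
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"   # path from the error message

if __name__ == "__main__":
    if not os.path.exists(KUBELET_CONFIG):
        print(f"open {KUBELET_CONFIG}: no such file or directory", file=sys.stderr)
        sys.exit(1)   # same effect as kubelet's status=1/FAILURE above
    print("kubelet config present")
```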
Dec 13 14:36:13.408110 sshd[1894]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:13.411040 systemd[1]: sshd@1-10.200.8.11:22-10.200.16.10:53746.service: Deactivated successfully. Dec 13 14:36:13.412113 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:36:13.412204 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:36:13.413552 systemd-logind[1510]: Removed session 4. Dec 13 14:36:13.525379 systemd[1]: Started sshd@2-10.200.8.11:22-10.200.16.10:53750.service. Dec 13 14:36:13.712815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:36:13.713128 systemd[1]: Stopped kubelet.service. Dec 13 14:36:13.715438 systemd[1]: Starting kubelet.service... Dec 13 14:36:14.050932 systemd[1]: Started kubelet.service. Dec 13 14:36:14.099414 kubelet[1910]: E1213 14:36:14.099355 1910 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:36:14.101082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:36:14.380609 sshd[1901]: Accepted publickey for core from 10.200.16.10 port 53750 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:14.101292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:36:14.381221 sshd[1901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:14.386986 systemd[1]: Started session-5.scope. Dec 13 14:36:14.387299 systemd-logind[1510]: New session 5 of user core. Dec 13 14:36:14.780147 sshd[1901]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:14.783477 systemd[1]: sshd@2-10.200.8.11:22-10.200.16.10:53750.service: Deactivated successfully. Dec 13 14:36:14.784980 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:36:14.785015 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:36:14.786176 systemd-logind[1510]: Removed session 5. Dec 13 14:36:14.897336 systemd[1]: Started sshd@3-10.200.8.11:22-10.200.16.10:53764.service. Dec 13 14:36:15.617594 sshd[1923]: Accepted publickey for core from 10.200.16.10 port 53764 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:15.619268 sshd[1923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:15.624389 systemd[1]: Started session-6.scope. Dec 13 14:36:15.624634 systemd-logind[1510]: New session 6 of user core. Dec 13 14:36:16.126406 sshd[1923]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:16.129723 systemd[1]: sshd@3-10.200.8.11:22-10.200.16.10:53764.service: Deactivated successfully. Dec 13 14:36:16.130945 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:36:16.131874 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:36:16.133077 systemd-logind[1510]: Removed session 6. Dec 13 14:36:16.242952 systemd[1]: Started sshd@4-10.200.8.11:22-10.200.16.10:53780.service. 
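The "RSA SHA256:VL8L..." string in the sshd lines above and below is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch that reproduces the format from an authorized_keys-style line; the file path is only an example:

```python
#!/usr/bin/env python3
"""Sketch: reproduce the 'SHA256:...' fingerprint format sshd logs above."""
import base64
import hashlib
import sys

def fingerprint(authorized_keys_line: str) -> str:
    blob_b64 = authorized_keys_line.split()[1]   # "ssh-rsa AAAA... comment"
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open(sys.argv[1]) as f:                 # e.g. ~/.ssh/id_rsa.pub
        print(fingerprint(f.read().strip()))
```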
Dec 13 14:36:16.955153 sshd[1930]: Accepted publickey for core from 10.200.16.10 port 53780 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:36:16.956776 sshd[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:16.962107 systemd[1]: Started session-7.scope. Dec 13 14:36:16.962390 systemd-logind[1510]: New session 7 of user core. Dec 13 14:36:17.609033 sudo[1934]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:36:17.609405 sudo[1934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:36:17.624630 systemd[1]: Starting coreos-metadata.service... Dec 13 14:36:17.722535 coreos-metadata[1938]: Dec 13 14:36:17.722 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:36:17.725024 coreos-metadata[1938]: Dec 13 14:36:17.724 INFO Fetch successful Dec 13 14:36:17.725301 coreos-metadata[1938]: Dec 13 14:36:17.725 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 14:36:17.726821 coreos-metadata[1938]: Dec 13 14:36:17.726 INFO Fetch successful Dec 13 14:36:17.727238 coreos-metadata[1938]: Dec 13 14:36:17.727 INFO Fetching http://168.63.129.16/machine/bd759f35-7c96-4a30-adb2-6a9417953d64/f67f077b%2D8196%2D4f5d%2Da743%2D995df4f80695.%5Fci%2D3510.3.6%2Da%2D19c473d9c1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 14:36:17.728771 coreos-metadata[1938]: Dec 13 14:36:17.728 INFO Fetch successful Dec 13 14:36:17.761649 coreos-metadata[1938]: Dec 13 14:36:17.761 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:36:17.774042 coreos-metadata[1938]: Dec 13 14:36:17.773 INFO Fetch successful Dec 13 14:36:17.786857 systemd[1]: Finished coreos-metadata.service. Dec 13 14:36:21.438128 systemd[1]: Stopped kubelet.service. Dec 13 14:36:21.441292 systemd[1]: Starting kubelet.service... Dec 13 14:36:21.471623 systemd[1]: Reloading. Dec 13 14:36:21.564517 /usr/lib/systemd/system-generators/torcx-generator[2000]: time="2024-12-13T14:36:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:36:21.579875 /usr/lib/systemd/system-generators/torcx-generator[2000]: time="2024-12-13T14:36:21Z" level=info msg="torcx already run" Dec 13 14:36:21.686891 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:36:21.687109 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:36:21.706155 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:36:21.797175 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:36:21.797278 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:36:21.797628 systemd[1]: Stopped kubelet.service. Dec 13 14:36:21.801189 systemd[1]: Starting kubelet.service... Dec 13 14:36:22.053032 systemd[1]: Started kubelet.service. 
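The last fetch coreos-metadata logs above goes to the Azure Instance Metadata Service for the VM size. The URL and api-version are copied from the log; note that IMDS requires a "Metadata: true" request header, which the log itself does not show. A sketch, only usable from inside an Azure VM:

```python
#!/usr/bin/env python3
"""Sketch of the last fetch coreos-metadata logs above: query IMDS for the
VM size. URL/api-version from the log; the Metadata header is an IMDS
requirement not visible in the log."""
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

def vm_size(timeout: float = 5.0) -> str:
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(vm_size())
```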
Dec 13 14:36:22.101422 kubelet[2081]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:36:22.101422 kubelet[2081]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:36:22.101422 kubelet[2081]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:36:22.101973 kubelet[2081]: I1213 14:36:22.101477 2081 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:36:22.615834 kubelet[2081]: I1213 14:36:22.615802 2081 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:36:22.616008 kubelet[2081]: I1213 14:36:22.615854 2081 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:36:22.616147 kubelet[2081]: I1213 14:36:22.616127 2081 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:36:22.734738 kubelet[2081]: I1213 14:36:22.734699 2081 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:36:22.765171 kubelet[2081]: I1213 14:36:22.765127 2081 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:36:22.765623 kubelet[2081]: I1213 14:36:22.765599 2081 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:36:22.765868 kubelet[2081]: I1213 14:36:22.765845 2081 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:36:22.766655 kubelet[2081]: I1213 14:36:22.766635 2081 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:36:22.766733 kubelet[2081]: I1213 
14:36:22.766664 2081 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:36:22.766803 kubelet[2081]: I1213 14:36:22.766777 2081 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:36:22.766898 kubelet[2081]: I1213 14:36:22.766882 2081 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:36:22.766976 kubelet[2081]: I1213 14:36:22.766928 2081 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:36:22.766976 kubelet[2081]: I1213 14:36:22.766965 2081 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:36:22.767050 kubelet[2081]: I1213 14:36:22.766984 2081 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:36:22.767625 kubelet[2081]: E1213 14:36:22.767606 2081 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:22.767785 kubelet[2081]: E1213 14:36:22.767771 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:22.768161 kubelet[2081]: I1213 14:36:22.768145 2081 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:36:22.771618 kubelet[2081]: I1213 14:36:22.771599 2081 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:36:22.771784 kubelet[2081]: W1213 14:36:22.771754 2081 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:36:22.773460 kubelet[2081]: I1213 14:36:22.773440 2081 server.go:1256] "Started kubelet" Dec 13 14:36:22.774345 kubelet[2081]: W1213 14:36:22.774327 2081 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:36:22.774462 kubelet[2081]: E1213 14:36:22.774452 2081 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 14:36:22.774620 kubelet[2081]: W1213 14:36:22.774606 2081 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.200.8.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:36:22.774730 kubelet[2081]: E1213 14:36:22.774718 2081 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:36:22.774847 kubelet[2081]: I1213 14:36:22.774824 2081 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:36:22.782814 kubelet[2081]: I1213 14:36:22.782797 2081 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:36:22.783135 kubelet[2081]: I1213 14:36:22.783123 2081 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:36:22.784143 kubelet[2081]: I1213 14:36:22.784128 2081 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:36:22.787023 kernel: SELinux: Context 
system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:36:22.787766 kubelet[2081]: I1213 14:36:22.787746 2081 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:36:22.791485 kubelet[2081]: I1213 14:36:22.791465 2081 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:36:22.791863 kubelet[2081]: I1213 14:36:22.791840 2081 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:36:22.791959 kubelet[2081]: I1213 14:36:22.791926 2081 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:36:22.792844 kubelet[2081]: I1213 14:36:22.792825 2081 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:36:22.792951 kubelet[2081]: I1213 14:36:22.792930 2081 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:36:22.795226 kubelet[2081]: E1213 14:36:22.795209 2081 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:36:22.796005 kubelet[2081]: E1213 14:36:22.795991 2081 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.11\" not found" node="10.200.8.11" Dec 13 14:36:22.796697 kubelet[2081]: I1213 14:36:22.796683 2081 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:36:22.817693 kubelet[2081]: I1213 14:36:22.817667 2081 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:36:22.817818 kubelet[2081]: I1213 14:36:22.817801 2081 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:36:22.817889 kubelet[2081]: I1213 14:36:22.817824 2081 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:36:22.821863 kubelet[2081]: I1213 14:36:22.821841 2081 policy_none.go:49] "None policy: Start" Dec 13 14:36:22.822380 kubelet[2081]: I1213 14:36:22.822356 2081 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:36:22.822467 kubelet[2081]: I1213 14:36:22.822390 2081 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:36:22.828648 kubelet[2081]: I1213 14:36:22.828625 2081 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:36:22.828859 kubelet[2081]: I1213 14:36:22.828839 2081 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:36:22.833393 kubelet[2081]: E1213 14:36:22.833378 2081 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.11\" not found" Dec 13 14:36:22.862939 kubelet[2081]: I1213 14:36:22.862921 2081 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:36:22.864750 kubelet[2081]: I1213 14:36:22.864734 2081 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:36:22.864897 kubelet[2081]: I1213 14:36:22.864886 2081 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:36:22.865193 kubelet[2081]: I1213 14:36:22.865180 2081 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:36:22.865379 kubelet[2081]: E1213 14:36:22.865369 2081 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:36:22.893978 kubelet[2081]: I1213 14:36:22.892706 2081 kubelet_node_status.go:73] "Attempting to register node" node="10.200.8.11" Dec 13 14:36:22.900478 kubelet[2081]: I1213 14:36:22.900445 2081 kubelet_node_status.go:76] "Successfully registered node" node="10.200.8.11" Dec 13 14:36:22.912775 kubelet[2081]: E1213 14:36:22.912749 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.013897 kubelet[2081]: E1213 14:36:23.013851 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.114419 kubelet[2081]: E1213 14:36:23.114360 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.214543 kubelet[2081]: E1213 14:36:23.214511 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.315332 kubelet[2081]: E1213 14:36:23.315286 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.416014 kubelet[2081]: E1213 14:36:23.415963 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.516852 kubelet[2081]: E1213 14:36:23.516724 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.573789 sudo[1934]: pam_unix(sudo:session): session closed for user root Dec 13 14:36:23.617304 kubelet[2081]: E1213 14:36:23.617254 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.686610 kubelet[2081]: I1213 14:36:23.686549 2081 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:36:23.686936 kubelet[2081]: W1213 14:36:23.686866 2081 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:36:23.687054 kubelet[2081]: W1213 14:36:23.686958 2081 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:36:23.703526 sshd[1930]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:23.707054 systemd[1]: sshd@4-10.200.8.11:22-10.200.16.10:53780.service: Deactivated successfully. Dec 13 14:36:23.708298 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:36:23.709792 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:36:23.710828 systemd-logind[1510]: Removed session 7. 
Dec 13 14:36:23.718048 kubelet[2081]: E1213 14:36:23.718017 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.768700 kubelet[2081]: E1213 14:36:23.768558 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:23.818197 kubelet[2081]: E1213 14:36:23.818153 2081 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.200.8.11\" not found" Dec 13 14:36:23.919926 kubelet[2081]: I1213 14:36:23.919876 2081 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:36:23.920403 env[1522]: time="2024-12-13T14:36:23.920355251Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:36:23.920947 kubelet[2081]: I1213 14:36:23.920704 2081 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:36:24.768794 kubelet[2081]: I1213 14:36:24.768730 2081 apiserver.go:52] "Watching apiserver" Dec 13 14:36:24.769386 kubelet[2081]: E1213 14:36:24.769067 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:24.773007 kubelet[2081]: I1213 14:36:24.772979 2081 topology_manager.go:215] "Topology Admit Handler" podUID="fd79e62d-d514-44d4-b031-8ec069538c4f" podNamespace="kube-system" podName="kube-proxy-55mcm" Dec 13 14:36:24.773138 kubelet[2081]: I1213 14:36:24.773074 2081 topology_manager.go:215] "Topology Admit Handler" podUID="529216ce-4852-4f55-9c72-a8133f06a8f4" podNamespace="kube-system" podName="cilium-p9v5z" Dec 13 14:36:24.792829 kubelet[2081]: I1213 14:36:24.792802 2081 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:36:24.805168 kubelet[2081]: I1213 14:36:24.805143 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd79e62d-d514-44d4-b031-8ec069538c4f-lib-modules\") pod \"kube-proxy-55mcm\" (UID: \"fd79e62d-d514-44d4-b031-8ec069538c4f\") " pod="kube-system/kube-proxy-55mcm" Dec 13 14:36:24.805306 kubelet[2081]: I1213 14:36:24.805190 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-config-path\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805306 kubelet[2081]: I1213 14:36:24.805220 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-host-proc-sys-net\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805306 kubelet[2081]: I1213 14:36:24.805249 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gpj\" (UniqueName: \"kubernetes.io/projected/529216ce-4852-4f55-9c72-a8133f06a8f4-kube-api-access-j9gpj\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805306 kubelet[2081]: I1213 14:36:24.805275 2081 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-run\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805306 kubelet[2081]: I1213 14:36:24.805299 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cni-path\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805570 kubelet[2081]: I1213 14:36:24.805325 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-lib-modules\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805570 kubelet[2081]: I1213 14:36:24.805357 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-host-proc-sys-kernel\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805570 kubelet[2081]: I1213 14:36:24.805384 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/529216ce-4852-4f55-9c72-a8133f06a8f4-hubble-tls\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805570 kubelet[2081]: I1213 14:36:24.805427 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-xtables-lock\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805570 kubelet[2081]: I1213 14:36:24.805457 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/529216ce-4852-4f55-9c72-a8133f06a8f4-clustermesh-secrets\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805570 kubelet[2081]: I1213 14:36:24.805485 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd79e62d-d514-44d4-b031-8ec069538c4f-xtables-lock\") pod \"kube-proxy-55mcm\" (UID: \"fd79e62d-d514-44d4-b031-8ec069538c4f\") " pod="kube-system/kube-proxy-55mcm" Dec 13 14:36:24.805800 kubelet[2081]: I1213 14:36:24.805525 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhdc\" (UniqueName: \"kubernetes.io/projected/fd79e62d-d514-44d4-b031-8ec069538c4f-kube-api-access-prhdc\") pod \"kube-proxy-55mcm\" (UID: \"fd79e62d-d514-44d4-b031-8ec069538c4f\") " pod="kube-system/kube-proxy-55mcm" Dec 13 14:36:24.805800 kubelet[2081]: I1213 14:36:24.805554 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-hostproc\") 
pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805800 kubelet[2081]: I1213 14:36:24.805592 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-cgroup\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805800 kubelet[2081]: I1213 14:36:24.805622 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-etc-cni-netd\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:24.805800 kubelet[2081]: I1213 14:36:24.805650 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd79e62d-d514-44d4-b031-8ec069538c4f-kube-proxy\") pod \"kube-proxy-55mcm\" (UID: \"fd79e62d-d514-44d4-b031-8ec069538c4f\") " pod="kube-system/kube-proxy-55mcm" Dec 13 14:36:24.805800 kubelet[2081]: I1213 14:36:24.805685 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-bpf-maps\") pod \"cilium-p9v5z\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " pod="kube-system/cilium-p9v5z" Dec 13 14:36:25.078882 env[1522]: time="2024-12-13T14:36:25.077689553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9v5z,Uid:529216ce-4852-4f55-9c72-a8133f06a8f4,Namespace:kube-system,Attempt:0,}" Dec 13 14:36:25.081044 env[1522]: time="2024-12-13T14:36:25.077691553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55mcm,Uid:fd79e62d-d514-44d4-b031-8ec069538c4f,Namespace:kube-system,Attempt:0,}" Dec 13 14:36:25.769723 kubelet[2081]: E1213 14:36:25.769689 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:25.888212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416009160.mount: Deactivated successfully. 
Dec 13 14:36:25.906158 env[1522]: time="2024-12-13T14:36:25.906110088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:25.908795 env[1522]: time="2024-12-13T14:36:25.908757148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:25.918229 env[1522]: time="2024-12-13T14:36:25.918191964Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:25.923044 env[1522]: time="2024-12-13T14:36:25.923014074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:25.925491 env[1522]: time="2024-12-13T14:36:25.925460230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:25.928542 env[1522]: time="2024-12-13T14:36:25.928510100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:25.931226 env[1522]: time="2024-12-13T14:36:25.931195161Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:25.934621 env[1522]: time="2024-12-13T14:36:25.934585539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:25.979985 env[1522]: time="2024-12-13T14:36:25.979815373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:36:25.979985 env[1522]: time="2024-12-13T14:36:25.979854773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:36:25.979985 env[1522]: time="2024-12-13T14:36:25.979869074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:36:25.981204 env[1522]: time="2024-12-13T14:36:25.980040178Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa pid=2129 runtime=io.containerd.runc.v2 Dec 13 14:36:26.010223 env[1522]: time="2024-12-13T14:36:26.009031135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:36:26.010223 env[1522]: time="2024-12-13T14:36:26.009077436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:36:26.010223 env[1522]: time="2024-12-13T14:36:26.009091837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:36:26.010223 env[1522]: time="2024-12-13T14:36:26.009279941Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b63591c99efb0dc6ec0fc5b4fb0533a046605863e7bb8e53260419476021fba9 pid=2150 runtime=io.containerd.runc.v2 Dec 13 14:36:26.042848 env[1522]: time="2024-12-13T14:36:26.041895766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9v5z,Uid:529216ce-4852-4f55-9c72-a8133f06a8f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\"" Dec 13 14:36:26.045800 env[1522]: time="2024-12-13T14:36:26.045753852Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:36:26.064725 env[1522]: time="2024-12-13T14:36:26.064685073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55mcm,Uid:fd79e62d-d514-44d4-b031-8ec069538c4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b63591c99efb0dc6ec0fc5b4fb0533a046605863e7bb8e53260419476021fba9\"" Dec 13 14:36:26.770402 kubelet[2081]: E1213 14:36:26.770340 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:27.770658 kubelet[2081]: E1213 14:36:27.770596 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:28.771227 kubelet[2081]: E1213 14:36:28.771104 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:29.771640 kubelet[2081]: E1213 14:36:29.771599 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:30.772575 kubelet[2081]: E1213 14:36:30.772533 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:31.773340 kubelet[2081]: E1213 14:36:31.773237 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:32.773634 kubelet[2081]: E1213 14:36:32.773594 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:33.774378 kubelet[2081]: E1213 14:36:33.774297 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:34.775015 kubelet[2081]: E1213 14:36:34.774892 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:35.045655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346043277.mount: Deactivated successfully. 
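The RunPodSandbox, "starting signal loop", and PullImage entries are containerd's CRI plugin creating the two pod sandboxes under the runc v2 shim and then fetching the Cilium image. A minimal sketch using the containerd 1.x Go client directly (not the CRI path) against the default socket, in the "k8s.io" namespace the CRI plugin uses; the socket path and client module are the standard ones but still assumptions about this host:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Pod containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}
```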
Dec 13 14:36:35.775567 kubelet[2081]: E1213 14:36:35.775521 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:36.776213 kubelet[2081]: E1213 14:36:36.776152 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:37.685611 env[1522]: time="2024-12-13T14:36:37.685559458Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:37.690935 env[1522]: time="2024-12-13T14:36:37.690869246Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:37.695154 env[1522]: time="2024-12-13T14:36:37.695119916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:37.695707 env[1522]: time="2024-12-13T14:36:37.695676625Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:36:37.697125 env[1522]: time="2024-12-13T14:36:37.697097349Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:36:37.697862 env[1522]: time="2024-12-13T14:36:37.697829161Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:36:37.721456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843093933.mount: Deactivated successfully. Dec 13 14:36:37.776915 env[1522]: time="2024-12-13T14:36:37.776829370Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\"" Dec 13 14:36:37.777315 kubelet[2081]: E1213 14:36:37.777161 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:37.778215 env[1522]: time="2024-12-13T14:36:37.778177492Z" level=info msg="StartContainer for \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\"" Dec 13 14:36:37.835628 env[1522]: time="2024-12-13T14:36:37.835573943Z" level=info msg="StartContainer for \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\" returns successfully" Dec 13 14:36:38.718887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34-rootfs.mount: Deactivated successfully. 
Dec 13 14:36:38.777558 kubelet[2081]: E1213 14:36:38.777498 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:39.778020 kubelet[2081]: E1213 14:36:39.777980 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:40.778878 kubelet[2081]: E1213 14:36:40.778837 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:41.636820 env[1522]: time="2024-12-13T14:36:41.636759399Z" level=info msg="shim disconnected" id=eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34 Dec 13 14:36:41.636820 env[1522]: time="2024-12-13T14:36:41.636816000Z" level=warning msg="cleaning up after shim disconnected" id=eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34 namespace=k8s.io Dec 13 14:36:41.636820 env[1522]: time="2024-12-13T14:36:41.636827600Z" level=info msg="cleaning up dead shim" Dec 13 14:36:41.645210 env[1522]: time="2024-12-13T14:36:41.645176325Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2256 runtime=io.containerd.runc.v2\n" Dec 13 14:36:41.779735 kubelet[2081]: E1213 14:36:41.779664 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:41.903490 env[1522]: time="2024-12-13T14:36:41.903043680Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:36:41.929603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282334477.mount: Deactivated successfully. Dec 13 14:36:41.946773 env[1522]: time="2024-12-13T14:36:41.946723333Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\"" Dec 13 14:36:41.947374 env[1522]: time="2024-12-13T14:36:41.947342842Z" level=info msg="StartContainer for \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\"" Dec 13 14:36:42.057315 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:36:42.057687 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:36:42.060247 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:36:42.063280 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:36:42.066347 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:36:42.077021 systemd[1]: Finished systemd-sysctl.service. 
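apply-sysctl-overwrites is the Cilium init container that adjusts kernel parameters, which is also why systemd-sysctl.service is stopped and re-run in the entries above. Exactly which keys Cilium writes varies by version; the sketch below only reads two illustrative ones from /proc/sys:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// readSysctl maps a dotted sysctl key to its /proc/sys path and returns its value.
func readSysctl(key string) (string, error) {
	p := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	b, err := os.ReadFile(p)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, key := range []string{"net.ipv4.conf.all.rp_filter", "net.core.bpf_jit_enable"} {
		v, err := readSysctl(key)
		if err != nil {
			fmt.Printf("%s: %v\n", key, err)
			continue
		}
		fmt.Printf("%s = %s\n", key, v)
	}
}
```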
Dec 13 14:36:42.079776 env[1522]: time="2024-12-13T14:36:42.079737292Z" level=info msg="StartContainer for \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\" returns successfully" Dec 13 14:36:42.196428 env[1522]: time="2024-12-13T14:36:42.195705082Z" level=info msg="shim disconnected" id=4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929 Dec 13 14:36:42.196428 env[1522]: time="2024-12-13T14:36:42.195760083Z" level=warning msg="cleaning up after shim disconnected" id=4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929 namespace=k8s.io Dec 13 14:36:42.196428 env[1522]: time="2024-12-13T14:36:42.195772683Z" level=info msg="cleaning up dead shim" Dec 13 14:36:42.205117 env[1522]: time="2024-12-13T14:36:42.205067919Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2322 runtime=io.containerd.runc.v2\n" Dec 13 14:36:42.767618 kubelet[2081]: E1213 14:36:42.767537 2081 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:42.780853 kubelet[2081]: E1213 14:36:42.780787 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:42.906312 env[1522]: time="2024-12-13T14:36:42.906266439Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:36:42.925990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929-rootfs.mount: Deactivated successfully. Dec 13 14:36:42.940992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041396600.mount: Deactivated successfully. Dec 13 14:36:42.949598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885662332.mount: Deactivated successfully. 
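mount-bpf-fs, the next init container created in this sandbox, ensures a BPF filesystem is mounted at /sys/fs/bpf on the host. A small standard-library sketch that verifies the result by scanning /proc/mounts:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			found = true
			break
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf:", found)
}
```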
Dec 13 14:36:42.962103 env[1522]: time="2024-12-13T14:36:42.962060352Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\"" Dec 13 14:36:42.962846 env[1522]: time="2024-12-13T14:36:42.962817863Z" level=info msg="StartContainer for \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\"" Dec 13 14:36:43.054686 env[1522]: time="2024-12-13T14:36:43.054108175Z" level=info msg="StartContainer for \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\" returns successfully" Dec 13 14:36:43.781019 kubelet[2081]: E1213 14:36:43.780971 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:43.843043 env[1522]: time="2024-12-13T14:36:43.842977988Z" level=info msg="shim disconnected" id=3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b Dec 13 14:36:43.843043 env[1522]: time="2024-12-13T14:36:43.843040389Z" level=warning msg="cleaning up after shim disconnected" id=3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b namespace=k8s.io Dec 13 14:36:43.843043 env[1522]: time="2024-12-13T14:36:43.843056389Z" level=info msg="cleaning up dead shim" Dec 13 14:36:43.851180 env[1522]: time="2024-12-13T14:36:43.851137604Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2383 runtime=io.containerd.runc.v2\n" Dec 13 14:36:43.854833 env[1522]: time="2024-12-13T14:36:43.854795456Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:43.866485 env[1522]: time="2024-12-13T14:36:43.866448721Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:43.869727 env[1522]: time="2024-12-13T14:36:43.869698068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:43.876642 env[1522]: time="2024-12-13T14:36:43.876607566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:43.877030 env[1522]: time="2024-12-13T14:36:43.876999071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:36:43.878940 env[1522]: time="2024-12-13T14:36:43.878894298Z" level=info msg="CreateContainer within sandbox \"b63591c99efb0dc6ec0fc5b4fb0533a046605863e7bb8e53260419476021fba9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:36:43.910124 env[1522]: time="2024-12-13T14:36:43.910086142Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:36:43.916208 env[1522]: time="2024-12-13T14:36:43.916172528Z" level=info msg="CreateContainer within 
sandbox \"b63591c99efb0dc6ec0fc5b4fb0533a046605863e7bb8e53260419476021fba9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8dcc5c2f78f8337b31b9e8c61a45053e61498d41eaa5a2a9b67c659b2f17d730\"" Dec 13 14:36:43.916686 env[1522]: time="2024-12-13T14:36:43.916655235Z" level=info msg="StartContainer for \"8dcc5c2f78f8337b31b9e8c61a45053e61498d41eaa5a2a9b67c659b2f17d730\"" Dec 13 14:36:43.950138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232307932.mount: Deactivated successfully. Dec 13 14:36:43.981051 env[1522]: time="2024-12-13T14:36:43.981006950Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\"" Dec 13 14:36:43.981628 env[1522]: time="2024-12-13T14:36:43.981595258Z" level=info msg="StartContainer for \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\"" Dec 13 14:36:44.003627 env[1522]: time="2024-12-13T14:36:44.003587470Z" level=info msg="StartContainer for \"8dcc5c2f78f8337b31b9e8c61a45053e61498d41eaa5a2a9b67c659b2f17d730\" returns successfully" Dec 13 14:36:44.044871 env[1522]: time="2024-12-13T14:36:44.044738340Z" level=info msg="StartContainer for \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\" returns successfully" Dec 13 14:36:44.077492 env[1522]: time="2024-12-13T14:36:44.077440594Z" level=info msg="shim disconnected" id=1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286 Dec 13 14:36:44.077492 env[1522]: time="2024-12-13T14:36:44.077491794Z" level=warning msg="cleaning up after shim disconnected" id=1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286 namespace=k8s.io Dec 13 14:36:44.077772 env[1522]: time="2024-12-13T14:36:44.077502994Z" level=info msg="cleaning up dead shim" Dec 13 14:36:44.087608 env[1522]: time="2024-12-13T14:36:44.087563734Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2492 runtime=io.containerd.runc.v2\n" Dec 13 14:36:44.781610 kubelet[2081]: E1213 14:36:44.781565 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:44.916414 env[1522]: time="2024-12-13T14:36:44.916365724Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:36:44.925614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758168443.mount: Deactivated successfully. 
Dec 13 14:36:44.949861 env[1522]: time="2024-12-13T14:36:44.949806488Z" level=info msg="CreateContainer within sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\"" Dec 13 14:36:44.950322 env[1522]: time="2024-12-13T14:36:44.950285894Z" level=info msg="StartContainer for \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\"" Dec 13 14:36:45.006468 env[1522]: time="2024-12-13T14:36:45.006412571Z" level=info msg="StartContainer for \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\" returns successfully" Dec 13 14:36:45.196186 kubelet[2081]: I1213 14:36:45.194227 2081 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:36:45.649933 kernel: Initializing XFRM netlink socket Dec 13 14:36:45.781957 kubelet[2081]: E1213 14:36:45.781882 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:45.925772 systemd[1]: run-containerd-runc-k8s.io-af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049-runc.wvNCgI.mount: Deactivated successfully. Dec 13 14:36:45.951516 kubelet[2081]: I1213 14:36:45.951482 2081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-p9v5z" podStartSLOduration=12.300192751 podStartE2EDuration="23.951429351s" podCreationTimestamp="2024-12-13 14:36:22 +0000 UTC" firstStartedPulling="2024-12-13 14:36:26.044890233 +0000 UTC m=+3.983376734" lastFinishedPulling="2024-12-13 14:36:37.696126833 +0000 UTC m=+15.634613334" observedRunningTime="2024-12-13 14:36:45.95058224 +0000 UTC m=+23.889068741" watchObservedRunningTime="2024-12-13 14:36:45.951429351 +0000 UTC m=+23.889915852" Dec 13 14:36:45.951991 kubelet[2081]: I1213 14:36:45.951960 2081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-55mcm" podStartSLOduration=6.140295877 podStartE2EDuration="23.951934158s" podCreationTimestamp="2024-12-13 14:36:22 +0000 UTC" firstStartedPulling="2024-12-13 14:36:26.065667595 +0000 UTC m=+4.004154196" lastFinishedPulling="2024-12-13 14:36:43.877305876 +0000 UTC m=+21.815792477" observedRunningTime="2024-12-13 14:36:44.93844923 +0000 UTC m=+22.876935831" watchObservedRunningTime="2024-12-13 14:36:45.951934158 +0000 UTC m=+23.890420759" Dec 13 14:36:46.782338 kubelet[2081]: E1213 14:36:46.782277 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:47.292698 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:36:47.290127 systemd-networkd[1691]: cilium_host: Link UP Dec 13 14:36:47.290248 systemd-networkd[1691]: cilium_net: Link UP Dec 13 14:36:47.290252 systemd-networkd[1691]: cilium_net: Gained carrier Dec 13 14:36:47.290380 systemd-networkd[1691]: cilium_host: Gained carrier Dec 13 14:36:47.296087 systemd-networkd[1691]: cilium_host: Gained IPv6LL Dec 13 14:36:47.512978 systemd-networkd[1691]: cilium_vxlan: Link UP Dec 13 14:36:47.512987 systemd-networkd[1691]: cilium_vxlan: Gained carrier Dec 13 14:36:47.782933 kubelet[2081]: E1213 14:36:47.782824 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:47.784977 kernel: NET: Registered PF_ALG protocol family Dec 13 14:36:48.044057 systemd-networkd[1691]: 
cilium_net: Gained IPv6LL Dec 13 14:36:48.524331 systemd-networkd[1691]: lxc_health: Link UP Dec 13 14:36:48.544462 systemd-networkd[1691]: lxc_health: Gained carrier Dec 13 14:36:48.544961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:36:48.784012 kubelet[2081]: E1213 14:36:48.783852 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:48.940065 systemd-networkd[1691]: cilium_vxlan: Gained IPv6LL Dec 13 14:36:49.784889 kubelet[2081]: E1213 14:36:49.784840 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:50.028198 systemd-networkd[1691]: lxc_health: Gained IPv6LL Dec 13 14:36:50.786582 kubelet[2081]: E1213 14:36:50.786531 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:51.787957 kubelet[2081]: E1213 14:36:51.787893 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:51.986114 kubelet[2081]: I1213 14:36:51.986077 2081 topology_manager.go:215] "Topology Admit Handler" podUID="ae6b7e1e-24db-4861-b0ad-b9952c31a8da" podNamespace="default" podName="nginx-deployment-6d5f899847-fzk9m" Dec 13 14:36:52.006308 kubelet[2081]: I1213 14:36:52.006277 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phx9j\" (UniqueName: \"kubernetes.io/projected/ae6b7e1e-24db-4861-b0ad-b9952c31a8da-kube-api-access-phx9j\") pod \"nginx-deployment-6d5f899847-fzk9m\" (UID: \"ae6b7e1e-24db-4861-b0ad-b9952c31a8da\") " pod="default/nginx-deployment-6d5f899847-fzk9m" Dec 13 14:36:52.294768 env[1522]: time="2024-12-13T14:36:52.294217367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-fzk9m,Uid:ae6b7e1e-24db-4861-b0ad-b9952c31a8da,Namespace:default,Attempt:0,}" Dec 13 14:36:52.377352 systemd-networkd[1691]: lxc771a2abe9af6: Link UP Dec 13 14:36:52.386008 kernel: eth0: renamed from tmp548ad Dec 13 14:36:52.397019 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:36:52.403980 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc771a2abe9af6: link becomes ready Dec 13 14:36:52.407557 systemd-networkd[1691]: lxc771a2abe9af6: Gained carrier Dec 13 14:36:52.789277 kubelet[2081]: E1213 14:36:52.789228 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:52.842564 env[1522]: time="2024-12-13T14:36:52.842490533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:36:52.842564 env[1522]: time="2024-12-13T14:36:52.842527133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:36:52.842564 env[1522]: time="2024-12-13T14:36:52.842540733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:36:52.843084 env[1522]: time="2024-12-13T14:36:52.843027039Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/548ad1e71d6c2f2e7e9f2e202fca4eb1898224eef277bf9070fde583356a0974 pid=3135 runtime=io.containerd.runc.v2 Dec 13 14:36:52.904143 kubelet[2081]: E1213 14:36:52.904102 2081 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podae6b7e1e-24db-4861-b0ad-b9952c31a8da/548ad1e71d6c2f2e7e9f2e202fca4eb1898224eef277bf9070fde583356a0974\": RecentStats: unable to find data in memory cache]" Dec 13 14:36:52.913782 env[1522]: time="2024-12-13T14:36:52.912974138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-fzk9m,Uid:ae6b7e1e-24db-4861-b0ad-b9952c31a8da,Namespace:default,Attempt:0,} returns sandbox id \"548ad1e71d6c2f2e7e9f2e202fca4eb1898224eef277bf9070fde583356a0974\"" Dec 13 14:36:52.915223 env[1522]: time="2024-12-13T14:36:52.915185164Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:36:52.946651 kubelet[2081]: I1213 14:36:52.946366 2081 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:36:53.129068 systemd[1]: run-containerd-runc-k8s.io-548ad1e71d6c2f2e7e9f2e202fca4eb1898224eef277bf9070fde583356a0974-runc.fcnEMh.mount: Deactivated successfully. Dec 13 14:36:53.790007 kubelet[2081]: E1213 14:36:53.789958 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:53.868182 systemd-networkd[1691]: lxc771a2abe9af6: Gained IPv6LL Dec 13 14:36:54.790144 kubelet[2081]: E1213 14:36:54.790080 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:55.723145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799772048.mount: Deactivated successfully. 
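The systemd-networkd entries track Cilium's datapath links coming up: cilium_host/cilium_net, the cilium_vxlan overlay device, the lxc_health probe interface, and a per-pod lxc* veth such as lxc771a2abe9af6 for the nginx pod sandbox above. A standard-library-only sketch that lists those links on the host:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	for _, ifc := range ifaces {
		// Only the Cilium-managed links seen in the log.
		if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
			up := ifc.Flags&net.FlagUp != 0
			fmt.Printf("%-20s up=%v mtu=%d\n", ifc.Name, up, ifc.MTU)
		}
	}
}
```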
Dec 13 14:36:55.791268 kubelet[2081]: E1213 14:36:55.791222 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:56.791572 kubelet[2081]: E1213 14:36:56.791530 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:57.181439 env[1522]: time="2024-12-13T14:36:57.181309341Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:57.187225 env[1522]: time="2024-12-13T14:36:57.187188901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:57.191616 env[1522]: time="2024-12-13T14:36:57.191581446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:57.195665 env[1522]: time="2024-12-13T14:36:57.195637287Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:36:57.196249 env[1522]: time="2024-12-13T14:36:57.196217693Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:36:57.203593 env[1522]: time="2024-12-13T14:36:57.203556068Z" level=info msg="CreateContainer within sandbox \"548ad1e71d6c2f2e7e9f2e202fca4eb1898224eef277bf9070fde583356a0974\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:36:57.231049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096007555.mount: Deactivated successfully. 
Dec 13 14:36:57.246644 env[1522]: time="2024-12-13T14:36:57.246592206Z" level=info msg="CreateContainer within sandbox \"548ad1e71d6c2f2e7e9f2e202fca4eb1898224eef277bf9070fde583356a0974\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e1a3fe62459dab2c3703091661ede842b3877fec673e18fb9f74f3d0502fe8e5\"" Dec 13 14:36:57.247190 env[1522]: time="2024-12-13T14:36:57.247145312Z" level=info msg="StartContainer for \"e1a3fe62459dab2c3703091661ede842b3877fec673e18fb9f74f3d0502fe8e5\"" Dec 13 14:36:57.296735 env[1522]: time="2024-12-13T14:36:57.296698717Z" level=info msg="StartContainer for \"e1a3fe62459dab2c3703091661ede842b3877fec673e18fb9f74f3d0502fe8e5\" returns successfully" Dec 13 14:36:57.792805 kubelet[2081]: E1213 14:36:57.792744 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:57.961677 kubelet[2081]: I1213 14:36:57.961638 2081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-fzk9m" podStartSLOduration=2.679837755 podStartE2EDuration="6.961605293s" podCreationTimestamp="2024-12-13 14:36:51 +0000 UTC" firstStartedPulling="2024-12-13 14:36:52.914729758 +0000 UTC m=+30.853216259" lastFinishedPulling="2024-12-13 14:36:57.196497296 +0000 UTC m=+35.134983797" observedRunningTime="2024-12-13 14:36:57.96128719 +0000 UTC m=+35.899773691" watchObservedRunningTime="2024-12-13 14:36:57.961605293 +0000 UTC m=+35.900091894" Dec 13 14:36:58.793058 kubelet[2081]: E1213 14:36:58.792994 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:36:59.794006 kubelet[2081]: E1213 14:36:59.793949 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:00.794686 kubelet[2081]: E1213 14:37:00.794599 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:01.795889 kubelet[2081]: E1213 14:37:01.795825 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:02.767971 kubelet[2081]: E1213 14:37:02.767922 2081 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:02.796381 kubelet[2081]: E1213 14:37:02.796330 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:03.796576 kubelet[2081]: E1213 14:37:03.796518 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:04.104625 kubelet[2081]: I1213 14:37:04.104317 2081 topology_manager.go:215] "Topology Admit Handler" podUID="53defa6c-cf52-45da-9540-512db1188b25" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:37:04.184011 kubelet[2081]: I1213 14:37:04.183958 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/53defa6c-cf52-45da-9540-512db1188b25-data\") pod \"nfs-server-provisioner-0\" (UID: \"53defa6c-cf52-45da-9540-512db1188b25\") " pod="default/nfs-server-provisioner-0" Dec 13 14:37:04.184204 kubelet[2081]: I1213 14:37:04.184023 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glq65\" (UniqueName: 
\"kubernetes.io/projected/53defa6c-cf52-45da-9540-512db1188b25-kube-api-access-glq65\") pod \"nfs-server-provisioner-0\" (UID: \"53defa6c-cf52-45da-9540-512db1188b25\") " pod="default/nfs-server-provisioner-0" Dec 13 14:37:04.408011 env[1522]: time="2024-12-13T14:37:04.407863141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:53defa6c-cf52-45da-9540-512db1188b25,Namespace:default,Attempt:0,}" Dec 13 14:37:04.464290 systemd-networkd[1691]: lxc9c27c5b5169d: Link UP Dec 13 14:37:04.476860 kernel: eth0: renamed from tmp93975 Dec 13 14:37:04.488325 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:37:04.488407 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9c27c5b5169d: link becomes ready Dec 13 14:37:04.488614 systemd-networkd[1691]: lxc9c27c5b5169d: Gained carrier Dec 13 14:37:04.633357 env[1522]: time="2024-12-13T14:37:04.633264915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:37:04.633357 env[1522]: time="2024-12-13T14:37:04.633323015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:37:04.633568 env[1522]: time="2024-12-13T14:37:04.633337215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:37:04.634343 env[1522]: time="2024-12-13T14:37:04.633871320Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9397516a849a76a06c7501c3266a74954692fbc0130c7c4891ce4ed4cec55454 pid=3258 runtime=io.containerd.runc.v2 Dec 13 14:37:04.692643 env[1522]: time="2024-12-13T14:37:04.692605435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:53defa6c-cf52-45da-9540-512db1188b25,Namespace:default,Attempt:0,} returns sandbox id \"9397516a849a76a06c7501c3266a74954692fbc0130c7c4891ce4ed4cec55454\"" Dec 13 14:37:04.694462 env[1522]: time="2024-12-13T14:37:04.694432851Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:37:04.797568 kubelet[2081]: E1213 14:37:04.797518 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:05.297468 systemd[1]: run-containerd-runc-k8s.io-9397516a849a76a06c7501c3266a74954692fbc0130c7c4891ce4ed4cec55454-runc.4HHu4x.mount: Deactivated successfully. Dec 13 14:37:05.798169 kubelet[2081]: E1213 14:37:05.798112 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:06.284180 systemd-networkd[1691]: lxc9c27c5b5169d: Gained IPv6LL Dec 13 14:37:06.798772 kubelet[2081]: E1213 14:37:06.798715 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:07.799132 kubelet[2081]: E1213 14:37:07.799067 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:08.799455 kubelet[2081]: E1213 14:37:08.799410 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:09.793677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687871262.mount: Deactivated successfully. 
Dec 13 14:37:09.800053 kubelet[2081]: E1213 14:37:09.799990 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:10.800121 kubelet[2081]: E1213 14:37:10.800072 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:11.800634 kubelet[2081]: E1213 14:37:11.800589 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:12.244995 env[1522]: time="2024-12-13T14:37:12.244944134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:12.253085 env[1522]: time="2024-12-13T14:37:12.253037294Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:12.257147 env[1522]: time="2024-12-13T14:37:12.257105325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:12.282050 env[1522]: time="2024-12-13T14:37:12.281995411Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:12.282868 env[1522]: time="2024-12-13T14:37:12.282826417Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:37:12.285296 env[1522]: time="2024-12-13T14:37:12.285264035Z" level=info msg="CreateContainer within sandbox \"9397516a849a76a06c7501c3266a74954692fbc0130c7c4891ce4ed4cec55454\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:37:12.498580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175451860.mount: Deactivated successfully. Dec 13 14:37:12.507909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3897987643.mount: Deactivated successfully. 
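The pull of registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8 starts at 14:37:04.694Z and returns its image reference at 14:37:12.282Z, which accounts for most of the ~8.99s podStartE2EDuration kubelet reports for this pod a little later. The same arithmetic on the two containerd timestamps, as a small sketch:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Timestamps copied from the PullImage entries above.
	started, err := time.Parse(time.RFC3339Nano, "2024-12-13T14:37:04.694432851Z")
	if err != nil {
		log.Fatal(err)
	}
	finished, err := time.Parse(time.RFC3339Nano, "2024-12-13T14:37:12.282826417Z")
	if err != nil {
		log.Fatal(err)
	}
	// Roughly 7.59s of the reported ~8.99s podStartE2EDuration is the pull itself.
	fmt.Println("image pull took:", finished.Sub(started))
}
```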
Dec 13 14:37:12.631685 env[1522]: time="2024-12-13T14:37:12.631633321Z" level=info msg="CreateContainer within sandbox \"9397516a849a76a06c7501c3266a74954692fbc0130c7c4891ce4ed4cec55454\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"62ca83ea2d4ed00dd454b05299861cf07634c64a32569ae0d7fc44e6d74128d7\"" Dec 13 14:37:12.632580 env[1522]: time="2024-12-13T14:37:12.632534828Z" level=info msg="StartContainer for \"62ca83ea2d4ed00dd454b05299861cf07634c64a32569ae0d7fc44e6d74128d7\"" Dec 13 14:37:12.686932 env[1522]: time="2024-12-13T14:37:12.685061820Z" level=info msg="StartContainer for \"62ca83ea2d4ed00dd454b05299861cf07634c64a32569ae0d7fc44e6d74128d7\" returns successfully" Dec 13 14:37:12.800891 kubelet[2081]: E1213 14:37:12.800749 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:12.994554 kubelet[2081]: I1213 14:37:12.994517 2081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.405218257 podStartE2EDuration="8.994484831s" podCreationTimestamp="2024-12-13 14:37:04 +0000 UTC" firstStartedPulling="2024-12-13 14:37:04.693950046 +0000 UTC m=+42.632436547" lastFinishedPulling="2024-12-13 14:37:12.28321662 +0000 UTC m=+50.221703121" observedRunningTime="2024-12-13 14:37:12.994031228 +0000 UTC m=+50.932517829" watchObservedRunningTime="2024-12-13 14:37:12.994484831 +0000 UTC m=+50.932971332" Dec 13 14:37:13.801868 kubelet[2081]: E1213 14:37:13.801804 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:14.802046 kubelet[2081]: E1213 14:37:14.801991 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:15.802947 kubelet[2081]: E1213 14:37:15.802880 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:16.803949 kubelet[2081]: E1213 14:37:16.803882 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:17.804229 kubelet[2081]: E1213 14:37:17.804182 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:18.805013 kubelet[2081]: E1213 14:37:18.804951 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:19.805717 kubelet[2081]: E1213 14:37:19.805660 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:20.806424 kubelet[2081]: E1213 14:37:20.806360 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:21.807509 kubelet[2081]: E1213 14:37:21.807443 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:22.687672 kubelet[2081]: I1213 14:37:22.687626 2081 topology_manager.go:215] "Topology Admit Handler" podUID="06f59ec1-a349-4857-b5a5-9a036b1de1f0" podNamespace="default" podName="test-pod-1" Dec 13 14:37:22.767202 kubelet[2081]: E1213 14:37:22.767159 2081 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:22.798541 kubelet[2081]: 
I1213 14:37:22.798511 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ca28a865-a0bc-42dd-b52a-ae814794f80a\" (UniqueName: \"kubernetes.io/nfs/06f59ec1-a349-4857-b5a5-9a036b1de1f0-pvc-ca28a865-a0bc-42dd-b52a-ae814794f80a\") pod \"test-pod-1\" (UID: \"06f59ec1-a349-4857-b5a5-9a036b1de1f0\") " pod="default/test-pod-1" Dec 13 14:37:22.798944 kubelet[2081]: I1213 14:37:22.798921 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlmsj\" (UniqueName: \"kubernetes.io/projected/06f59ec1-a349-4857-b5a5-9a036b1de1f0-kube-api-access-xlmsj\") pod \"test-pod-1\" (UID: \"06f59ec1-a349-4857-b5a5-9a036b1de1f0\") " pod="default/test-pod-1" Dec 13 14:37:22.807635 kubelet[2081]: E1213 14:37:22.807607 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:23.021933 kernel: FS-Cache: Loaded Dec 13 14:37:23.119700 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:37:23.119832 kernel: RPC: Registered udp transport module. Dec 13 14:37:23.119858 kernel: RPC: Registered tcp transport module. Dec 13 14:37:23.125924 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 14:37:23.263927 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:37:23.450989 kernel: NFS: Registering the id_resolver key type Dec 13 14:37:23.451130 kernel: Key type id_resolver registered Dec 13 14:37:23.451161 kernel: Key type id_legacy registered Dec 13 14:37:23.711297 nfsidmap[3378]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-19c473d9c1' Dec 13 14:37:23.726010 nfsidmap[3379]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-19c473d9c1' Dec 13 14:37:23.808260 kubelet[2081]: E1213 14:37:23.808181 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:23.891602 env[1522]: time="2024-12-13T14:37:23.891555142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:06f59ec1-a349-4857-b5a5-9a036b1de1f0,Namespace:default,Attempt:0,}" Dec 13 14:37:23.947332 systemd-networkd[1691]: lxcf382b46a6f47: Link UP Dec 13 14:37:23.958611 kernel: eth0: renamed from tmp21dfa Dec 13 14:37:23.970470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:37:23.970560 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf382b46a6f47: link becomes ready Dec 13 14:37:23.970642 systemd-networkd[1691]: lxcf382b46a6f47: Gained carrier Dec 13 14:37:24.139482 env[1522]: time="2024-12-13T14:37:24.139402854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:37:24.139686 env[1522]: time="2024-12-13T14:37:24.139444854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:37:24.139686 env[1522]: time="2024-12-13T14:37:24.139472754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:37:24.139686 env[1522]: time="2024-12-13T14:37:24.139625855Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21dfa66a719a15d4624fc2c4b87d0cd77570ba92bf21e2083fed917a5968c7cf pid=3405 runtime=io.containerd.runc.v2 Dec 13 14:37:24.163057 systemd[1]: run-containerd-runc-k8s.io-21dfa66a719a15d4624fc2c4b87d0cd77570ba92bf21e2083fed917a5968c7cf-runc.bbgVCE.mount: Deactivated successfully. Dec 13 14:37:24.204577 env[1522]: time="2024-12-13T14:37:24.204537248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:06f59ec1-a349-4857-b5a5-9a036b1de1f0,Namespace:default,Attempt:0,} returns sandbox id \"21dfa66a719a15d4624fc2c4b87d0cd77570ba92bf21e2083fed917a5968c7cf\"" Dec 13 14:37:24.206112 env[1522]: time="2024-12-13T14:37:24.206081157Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:37:24.607821 env[1522]: time="2024-12-13T14:37:24.607770989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:24.614398 env[1522]: time="2024-12-13T14:37:24.614360129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:24.617453 env[1522]: time="2024-12-13T14:37:24.617420147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:24.621984 env[1522]: time="2024-12-13T14:37:24.621952775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:24.622616 env[1522]: time="2024-12-13T14:37:24.622584778Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:37:24.624669 env[1522]: time="2024-12-13T14:37:24.624640291Z" level=info msg="CreateContainer within sandbox \"21dfa66a719a15d4624fc2c4b87d0cd77570ba92bf21e2083fed917a5968c7cf\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:37:24.654520 env[1522]: time="2024-12-13T14:37:24.654488071Z" level=info msg="CreateContainer within sandbox \"21dfa66a719a15d4624fc2c4b87d0cd77570ba92bf21e2083fed917a5968c7cf\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7f536df1615eefeabeb648d4ebf4e6e49009b960121c222d35969500ff841d2c\"" Dec 13 14:37:24.655133 env[1522]: time="2024-12-13T14:37:24.655082275Z" level=info msg="StartContainer for \"7f536df1615eefeabeb648d4ebf4e6e49009b960121c222d35969500ff841d2c\"" Dec 13 14:37:24.704803 env[1522]: time="2024-12-13T14:37:24.704766376Z" level=info msg="StartContainer for \"7f536df1615eefeabeb648d4ebf4e6e49009b960121c222d35969500ff841d2c\" returns successfully" Dec 13 14:37:24.808728 kubelet[2081]: E1213 14:37:24.808661 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:25.013709 kubelet[2081]: I1213 14:37:25.013674 2081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.596393319 
podStartE2EDuration="20.013624744s" podCreationTimestamp="2024-12-13 14:37:05 +0000 UTC" firstStartedPulling="2024-12-13 14:37:24.205632555 +0000 UTC m=+62.144119056" lastFinishedPulling="2024-12-13 14:37:24.62286378 +0000 UTC m=+62.561350481" observedRunningTime="2024-12-13 14:37:25.013614344 +0000 UTC m=+62.952100845" watchObservedRunningTime="2024-12-13 14:37:25.013624744 +0000 UTC m=+62.952111245" Dec 13 14:37:25.292289 systemd-networkd[1691]: lxcf382b46a6f47: Gained IPv6LL Dec 13 14:37:25.809332 kubelet[2081]: E1213 14:37:25.809267 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:26.810374 kubelet[2081]: E1213 14:37:26.810285 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:27.810651 kubelet[2081]: E1213 14:37:27.810581 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:28.773036 systemd[1]: run-containerd-runc-k8s.io-af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049-runc.SytdiL.mount: Deactivated successfully. Dec 13 14:37:28.789808 env[1522]: time="2024-12-13T14:37:28.789627254Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:37:28.794643 env[1522]: time="2024-12-13T14:37:28.794608883Z" level=info msg="StopContainer for \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\" with timeout 2 (s)" Dec 13 14:37:28.794965 env[1522]: time="2024-12-13T14:37:28.794925584Z" level=info msg="Stop container \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\" with signal terminated" Dec 13 14:37:28.802554 systemd-networkd[1691]: lxc_health: Link DOWN Dec 13 14:37:28.802562 systemd-networkd[1691]: lxc_health: Lost carrier Dec 13 14:37:28.812093 kubelet[2081]: E1213 14:37:28.811953 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:28.841651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049-rootfs.mount: Deactivated successfully. 
Dec 13 14:37:29.812348 kubelet[2081]: E1213 14:37:29.812288 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:33.228874 env[1522]: time="2024-12-13T14:37:30.804864703Z" level=info msg="Kill container \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\"" Dec 13 14:37:33.228874 env[1522]: time="2024-12-13T14:37:32.953140244Z" level=error msg="collecting metrics for af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049" error="cgroups: cgroup deleted: unknown" Dec 13 14:37:33.229468 kubelet[2081]: E1213 14:37:30.812531 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:33.229468 kubelet[2081]: E1213 14:37:31.812952 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:33.229468 kubelet[2081]: I1213 14:37:32.613066 2081 topology_manager.go:215] "Topology Admit Handler" podUID="04884876-9c7b-4343-b018-ccf2115d2a16" podNamespace="kube-system" podName="cilium-operator-5cc964979-qwf5q" Dec 13 14:37:33.229468 kubelet[2081]: I1213 14:37:32.658852 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgd2\" (UniqueName: \"kubernetes.io/projected/04884876-9c7b-4343-b018-ccf2115d2a16-kube-api-access-mlgd2\") pod \"cilium-operator-5cc964979-qwf5q\" (UID: \"04884876-9c7b-4343-b018-ccf2115d2a16\") " pod="kube-system/cilium-operator-5cc964979-qwf5q" Dec 13 14:37:33.229468 kubelet[2081]: I1213 14:37:32.658969 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04884876-9c7b-4343-b018-ccf2115d2a16-cilium-config-path\") pod \"cilium-operator-5cc964979-qwf5q\" (UID: \"04884876-9c7b-4343-b018-ccf2115d2a16\") " pod="kube-system/cilium-operator-5cc964979-qwf5q" Dec 13 14:37:33.229468 kubelet[2081]: E1213 14:37:32.813648 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:33.229468 kubelet[2081]: E1213 14:37:32.846520 2081 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:37:33.531437 env[1522]: time="2024-12-13T14:37:33.530928107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qwf5q,Uid:04884876-9c7b-4343-b018-ccf2115d2a16,Namespace:kube-system,Attempt:0,}" Dec 13 14:37:33.814825 kubelet[2081]: E1213 14:37:33.814679 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:34.218188 kubelet[2081]: I1213 14:37:34.217971 2081 setters.go:568] "Node became not ready" node="10.200.8.11" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:37:34Z","lastTransitionTime":"2024-12-13T14:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:37:34.814835 kubelet[2081]: E1213 14:37:34.814781 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:36.850166 kubelet[2081]: E1213 14:37:35.815339 2081 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:36.850166 kubelet[2081]: E1213 14:37:36.815859 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:37.371847 env[1522]: time="2024-12-13T14:37:37.371768224Z" level=error msg="get state for af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049" error="context deadline exceeded: unknown" Dec 13 14:37:37.371847 env[1522]: time="2024-12-13T14:37:37.371821724Z" level=warning msg="unknown status" status=0 Dec 13 14:37:37.816756 kubelet[2081]: E1213 14:37:37.816690 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:37.847521 kubelet[2081]: E1213 14:37:37.847478 2081 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:37:38.817284 kubelet[2081]: E1213 14:37:38.817219 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:38.942301 env[1522]: time="2024-12-13T14:37:38.822578757Z" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049,ID:af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049,Pid:2635,ExitStatus:0,ExitedAt:2024-12-13 14:37:28.821807137 +0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Dec 13 14:37:39.818103 kubelet[2081]: E1213 14:37:39.818045 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:40.066342 env[1522]: time="2024-12-13T14:37:40.066268156Z" level=info msg="TaskExit event &TaskExit{ContainerID:af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049,ID:af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049,Pid:2635,ExitStatus:0,ExitedAt:2024-12-13 14:37:28.821807137 +0000 UTC,XXX_unrecognized:[],}" Dec 13 14:37:40.818532 kubelet[2081]: E1213 14:37:40.818473 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:41.104333 env[1522]: time="2024-12-13T14:37:41.104017071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:37:41.104333 env[1522]: time="2024-12-13T14:37:41.104053771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:37:41.104333 env[1522]: time="2024-12-13T14:37:41.104070171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:37:41.104894 env[1522]: time="2024-12-13T14:37:41.104843975Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ceb9fca67972de8153fb6f0720120d80345fd902b983b9fd75bb63192a578cfd pid=3549 runtime=io.containerd.runc.v2 Dec 13 14:37:41.165153 env[1522]: time="2024-12-13T14:37:41.164443460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qwf5q,Uid:04884876-9c7b-4343-b018-ccf2115d2a16,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceb9fca67972de8153fb6f0720120d80345fd902b983b9fd75bb63192a578cfd\"" Dec 13 14:37:41.166135 env[1522]: time="2024-12-13T14:37:41.166099668Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:37:41.749131 env[1522]: E1213 14:37:41.749104 1522 exec.go:87] error executing command in container: failed to exec in container: failed to create exec "c5753ea59c6131032a2eb408a3e482713b9254c52c1009a5356b6ad225e1946b": cannot exec in a deleted state: unknown Dec 13 14:37:41.756108 env[1522]: time="2024-12-13T14:37:41.756063588Z" level=info msg="StopContainer for \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\" returns successfully" Dec 13 14:37:41.756737 env[1522]: time="2024-12-13T14:37:41.756698591Z" level=info msg="StopPodSandbox for \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\"" Dec 13 14:37:41.756843 env[1522]: time="2024-12-13T14:37:41.756762091Z" level=info msg="Container to stop \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:37:41.756843 env[1522]: time="2024-12-13T14:37:41.756782092Z" level=info msg="Container to stop \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:37:41.756843 env[1522]: time="2024-12-13T14:37:41.756796892Z" level=info msg="Container to stop \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:37:41.756843 env[1522]: time="2024-12-13T14:37:41.756812192Z" level=info msg="Container to stop \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:37:41.756843 env[1522]: time="2024-12-13T14:37:41.756826992Z" level=info msg="Container to stop \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:37:41.759777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa-shm.mount: Deactivated successfully. Dec 13 14:37:41.786059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa-rootfs.mount: Deactivated successfully. 
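The StopPodSandbox sequence just above emits one containerd entry per container that the sandbox teardown has to account for (five here, all already in CONTAINER_EXITED). A rough sketch that groups those "Container to stop" IDs under the sandbox being stopped, assuming unwrapped journalctl lines; the 64-character hex pattern matches the sandbox and container IDs seen in this log, and the function and variable names are illustrative:

import re
import sys
from collections import defaultdict

HEX_ID = re.compile(r'[0-9a-f]{64}')  # containerd sandbox / container IDs

def sandbox_teardowns(journal_lines):
    """Map each StopPodSandbox target to the container IDs containerd lists for it."""
    members = defaultdict(list)
    current = None
    for line in journal_lines:
        if 'msg="StopPodSandbox for' in line:
            m = HEX_ID.search(line)
            current = m.group(0) if m else current
        elif 'msg="Container to stop' in line and current:
            m = HEX_ID.search(line)
            if m:
                members[current].append(m.group(0))
    return members

if __name__ == "__main__":
    for sandbox, containers in sandbox_teardowns(sys.stdin).items():
        print(sandbox[:12], "->", ", ".join(c[:12] for c in containers))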
Dec 13 14:37:41.794304 env[1522]: time="2024-12-13T14:37:41.794259671Z" level=info msg="shim disconnected" id=42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa Dec 13 14:37:41.794460 env[1522]: time="2024-12-13T14:37:41.794308171Z" level=warning msg="cleaning up after shim disconnected" id=42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa namespace=k8s.io Dec 13 14:37:41.794460 env[1522]: time="2024-12-13T14:37:41.794320071Z" level=info msg="cleaning up dead shim" Dec 13 14:37:41.795026 env[1522]: time="2024-12-13T14:37:41.794892874Z" level=info msg="shim disconnected" id=af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049 Dec 13 14:37:41.795026 env[1522]: time="2024-12-13T14:37:41.794960074Z" level=warning msg="cleaning up after shim disconnected" id=af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049 namespace=k8s.io Dec 13 14:37:41.795026 env[1522]: time="2024-12-13T14:37:41.794973174Z" level=info msg="cleaning up dead shim" Dec 13 14:37:41.806205 env[1522]: time="2024-12-13T14:37:41.806169428Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:37:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3604 runtime=io.containerd.runc.v2\n" Dec 13 14:37:41.807307 env[1522]: time="2024-12-13T14:37:41.807277833Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:37:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3603 runtime=io.containerd.runc.v2\n" Dec 13 14:37:41.807578 env[1522]: time="2024-12-13T14:37:41.807545834Z" level=info msg="TearDown network for sandbox \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" successfully" Dec 13 14:37:41.807653 env[1522]: time="2024-12-13T14:37:41.807576534Z" level=info msg="StopPodSandbox for \"42e7203233a0338f30d27413d640e63a42cbee1f7e87fa7c78c6a0e8c04d39fa\" returns successfully" Dec 13 14:37:41.819090 kubelet[2081]: E1213 14:37:41.819068 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:41.853187 kubelet[2081]: I1213 14:37:41.853147 2081 topology_manager.go:215] "Topology Admit Handler" podUID="c96ad03d-9f3b-4d39-a90c-2b4853282c3a" podNamespace="kube-system" podName="cilium-tm65d" Dec 13 14:37:41.853326 kubelet[2081]: E1213 14:37:41.853209 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="529216ce-4852-4f55-9c72-a8133f06a8f4" containerName="mount-cgroup" Dec 13 14:37:41.853326 kubelet[2081]: E1213 14:37:41.853224 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="529216ce-4852-4f55-9c72-a8133f06a8f4" containerName="apply-sysctl-overwrites" Dec 13 14:37:41.853326 kubelet[2081]: E1213 14:37:41.853235 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="529216ce-4852-4f55-9c72-a8133f06a8f4" containerName="mount-bpf-fs" Dec 13 14:37:41.853326 kubelet[2081]: E1213 14:37:41.853247 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="529216ce-4852-4f55-9c72-a8133f06a8f4" containerName="clean-cilium-state" Dec 13 14:37:41.853326 kubelet[2081]: E1213 14:37:41.853257 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="529216ce-4852-4f55-9c72-a8133f06a8f4" containerName="cilium-agent" Dec 13 14:37:41.853326 kubelet[2081]: I1213 14:37:41.853289 2081 memory_manager.go:354] "RemoveStaleState removing state" podUID="529216ce-4852-4f55-9c72-a8133f06a8f4" containerName="cilium-agent" Dec 13 14:37:41.917709 kubelet[2081]: I1213 14:37:41.917680 2081 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cni-path\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.917894 kubelet[2081]: I1213 14:37:41.917746 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-etc-cni-netd\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.917894 kubelet[2081]: I1213 14:37:41.917780 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9gpj\" (UniqueName: \"kubernetes.io/projected/529216ce-4852-4f55-9c72-a8133f06a8f4-kube-api-access-j9gpj\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.917894 kubelet[2081]: I1213 14:37:41.917805 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-lib-modules\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.917894 kubelet[2081]: I1213 14:37:41.917828 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-xtables-lock\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.917894 kubelet[2081]: I1213 14:37:41.917856 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-config-path\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.917894 kubelet[2081]: I1213 14:37:41.917884 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/529216ce-4852-4f55-9c72-a8133f06a8f4-hubble-tls\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.918175 kubelet[2081]: I1213 14:37:41.917920 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-hostproc\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.918175 kubelet[2081]: I1213 14:37:41.917946 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-host-proc-sys-net\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.918175 kubelet[2081]: I1213 14:37:41.917972 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-run\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.918175 kubelet[2081]: I1213 14:37:41.918000 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/529216ce-4852-4f55-9c72-a8133f06a8f4-clustermesh-secrets\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.918175 kubelet[2081]: I1213 14:37:41.918023 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-cgroup\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.918175 kubelet[2081]: I1213 14:37:41.918047 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-bpf-maps\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.918424 kubelet[2081]: I1213 14:37:41.918072 2081 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-host-proc-sys-kernel\") pod \"529216ce-4852-4f55-9c72-a8133f06a8f4\" (UID: \"529216ce-4852-4f55-9c72-a8133f06a8f4\") " Dec 13 14:37:41.918424 kubelet[2081]: I1213 14:37:41.918147 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-hostproc\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918424 kubelet[2081]: I1213 14:37:41.918178 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-cni-path\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918424 kubelet[2081]: I1213 14:37:41.918207 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-lib-modules\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918424 kubelet[2081]: I1213 14:37:41.918237 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-xtables-lock\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918424 kubelet[2081]: I1213 14:37:41.918269 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-cilium-config-path\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918656 kubelet[2081]: I1213 14:37:41.918300 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-cilium-ipsec-secrets\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918656 kubelet[2081]: I1213 
14:37:41.918329 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-bpf-maps\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918656 kubelet[2081]: I1213 14:37:41.918359 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-cilium-cgroup\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918656 kubelet[2081]: I1213 14:37:41.918387 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-hubble-tls\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918656 kubelet[2081]: I1213 14:37:41.918419 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-cilium-run\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918656 kubelet[2081]: I1213 14:37:41.918447 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-etc-cni-netd\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918887 kubelet[2081]: I1213 14:37:41.918477 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-host-proc-sys-kernel\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918887 kubelet[2081]: I1213 14:37:41.918509 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnjrp\" (UniqueName: \"kubernetes.io/projected/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-kube-api-access-hnjrp\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918887 kubelet[2081]: I1213 14:37:41.918542 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-clustermesh-secrets\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918887 kubelet[2081]: I1213 14:37:41.918571 2081 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c96ad03d-9f3b-4d39-a90c-2b4853282c3a-host-proc-sys-net\") pod \"cilium-tm65d\" (UID: \"c96ad03d-9f3b-4d39-a90c-2b4853282c3a\") " pod="kube-system/cilium-tm65d" Dec 13 14:37:41.918887 kubelet[2081]: I1213 14:37:41.917697 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cni-path" 
(OuterVolumeSpecName: "cni-path") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.919126 kubelet[2081]: I1213 14:37:41.918657 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.919126 kubelet[2081]: I1213 14:37:41.919038 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.919126 kubelet[2081]: I1213 14:37:41.919069 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.920923 kubelet[2081]: I1213 14:37:41.920884 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:37:41.921498 kubelet[2081]: I1213 14:37:41.921476 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-hostproc" (OuterVolumeSpecName: "hostproc") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.921638 kubelet[2081]: I1213 14:37:41.921622 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.921735 kubelet[2081]: I1213 14:37:41.921643 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.921827 kubelet[2081]: I1213 14:37:41.921660 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.921951 kubelet[2081]: I1213 14:37:41.921933 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.922077 kubelet[2081]: I1213 14:37:41.922060 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:37:41.925224 kubelet[2081]: I1213 14:37:41.925183 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529216ce-4852-4f55-9c72-a8133f06a8f4-kube-api-access-j9gpj" (OuterVolumeSpecName: "kube-api-access-j9gpj") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "kube-api-access-j9gpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:37:41.925514 kubelet[2081]: I1213 14:37:41.925484 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529216ce-4852-4f55-9c72-a8133f06a8f4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:37:41.927373 kubelet[2081]: I1213 14:37:41.927346 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529216ce-4852-4f55-9c72-a8133f06a8f4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "529216ce-4852-4f55-9c72-a8133f06a8f4" (UID: "529216ce-4852-4f55-9c72-a8133f06a8f4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:37:42.022627 kubelet[2081]: I1213 14:37:42.019255 2081 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-host-proc-sys-net\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.022910 kubelet[2081]: I1213 14:37:42.022862 2081 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-run\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.023060 kubelet[2081]: I1213 14:37:42.023045 2081 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/529216ce-4852-4f55-9c72-a8133f06a8f4-clustermesh-secrets\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.023202 kubelet[2081]: I1213 14:37:42.023189 2081 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-cgroup\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.023333 kubelet[2081]: I1213 14:37:42.023320 2081 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-bpf-maps\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.023473 kubelet[2081]: I1213 14:37:42.023459 2081 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-host-proc-sys-kernel\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.023600 kubelet[2081]: I1213 14:37:42.023587 2081 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-cni-path\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.023724 kubelet[2081]: I1213 14:37:42.023711 2081 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-etc-cni-netd\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.023844 kubelet[2081]: I1213 14:37:42.023835 2081 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j9gpj\" (UniqueName: \"kubernetes.io/projected/529216ce-4852-4f55-9c72-a8133f06a8f4-kube-api-access-j9gpj\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.023972 kubelet[2081]: I1213 14:37:42.023960 2081 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-lib-modules\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.024118 kubelet[2081]: I1213 14:37:42.024106 2081 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-xtables-lock\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.024218 kubelet[2081]: I1213 14:37:42.024208 2081 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/529216ce-4852-4f55-9c72-a8133f06a8f4-cilium-config-path\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.024315 kubelet[2081]: I1213 14:37:42.024305 2081 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/529216ce-4852-4f55-9c72-a8133f06a8f4-hubble-tls\") on 
node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.024409 kubelet[2081]: I1213 14:37:42.024400 2081 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/529216ce-4852-4f55-9c72-a8133f06a8f4-hostproc\") on node \"10.200.8.11\" DevicePath \"\"" Dec 13 14:37:42.038756 kubelet[2081]: I1213 14:37:42.038729 2081 scope.go:117] "RemoveContainer" containerID="af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049" Dec 13 14:37:42.042220 env[1522]: time="2024-12-13T14:37:42.042179254Z" level=info msg="RemoveContainer for \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\"" Dec 13 14:37:42.053654 env[1522]: time="2024-12-13T14:37:42.053615508Z" level=info msg="RemoveContainer for \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\" returns successfully" Dec 13 14:37:42.053872 kubelet[2081]: I1213 14:37:42.053842 2081 scope.go:117] "RemoveContainer" containerID="1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286" Dec 13 14:37:42.055602 env[1522]: time="2024-12-13T14:37:42.055571817Z" level=info msg="RemoveContainer for \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\"" Dec 13 14:37:42.062263 env[1522]: time="2024-12-13T14:37:42.062224448Z" level=info msg="RemoveContainer for \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\" returns successfully" Dec 13 14:37:42.062420 kubelet[2081]: I1213 14:37:42.062399 2081 scope.go:117] "RemoveContainer" containerID="3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b" Dec 13 14:37:42.063896 env[1522]: time="2024-12-13T14:37:42.063862856Z" level=info msg="RemoveContainer for \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\"" Dec 13 14:37:42.073287 env[1522]: time="2024-12-13T14:37:42.073253001Z" level=info msg="RemoveContainer for \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\" returns successfully" Dec 13 14:37:42.073445 kubelet[2081]: I1213 14:37:42.073403 2081 scope.go:117] "RemoveContainer" containerID="4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929" Dec 13 14:37:42.074428 env[1522]: time="2024-12-13T14:37:42.074401906Z" level=info msg="RemoveContainer for \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\"" Dec 13 14:37:42.081369 env[1522]: time="2024-12-13T14:37:42.081333139Z" level=info msg="RemoveContainer for \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\" returns successfully" Dec 13 14:37:42.081527 kubelet[2081]: I1213 14:37:42.081506 2081 scope.go:117] "RemoveContainer" containerID="eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34" Dec 13 14:37:42.082428 env[1522]: time="2024-12-13T14:37:42.082403444Z" level=info msg="RemoveContainer for \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\"" Dec 13 14:37:42.088505 env[1522]: time="2024-12-13T14:37:42.088472172Z" level=info msg="RemoveContainer for \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\" returns successfully" Dec 13 14:37:42.088645 kubelet[2081]: I1213 14:37:42.088623 2081 scope.go:117] "RemoveContainer" containerID="af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049" Dec 13 14:37:42.088858 env[1522]: time="2024-12-13T14:37:42.088780374Z" level=error msg="ContainerStatus for \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\": not found" Dec 13 14:37:42.089009 kubelet[2081]: E1213 14:37:42.088991 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\": not found" containerID="af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049" Dec 13 14:37:42.089092 kubelet[2081]: I1213 14:37:42.089052 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049"} err="failed to get container status \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\": rpc error: code = NotFound desc = an error occurred when try to find container \"af2a3d315ae18a235d844da07ad3f928c05ab62f2d407970deffb700afe8e049\": not found" Dec 13 14:37:42.089092 kubelet[2081]: I1213 14:37:42.089070 2081 scope.go:117] "RemoveContainer" containerID="1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286" Dec 13 14:37:42.089272 env[1522]: time="2024-12-13T14:37:42.089219176Z" level=error msg="ContainerStatus for \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\": not found" Dec 13 14:37:42.097948 kubelet[2081]: E1213 14:37:42.096973 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\": not found" containerID="1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286" Dec 13 14:37:42.097948 kubelet[2081]: I1213 14:37:42.097016 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286"} err="failed to get container status \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ac7d1f38c6637e3a648ec8f028830a0a9f43d2e69f1dc76c89d352d5c1f8286\": not found" Dec 13 14:37:42.097948 kubelet[2081]: I1213 14:37:42.097031 2081 scope.go:117] "RemoveContainer" containerID="3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b" Dec 13 14:37:42.098131 env[1522]: time="2024-12-13T14:37:42.097198314Z" level=error msg="ContainerStatus for \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\": not found" Dec 13 14:37:42.098296 kubelet[2081]: E1213 14:37:42.098267 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\": not found" containerID="3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b" Dec 13 14:37:42.098413 kubelet[2081]: I1213 14:37:42.098402 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b"} err="failed to get container status 
\"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ca3dbb8f2434ba16750f496a97f3ffaa8ffc70086254ae6c99e671a2fb3ae8b\": not found" Dec 13 14:37:42.098502 kubelet[2081]: I1213 14:37:42.098488 2081 scope.go:117] "RemoveContainer" containerID="4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929" Dec 13 14:37:42.099089 systemd[1]: var-lib-kubelet-pods-529216ce\x2d4852\x2d4f55\x2d9c72\x2da8133f06a8f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9gpj.mount: Deactivated successfully. Dec 13 14:37:42.099253 systemd[1]: var-lib-kubelet-pods-529216ce\x2d4852\x2d4f55\x2d9c72\x2da8133f06a8f4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:37:42.099384 systemd[1]: var-lib-kubelet-pods-529216ce\x2d4852\x2d4f55\x2d9c72\x2da8133f06a8f4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:37:42.102401 env[1522]: time="2024-12-13T14:37:42.102342338Z" level=error msg="ContainerStatus for \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\": not found" Dec 13 14:37:42.102555 kubelet[2081]: E1213 14:37:42.102512 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\": not found" containerID="4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929" Dec 13 14:37:42.102555 kubelet[2081]: I1213 14:37:42.102543 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929"} err="failed to get container status \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b486115bfa5c64696a59e09234f8af22f47f885aecd1a2fafefa3c8110fc929\": not found" Dec 13 14:37:42.102555 kubelet[2081]: I1213 14:37:42.102556 2081 scope.go:117] "RemoveContainer" containerID="eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34" Dec 13 14:37:42.102762 env[1522]: time="2024-12-13T14:37:42.102708540Z" level=error msg="ContainerStatus for \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\": not found" Dec 13 14:37:42.102941 kubelet[2081]: E1213 14:37:42.102928 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\": not found" containerID="eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34" Dec 13 14:37:42.103088 kubelet[2081]: I1213 14:37:42.103071 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34"} err="failed to get container status \"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"eb11481979b3f20198993a927992cece5131bcfe83d25623bba280e99fdc5f34\": not found" Dec 13 14:37:42.157363 env[1522]: time="2024-12-13T14:37:42.157306698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tm65d,Uid:c96ad03d-9f3b-4d39-a90c-2b4853282c3a,Namespace:kube-system,Attempt:0,}" Dec 13 14:37:42.201815 env[1522]: time="2024-12-13T14:37:42.201748308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:37:42.202517 env[1522]: time="2024-12-13T14:37:42.201793508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:37:42.202517 env[1522]: time="2024-12-13T14:37:42.201814108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:37:42.202517 env[1522]: time="2024-12-13T14:37:42.201982209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0 pid=3644 runtime=io.containerd.runc.v2 Dec 13 14:37:42.235660 systemd[1]: run-containerd-runc-k8s.io-d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0-runc.aVXgCS.mount: Deactivated successfully. Dec 13 14:37:42.255066 env[1522]: time="2024-12-13T14:37:42.255016459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tm65d,Uid:c96ad03d-9f3b-4d39-a90c-2b4853282c3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\"" Dec 13 14:37:42.258172 env[1522]: time="2024-12-13T14:37:42.258137374Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:37:42.284045 env[1522]: time="2024-12-13T14:37:42.283881296Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"327246953e4b85d18a1a04146f922b1d2720255662f44cd36b0a993898ba724d\"" Dec 13 14:37:42.284838 env[1522]: time="2024-12-13T14:37:42.284807900Z" level=info msg="StartContainer for \"327246953e4b85d18a1a04146f922b1d2720255662f44cd36b0a993898ba724d\"" Dec 13 14:37:42.331819 env[1522]: time="2024-12-13T14:37:42.331769022Z" level=info msg="StartContainer for \"327246953e4b85d18a1a04146f922b1d2720255662f44cd36b0a993898ba724d\" returns successfully" Dec 13 14:37:42.767782 kubelet[2081]: E1213 14:37:42.767715 2081 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:42.820239 kubelet[2081]: E1213 14:37:42.820182 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:42.849416 kubelet[2081]: E1213 14:37:42.849370 2081 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:37:42.982754 kubelet[2081]: I1213 14:37:42.982692 2081 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="529216ce-4852-4f55-9c72-a8133f06a8f4" path="/var/lib/kubelet/pods/529216ce-4852-4f55-9c72-a8133f06a8f4/volumes" Dec 13 14:37:43.237461 env[1522]: 
time="2024-12-13T14:37:43.237388789Z" level=error msg="collecting metrics for 327246953e4b85d18a1a04146f922b1d2720255662f44cd36b0a993898ba724d" error="cgroups: cgroup deleted: unknown" Dec 13 14:37:43.331492 env[1522]: time="2024-12-13T14:37:43.331427128Z" level=info msg="shim disconnected" id=327246953e4b85d18a1a04146f922b1d2720255662f44cd36b0a993898ba724d Dec 13 14:37:43.331492 env[1522]: time="2024-12-13T14:37:43.331486428Z" level=warning msg="cleaning up after shim disconnected" id=327246953e4b85d18a1a04146f922b1d2720255662f44cd36b0a993898ba724d namespace=k8s.io Dec 13 14:37:43.331492 env[1522]: time="2024-12-13T14:37:43.331500928Z" level=info msg="cleaning up dead shim" Dec 13 14:37:43.340102 env[1522]: time="2024-12-13T14:37:43.340060768Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:37:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3728 runtime=io.containerd.runc.v2\n" Dec 13 14:37:43.820515 kubelet[2081]: E1213 14:37:43.820450 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:44.052926 env[1522]: time="2024-12-13T14:37:44.052876795Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:37:44.281882 env[1522]: time="2024-12-13T14:37:44.281815353Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6fb2899d130eb3903e43d0aebf13b41df61181742b489bb54d324df044f54a2b\"" Dec 13 14:37:44.282879 env[1522]: time="2024-12-13T14:37:44.282830357Z" level=info msg="StartContainer for \"6fb2899d130eb3903e43d0aebf13b41df61181742b489bb54d324df044f54a2b\"" Dec 13 14:37:44.353026 env[1522]: time="2024-12-13T14:37:44.352975281Z" level=info msg="StartContainer for \"6fb2899d130eb3903e43d0aebf13b41df61181742b489bb54d324df044f54a2b\" returns successfully" Dec 13 14:37:44.821085 kubelet[2081]: E1213 14:37:44.821028 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:45.129683 env[1522]: time="2024-12-13T14:37:45.129518062Z" level=info msg="shim disconnected" id=6fb2899d130eb3903e43d0aebf13b41df61181742b489bb54d324df044f54a2b Dec 13 14:37:45.129683 env[1522]: time="2024-12-13T14:37:45.129582762Z" level=warning msg="cleaning up after shim disconnected" id=6fb2899d130eb3903e43d0aebf13b41df61181742b489bb54d324df044f54a2b namespace=k8s.io Dec 13 14:37:45.129683 env[1522]: time="2024-12-13T14:37:45.129596862Z" level=info msg="cleaning up dead shim" Dec 13 14:37:45.138585 env[1522]: time="2024-12-13T14:37:45.138544703Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:37:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3791 runtime=io.containerd.runc.v2\n" Dec 13 14:37:45.143559 systemd[1]: run-containerd-runc-k8s.io-6fb2899d130eb3903e43d0aebf13b41df61181742b489bb54d324df044f54a2b-runc.hRuqb8.mount: Deactivated successfully. Dec 13 14:37:45.143739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fb2899d130eb3903e43d0aebf13b41df61181742b489bb54d324df044f54a2b-rootfs.mount: Deactivated successfully. 
Dec 13 14:37:45.822163 kubelet[2081]: E1213 14:37:45.822103 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:46.060856 env[1522]: time="2024-12-13T14:37:46.060802614Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:37:46.822631 kubelet[2081]: E1213 14:37:46.822570 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:47.390007 env[1522]: time="2024-12-13T14:37:47.389948103Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5749dce75290ea8f8202aa0004f973d1cda01ca09f41fe2bb04d112040c2e46\"" Dec 13 14:37:47.391023 env[1522]: time="2024-12-13T14:37:47.390984207Z" level=info msg="StartContainer for \"b5749dce75290ea8f8202aa0004f973d1cda01ca09f41fe2bb04d112040c2e46\"" Dec 13 14:37:47.485441 env[1522]: time="2024-12-13T14:37:47.485388829Z" level=info msg="StartContainer for \"b5749dce75290ea8f8202aa0004f973d1cda01ca09f41fe2bb04d112040c2e46\" returns successfully" Dec 13 14:37:47.877150 kubelet[2081]: E1213 14:37:47.823096 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:47.877150 kubelet[2081]: E1213 14:37:47.850084 2081 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:37:47.985451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5749dce75290ea8f8202aa0004f973d1cda01ca09f41fe2bb04d112040c2e46-rootfs.mount: Deactivated successfully. Dec 13 14:37:48.542311 env[1522]: time="2024-12-13T14:37:48.542259332Z" level=info msg="shim disconnected" id=b5749dce75290ea8f8202aa0004f973d1cda01ca09f41fe2bb04d112040c2e46 Dec 13 14:37:48.542921 env[1522]: time="2024-12-13T14:37:48.542889534Z" level=warning msg="cleaning up after shim disconnected" id=b5749dce75290ea8f8202aa0004f973d1cda01ca09f41fe2bb04d112040c2e46 namespace=k8s.io Dec 13 14:37:48.543023 env[1522]: time="2024-12-13T14:37:48.543007735Z" level=info msg="cleaning up dead shim" Dec 13 14:37:48.552541 env[1522]: time="2024-12-13T14:37:48.552505577Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:37:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3851 runtime=io.containerd.runc.v2\n" Dec 13 14:37:48.823710 kubelet[2081]: E1213 14:37:48.823225 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:49.071655 env[1522]: time="2024-12-13T14:37:49.071613172Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:37:49.283315 env[1522]: time="2024-12-13T14:37:49.283250899Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:49.388262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097714780.mount: Deactivated successfully. 
Dec 13 14:37:49.397593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304369025.mount: Deactivated successfully. Dec 13 14:37:49.427292 env[1522]: time="2024-12-13T14:37:49.427238630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:49.630857 env[1522]: time="2024-12-13T14:37:49.630709822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:49.631867 env[1522]: time="2024-12-13T14:37:49.631822526Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:37:49.632075 env[1522]: time="2024-12-13T14:37:49.632036927Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"424b43cb321bed1b2c2701c012299fa3e6bc26e3ce6624ea3b64cb5107cf3865\"" Dec 13 14:37:49.633336 env[1522]: time="2024-12-13T14:37:49.633292833Z" level=info msg="StartContainer for \"424b43cb321bed1b2c2701c012299fa3e6bc26e3ce6624ea3b64cb5107cf3865\"" Dec 13 14:37:49.636189 env[1522]: time="2024-12-13T14:37:49.636139145Z" level=info msg="CreateContainer within sandbox \"ceb9fca67972de8153fb6f0720120d80345fd902b983b9fd75bb63192a578cfd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:37:49.793159 env[1522]: time="2024-12-13T14:37:49.793100033Z" level=info msg="StartContainer for \"424b43cb321bed1b2c2701c012299fa3e6bc26e3ce6624ea3b64cb5107cf3865\" returns successfully" Dec 13 14:37:49.824372 kubelet[2081]: E1213 14:37:49.824332 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:52.129842 kubelet[2081]: E1213 14:37:50.824824 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:52.129842 kubelet[2081]: E1213 14:37:51.825921 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:37:50.382551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-424b43cb321bed1b2c2701c012299fa3e6bc26e3ce6624ea3b64cb5107cf3865-rootfs.mount: Deactivated successfully. Dec 13 14:37:52.549227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180312977.mount: Deactivated successfully. Dec 13 14:37:52.554778 env[1522]: time="2024-12-13T14:37:52.554566439Z" level=info msg="shim disconnected" id=424b43cb321bed1b2c2701c012299fa3e6bc26e3ce6624ea3b64cb5107cf3865 Dec 13 14:37:52.554778 env[1522]: time="2024-12-13T14:37:52.554627839Z" level=warning msg="cleaning up after shim disconnected" id=424b43cb321bed1b2c2701c012299fa3e6bc26e3ce6624ea3b64cb5107cf3865 namespace=k8s.io Dec 13 14:37:52.554778 env[1522]: time="2024-12-13T14:37:52.554641939Z" level=info msg="cleaning up dead shim" Dec 13 14:37:52.561755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935887172.mount: Deactivated successfully. 
Dec 13 14:37:52.563916 env[1522]: time="2024-12-13T14:37:52.563870678Z" level=info msg="CreateContainer within sandbox \"ceb9fca67972de8153fb6f0720120d80345fd902b983b9fd75bb63192a578cfd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"42ca90cd5cb9845f1fa39c0c66e0d7165d469fdc7809e14d65ad87319a915add\""
Dec 13 14:37:52.564677 env[1522]: time="2024-12-13T14:37:52.564649582Z" level=info msg="StartContainer for \"42ca90cd5cb9845f1fa39c0c66e0d7165d469fdc7809e14d65ad87319a915add\""
Dec 13 14:37:52.571782 env[1522]: time="2024-12-13T14:37:52.571755612Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:37:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3909 runtime=io.containerd.runc.v2\n"
Dec 13 14:37:52.624078 env[1522]: time="2024-12-13T14:37:52.624033434Z" level=info msg="StartContainer for \"42ca90cd5cb9845f1fa39c0c66e0d7165d469fdc7809e14d65ad87319a915add\" returns successfully"
Dec 13 14:37:52.826617 kubelet[2081]: E1213 14:37:52.826390 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:37:52.858150 kubelet[2081]: E1213 14:37:52.858116 2081 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:37:53.086835 env[1522]: time="2024-12-13T14:37:53.086729001Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:37:53.114527 kubelet[2081]: I1213 14:37:53.114496 2081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-qwf5q" podStartSLOduration=12.647659254 podStartE2EDuration="21.114460918s" podCreationTimestamp="2024-12-13 14:37:32 +0000 UTC" firstStartedPulling="2024-12-13 14:37:41.165583465 +0000 UTC m=+79.104069966" lastFinishedPulling="2024-12-13 14:37:49.632385029 +0000 UTC m=+87.570871630" observedRunningTime="2024-12-13 14:37:53.114448918 +0000 UTC m=+91.052935419" watchObservedRunningTime="2024-12-13 14:37:53.114460918 +0000 UTC m=+91.052947419"
Dec 13 14:37:53.119858 env[1522]: time="2024-12-13T14:37:53.119816140Z" level=info msg="CreateContainer within sandbox \"d5e5359719a389f509451c9e1223dd155e7f0951782cf795066fe76e8b66abd0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2001a78f7a85434d541897f165edd5ce2c18a15a60b37f90fd3a2c2b2572b0bb\""
Dec 13 14:37:53.120352 env[1522]: time="2024-12-13T14:37:53.120321142Z" level=info msg="StartContainer for \"2001a78f7a85434d541897f165edd5ce2c18a15a60b37f90fd3a2c2b2572b0bb\""
Dec 13 14:37:53.171139 env[1522]: time="2024-12-13T14:37:53.171075656Z" level=info msg="StartContainer for \"2001a78f7a85434d541897f165edd5ce2c18a15a60b37f90fd3a2c2b2572b0bb\" returns successfully"
Dec 13 14:37:53.526929 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:37:53.827062 kubelet[2081]: E1213 14:37:53.826923 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:37:54.109580 kubelet[2081]: I1213 14:37:54.109273 2081 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tm65d" podStartSLOduration=21.109239809 podStartE2EDuration="21.109239809s" podCreationTimestamp="2024-12-13 14:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:37:54.109124508 +0000 UTC m=+92.047611109" watchObservedRunningTime="2024-12-13 14:37:54.109239809 +0000 UTC m=+92.047726410"
Dec 13 14:37:54.661724 systemd[1]: run-containerd-runc-k8s.io-2001a78f7a85434d541897f165edd5ce2c18a15a60b37f90fd3a2c2b2572b0bb-runc.G0xP1w.mount: Deactivated successfully.
Dec 13 14:37:54.827914 kubelet[2081]: E1213 14:37:54.827867 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:37:55.829312 kubelet[2081]: E1213 14:37:55.829245 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:37:56.130303 systemd-networkd[1691]: lxc_health: Link UP
Dec 13 14:37:56.144738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:37:56.144562 systemd-networkd[1691]: lxc_health: Gained carrier
Dec 13 14:37:56.785491 systemd[1]: run-containerd-runc-k8s.io-2001a78f7a85434d541897f165edd5ce2c18a15a60b37f90fd3a2c2b2572b0bb-runc.drH0bg.mount: Deactivated successfully.
Dec 13 14:37:56.830983 kubelet[2081]: E1213 14:37:56.830945 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:37:56.859768 kubelet[2081]: E1213 14:37:56.859731 2081 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:44812->127.0.0.1:45663: read: connection reset by peer
Dec 13 14:37:57.831359 kubelet[2081]: E1213 14:37:57.831315 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:37:57.868073 systemd-networkd[1691]: lxc_health: Gained IPv6LL
Dec 13 14:37:58.832058 kubelet[2081]: E1213 14:37:58.832008 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:37:58.966524 systemd[1]: run-containerd-runc-k8s.io-2001a78f7a85434d541897f165edd5ce2c18a15a60b37f90fd3a2c2b2572b0bb-runc.wdtuik.mount: Deactivated successfully.
Dec 13 14:37:59.044533 kubelet[2081]: E1213 14:37:59.044494 2081 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44818->127.0.0.1:45663: write tcp 127.0.0.1:44818->127.0.0.1:45663: write: broken pipe
Dec 13 14:37:59.832658 kubelet[2081]: E1213 14:37:59.832617 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:00.834264 kubelet[2081]: E1213 14:38:00.834209 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:01.834850 kubelet[2081]: E1213 14:38:01.834793 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:02.768225 kubelet[2081]: E1213 14:38:02.768144 2081 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:02.835347 kubelet[2081]: E1213 14:38:02.835303 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:03.836018 kubelet[2081]: E1213 14:38:03.835963 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:04.836738 kubelet[2081]: E1213 14:38:04.836692 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:05.837786 kubelet[2081]: E1213 14:38:05.837722 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:06.838165 kubelet[2081]: E1213 14:38:06.838115 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:38:07.838971 kubelet[2081]: E1213 14:38:07.838914 2081 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"