Dec 13 14:32:33.127106 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:32:33.127139 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:32:33.127149 kernel: BIOS-provided physical RAM map: Dec 13 14:32:33.127157 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 14:32:33.127163 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 13 14:32:33.127168 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Dec 13 14:32:33.127181 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Dec 13 14:32:33.127187 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 13 14:32:33.127196 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 13 14:32:33.127201 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 13 14:32:33.127207 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 13 14:32:33.127213 kernel: printk: bootconsole [earlyser0] enabled Dec 13 14:32:33.127221 kernel: NX (Execute Disable) protection: active Dec 13 14:32:33.127227 kernel: efi: EFI v2.70 by Microsoft Dec 13 14:32:33.127240 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 Dec 13 14:32:33.127246 kernel: random: crng init done Dec 13 14:32:33.127252 kernel: SMBIOS 3.1.0 present. 
Dec 13 14:32:33.127262 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Dec 13 14:32:33.127268 kernel: Hypervisor detected: Microsoft Hyper-V Dec 13 14:32:33.127278 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Dec 13 14:32:33.127284 kernel: Hyper-V Host Build:20348-10.0-1-0.1633 Dec 13 14:32:33.127290 kernel: Hyper-V: Nested features: 0x1e0101 Dec 13 14:32:33.127301 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 13 14:32:33.127308 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 13 14:32:33.127316 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 14:32:33.127324 kernel: tsc: Marking TSC unstable due to running on Hyper-V Dec 13 14:32:33.127331 kernel: tsc: Detected 2593.905 MHz processor Dec 13 14:32:33.127339 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:32:33.127347 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:32:33.127354 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Dec 13 14:32:33.127363 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:32:33.127369 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Dec 13 14:32:33.127380 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Dec 13 14:32:33.127388 kernel: Using GB pages for direct mapping Dec 13 14:32:33.127395 kernel: Secure boot disabled Dec 13 14:32:33.127404 kernel: ACPI: Early table checksum verification disabled Dec 13 14:32:33.127410 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 13 14:32:33.127417 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127426 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127433 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 14:32:33.127448 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 13 14:32:33.127455 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127465 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127471 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127481 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127488 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127499 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127508 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:32:33.127516 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 13 14:32:33.127524 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Dec 13 14:32:33.127531 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 13 14:32:33.127540 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 13 14:32:33.127548 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 13 14:32:33.127556 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 14:32:33.127567 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Dec 13 14:32:33.127574 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 14:32:33.127583 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 14:32:33.127591 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 14:32:33.127599 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 14:32:33.127607 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:32:33.127614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 14:32:33.127622 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 14:32:33.127631 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 14:32:33.127643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 14:32:33.127651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 14:32:33.127658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 14:32:33.127668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 14:32:33.127675 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 14:32:33.127684 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 14:32:33.127702 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 14:32:33.127711 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 14:32:33.127718 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 14:32:33.127730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 14:32:33.127737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 14:32:33.127745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 14:32:33.127754 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 14:32:33.127762 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 14:32:33.127771 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 14:32:33.127778 kernel: Zone ranges: Dec 13 14:32:33.127786 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:32:33.127794 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 14:32:33.127807 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 14:32:33.127814 kernel: Movable zone start for each node Dec 13 14:32:33.127821 kernel: Early memory node ranges Dec 13 14:32:33.127831 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 14:32:33.127838 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 14:32:33.127848 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 14:32:33.127855 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 14:32:33.127862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 14:32:33.127871 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:32:33.127884 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 14:32:33.127892 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Dec 13 14:32:33.127899 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 13 14:32:33.127906 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 14:32:33.127916 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:32:33.127924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:32:33.127934 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:32:33.127941 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 14:32:33.127948 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:32:33.127960 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 14:32:33.127969 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 14:32:33.127977 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:32:33.127984 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:32:33.127992 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:32:33.128001 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:32:33.128009 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:32:33.128018 kernel: Hyper-V: PV spinlocks enabled Dec 13 14:32:33.128025 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:32:33.128036 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 14:32:33.128044 kernel: Policy zone: Normal Dec 13 14:32:33.128056 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:32:33.128063 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:32:33.128070 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 14:32:33.128079 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:32:33.128088 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:32:33.128098 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 308056K reserved, 0K cma-reserved) Dec 13 14:32:33.128108 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:32:33.128116 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:32:33.128135 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:32:33.128146 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:32:33.128155 kernel: rcu: RCU event tracing is enabled. Dec 13 14:32:33.128165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:32:33.128173 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:32:33.128183 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:32:33.128190 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:32:33.128198 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:32:33.128208 kernel: Using NULL legacy PIC Dec 13 14:32:33.128222 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 14:32:33.128229 kernel: Console: colour dummy device 80x25 Dec 13 14:32:33.128237 kernel: printk: console [tty1] enabled Dec 13 14:32:33.128247 kernel: printk: console [ttyS0] enabled Dec 13 14:32:33.128255 kernel: printk: bootconsole [earlyser0] disabled Dec 13 14:32:33.128268 kernel: ACPI: Core revision 20210730 Dec 13 14:32:33.128275 kernel: Failed to register legacy timer interrupt Dec 13 14:32:33.128284 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:32:33.128293 kernel: Hyper-V: Using IPI hypercalls Dec 13 14:32:33.128303 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Dec 13 14:32:33.128311 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 14:32:33.128318 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 14:32:33.128329 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:32:33.128336 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:32:33.128346 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:32:33.128356 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:32:33.128366 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 13 14:32:33.128374 kernel: RETBleed: Vulnerable Dec 13 14:32:33.128384 kernel: Speculative Store Bypass: Vulnerable Dec 13 14:32:33.128391 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:32:33.128398 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:32:33.128408 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 14:32:33.128416 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:32:33.128426 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:32:33.128433 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:32:33.128446 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 14:32:33.128453 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 14:32:33.128464 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 14:32:33.128471 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:32:33.128478 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 14:32:33.128488 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 14:32:33.128495 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 14:32:33.128505 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 14:32:33.128513 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:32:33.128520 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:32:33.128530 kernel: LSM: Security Framework initializing Dec 13 14:32:33.128537 kernel: SELinux: Initializing. 
Dec 13 14:32:33.128550 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:32:33.128557 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:32:33.128568 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 14:32:33.128575 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 14:32:33.128582 kernel: signal: max sigframe size: 3632 Dec 13 14:32:33.128593 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:32:33.128600 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 14:32:33.128608 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:32:33.128618 kernel: x86: Booting SMP configuration: Dec 13 14:32:33.128626 kernel: .... node #0, CPUs: #1 Dec 13 14:32:33.128639 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 14:32:33.128647 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 14:32:33.128656 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:32:33.128665 kernel: smpboot: Max logical packages: 1 Dec 13 14:32:33.128675 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Dec 13 14:32:33.128683 kernel: devtmpfs: initialized Dec 13 14:32:33.128696 kernel: x86/mm: Memory block size: 128MB Dec 13 14:32:33.128706 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 14:32:33.128719 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:32:33.128727 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:32:33.128734 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:32:33.128745 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:32:33.128753 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:32:33.128763 kernel: audit: type=2000 audit(1734100352.024:1): state=initialized audit_enabled=0 res=1 Dec 13 14:32:33.128770 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:32:33.128778 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:32:33.128790 kernel: cpuidle: using governor menu Dec 13 14:32:33.128803 kernel: ACPI: bus type PCI registered Dec 13 14:32:33.128810 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:32:33.128819 kernel: dca service started, version 1.12.1 Dec 13 14:32:33.128829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:32:33.128838 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:32:33.128847 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:32:33.128854 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:32:33.128865 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:32:33.128872 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:32:33.128885 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:32:33.128892 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:32:33.128899 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:32:33.128910 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:32:33.128917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:32:33.128927 kernel: ACPI: Interpreter enabled Dec 13 14:32:33.128934 kernel: ACPI: PM: (supports S0 S5) Dec 13 14:32:33.128942 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:32:33.128952 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:32:33.128966 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 14:32:33.128974 kernel: iommu: Default domain type: Translated Dec 13 14:32:33.128981 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:32:33.128992 kernel: vgaarb: loaded Dec 13 14:32:33.128999 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:32:33.129009 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:32:33.129017 kernel: PTP clock support registered Dec 13 14:32:33.129025 kernel: Registered efivars operations Dec 13 14:32:33.129034 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:32:33.129044 kernel: PCI: System does not support PCI Dec 13 14:32:33.129055 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 14:32:33.129063 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:32:33.129073 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:32:33.129082 kernel: pnp: PnP ACPI init Dec 13 14:32:33.129091 kernel: pnp: PnP ACPI: found 3 devices Dec 13 14:32:33.129098 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:32:33.129108 kernel: NET: Registered PF_INET protocol family Dec 13 14:32:33.129116 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:32:33.129130 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 14:32:33.129138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:32:33.129146 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:32:33.129155 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:32:33.129163 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 14:32:33.129173 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:32:33.129180 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:32:33.129190 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:32:33.129198 kernel: NET: Registered PF_XDP protocol family Dec 13 14:32:33.129211 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:32:33.129219 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:32:33.129227 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 14:32:33.129238 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:32:33.129245 kernel: Initialise system trusted keyrings Dec 13 14:32:33.129255 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 14:32:33.129263 kernel: Key type asymmetric registered Dec 13 14:32:33.129270 kernel: Asymmetric key parser 'x509' registered Dec 13 14:32:33.129280 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:32:33.129292 kernel: io scheduler mq-deadline registered Dec 13 14:32:33.129300 kernel: io scheduler kyber registered Dec 13 14:32:33.129308 kernel: io scheduler bfq registered Dec 13 14:32:33.129315 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:32:33.129325 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:32:33.129334 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:32:33.129344 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 14:32:33.129351 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 14:32:33.129517 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 14:32:33.129610 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T14:32:32 UTC (1734100352) Dec 13 14:32:33.129700 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 14:32:33.129713 kernel: fail to initialize ptp_kvm Dec 13 14:32:33.129721 kernel: intel_pstate: CPU model not supported Dec 13 14:32:33.129729 kernel: efifb: probing for efifb Dec 13 14:32:33.129740 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 14:32:33.129748 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 14:32:33.129758 kernel: efifb: scrolling: redraw Dec 13 14:32:33.129768 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:32:33.129779 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:32:33.129787 kernel: fb0: EFI VGA frame buffer device Dec 13 14:32:33.129794 kernel: pstore: Registered efi as persistent store backend Dec 13 14:32:33.129805 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:32:33.129812 kernel: Segment Routing with IPv6 Dec 13 14:32:33.129820 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:32:33.129830 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:32:33.129838 kernel: Key type dns_resolver registered Dec 13 14:32:33.129851 kernel: IPI shorthand broadcast: enabled Dec 13 14:32:33.129858 kernel: sched_clock: Marking stable (783090700, 25051200)->(1038830600, -230688700) Dec 13 14:32:33.129866 kernel: registered taskstats version 1 Dec 13 14:32:33.129877 kernel: Loading compiled-in X.509 certificates Dec 13 14:32:33.129887 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:32:33.129895 kernel: Key type .fscrypt registered Dec 13 14:32:33.129902 kernel: Key type fscrypt-provisioning registered Dec 13 14:32:33.129913 kernel: pstore: Using crash dump compression: deflate
Dec 13 14:32:33.129925 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:32:33.129934 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:32:33.129941 kernel: ima: No architecture policies found Dec 13 14:32:33.129949 kernel: clk: Disabling unused clocks Dec 13 14:32:33.129960 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:32:33.129967 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:32:33.129978 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:32:33.129986 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:32:33.129993 kernel: Run /init as init process Dec 13 14:32:33.130001 kernel: with arguments: Dec 13 14:32:33.130015 kernel: /init Dec 13 14:32:33.130022 kernel: with environment: Dec 13 14:32:33.130032 kernel: HOME=/ Dec 13 14:32:33.130039 kernel: TERM=linux Dec 13 14:32:33.130047 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:32:33.130057 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:32:33.130067 systemd[1]: Detected virtualization microsoft. Dec 13 14:32:33.130078 systemd[1]: Detected architecture x86-64. Dec 13 14:32:33.130085 systemd[1]: Running in initrd. Dec 13 14:32:33.130093 systemd[1]: No hostname configured, using default hostname. Dec 13 14:32:33.130100 systemd[1]: Hostname set to . Dec 13 14:32:33.130108 systemd[1]: Initializing machine ID from random generator. Dec 13 14:32:33.130116 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:32:33.130127 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:32:33.130135 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:32:33.130144 systemd[1]: Reached target paths.target. Dec 13 14:32:33.130155 systemd[1]: Reached target slices.target. Dec 13 14:32:33.130170 systemd[1]: Reached target swap.target. Dec 13 14:32:33.130181 systemd[1]: Reached target timers.target. Dec 13 14:32:33.130190 systemd[1]: Listening on iscsid.socket. Dec 13 14:32:33.130201 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:32:33.130208 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:32:33.130222 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:32:33.130237 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:32:33.130245 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:32:33.130254 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:32:33.130264 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:32:33.130274 systemd[1]: Reached target sockets.target. Dec 13 14:32:33.130283 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:32:33.130291 systemd[1]: Finished network-cleanup.service. Dec 13 14:32:33.130299 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:32:33.130308 systemd[1]: Starting systemd-journald.service... Dec 13 14:32:33.130321 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:32:33.130329 systemd[1]: Starting systemd-resolved.service... Dec 13 14:32:33.130337 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:32:33.130348 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:32:33.130357 kernel: audit: type=1130 audit(1734100353.126:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.130372 systemd-journald[183]: Journal started Dec 13 14:32:33.130431 systemd-journald[183]: Runtime Journal (/run/log/journal/523611c5405c48f6aec73b7e6311584c) is 8.0M, max 159.0M, 151.0M free. 
Dec 13 14:32:33.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.130946 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 14:32:33.144535 systemd[1]: Started systemd-journald.service. Dec 13 14:32:33.154617 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:32:33.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.177456 kernel: audit: type=1130 audit(1734100353.153:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.173028 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:32:33.181954 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:32:33.192368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:32:33.206079 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:32:33.221088 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:32:33.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.237845 kernel: audit: type=1130 audit(1734100353.160:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.235257 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:32:33.241730 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 14:32:33.251409 systemd-resolved[185]: Positive Trust Anchors: Dec 13 14:32:33.253946 kernel: Bridge firewalling registered Dec 13 14:32:33.251425 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:32:33.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.259145 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:32:33.294482 kernel: audit: type=1130 audit(1734100353.176:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.294524 kernel: audit: type=1130 audit(1734100353.234:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.294599 dracut-cmdline[201]: dracut-dracut-053 Dec 13 14:32:33.294599 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:32:33.326649 kernel: audit: type=1130 audit(1734100353.239:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.262243 systemd-resolved[185]: Defaulting to hostname 'linux'. Dec 13 14:32:33.271292 systemd[1]: Started systemd-resolved.service. Dec 13 14:32:33.295019 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 14:32:33.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.296753 systemd[1]: Reached target nss-lookup.target. Dec 13 14:32:33.357064 kernel: audit: type=1130 audit(1734100353.295:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.357118 kernel: SCSI subsystem initialized Dec 13 14:32:33.374715 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:32:33.378713 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:32:33.385716 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:32:33.389064 systemd-modules-load[184]: Inserted module 'dm_multipath' Dec 13 14:32:33.408259 kernel: audit: type=1130 audit(1734100353.393:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.389993 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:32:33.395173 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:32:33.424535 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:32:33.440607 kernel: audit: type=1130 audit(1734100353.426:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:33.444711 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:32:33.464718 kernel: iscsi: registered transport (tcp) Dec 13 14:32:33.493272 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:32:33.493377 kernel: QLogic iSCSI HBA Driver Dec 13 14:32:33.524889 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:32:33.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 14:32:33.530448 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:32:33.582722 kernel: raid6: avx512x4 gen() 26818 MB/s Dec 13 14:32:33.602705 kernel: raid6: avx512x4 xor() 6208 MB/s Dec 13 14:32:33.624764 kernel: raid6: avx512x2 gen() 26945 MB/s Dec 13 14:32:33.644749 kernel: raid6: avx512x2 xor() 26542 MB/s Dec 13 14:32:33.664739 kernel: raid6: avx512x1 gen() 26175 MB/s Dec 13 14:32:33.685730 kernel: raid6: avx512x1 xor() 18464 MB/s Dec 13 14:32:33.706751 kernel: raid6: avx2x4 gen() 19719 MB/s Dec 13 14:32:33.728756 kernel: raid6: avx2x4 xor() 5531 MB/s Dec 13 14:32:33.748706 kernel: raid6: avx2x2 gen() 22116 MB/s Dec 13 14:32:33.768704 kernel: raid6: avx2x2 xor() 21774 MB/s Dec 13 14:32:33.788701 kernel: raid6: avx2x1 gen() 21509 MB/s Dec 13 14:32:33.808705 kernel: raid6: avx2x1 xor() 19014 MB/s Dec 13 14:32:33.828701 kernel: raid6: sse2x4 gen() 10417 MB/s Dec 13 14:32:33.848702 kernel: raid6: sse2x4 xor() 6443 MB/s Dec 13 14:32:33.869705 kernel: raid6: sse2x2 gen() 11642 MB/s Dec 13 14:32:33.889701 kernel: raid6: sse2x2 xor() 7422 MB/s Dec 13 14:32:33.909701 kernel: raid6: sse2x1 gen() 10375 MB/s Dec 13 14:32:33.933856 kernel: raid6: sse2x1 xor() 5894 MB/s Dec 13 14:32:33.933884 kernel: raid6: using algorithm avx512x2 gen() 26945 MB/s Dec 13 14:32:33.933897 kernel: raid6: .... xor() 26542 MB/s, rmw enabled Dec 13 14:32:33.937314 kernel: raid6: using avx512x2 recovery algorithm Dec 13 14:32:33.955716 kernel: xor: automatically using best checksumming function avx Dec 13 14:32:34.055723 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:32:34.064962 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:32:34.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:34.068000 audit: BPF prog-id=7 op=LOAD Dec 13 14:32:34.068000 audit: BPF prog-id=8 op=LOAD Dec 13 14:32:34.070056 systemd[1]: Starting systemd-udevd.service... Dec 13 14:32:34.085902 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 14:32:34.090753 systemd[1]: Started systemd-udevd.service. Dec 13 14:32:34.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:34.095825 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:32:34.117655 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Dec 13 14:32:34.151496 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:32:34.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:34.157399 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:32:34.195042 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:32:34.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:34.240717 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:32:34.255712 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 14:32:34.286709 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 14:32:34.292720 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 14:32:34.313723 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 14:32:34.326720 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:32:34.332712 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 14:32:34.340311 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 14:32:34.340374 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 14:32:34.349854 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 14:32:34.359834 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:32:34.359920 kernel: AES CTR mode by8 optimization enabled Dec 13 14:32:34.363298 kernel: scsi host0: storvsc_host_t Dec 13 14:32:34.368713 kernel: scsi host1: storvsc_host_t Dec 13 14:32:34.374735 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 14:32:34.381709 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 14:32:34.408067 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Dec 13 14:32:34.416861 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:32:34.416884 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 14:32:34.435244 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Dec 13 14:32:34.435444 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Dec 13 14:32:34.435621 kernel: sd 1:0:0:0: [sda] Write Protect is off Dec 13 14:32:34.435803 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 14:32:34.435971 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and 
FUA Dec 13 14:32:34.436133 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:32:34.436153 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Dec 13 14:32:34.455711 kernel: hv_netvsc 7c1e5235-f0d7-7c1e-5235-f0d77c1e5235 eth0: VF slot 1 added Dec 13 14:32:34.470764 kernel: hv_vmbus: registering driver hv_pci Dec 13 14:32:34.470851 kernel: hv_pci 84b19388-a169-4c2a-bfc6-c2ec5c5cf0db: PCI VMBus probing: Using version 0x10004 Dec 13 14:32:34.553337 kernel: hv_pci 84b19388-a169-4c2a-bfc6-c2ec5c5cf0db: PCI host bridge to bus a169:00 Dec 13 14:32:34.553535 kernel: pci_bus a169:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 14:32:34.553739 kernel: pci_bus a169:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 14:32:34.553890 kernel: pci a169:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 14:32:34.554108 kernel: pci a169:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 14:32:34.554275 kernel: pci a169:00:02.0: enabling Extended Tags Dec 13 14:32:34.554470 kernel: pci a169:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a169:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 14:32:34.554626 kernel: pci_bus a169:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 14:32:34.554789 kernel: pci a169:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 14:32:34.594718 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (433) Dec 13 14:32:34.611000 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:32:34.629042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:32:34.680336 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:32:34.694763 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:32:34.700520 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Dec 13 14:32:34.716955 kernel: mlx5_core a169:00:02.0: firmware version: 14.30.5000 Dec 13 14:32:35.022208 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:32:35.022240 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:32:35.022254 kernel: mlx5_core a169:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 14:32:35.022437 kernel: mlx5_core a169:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 14:32:35.022592 kernel: mlx5_core a169:00:02.0: mlx5e_tc_post_act_init:40:(pid 190): firmware level support is missing Dec 13 14:32:35.022833 kernel: hv_netvsc 7c1e5235-f0d7-7c1e-5235-f0d77c1e5235 eth0: VF registering: eth1 Dec 13 14:32:35.023011 kernel: mlx5_core a169:00:02.0 eth1: joined to eth0 Dec 13 14:32:34.714993 systemd[1]: Starting disk-uuid.service... Dec 13 14:32:35.035717 kernel: mlx5_core a169:00:02.0 enP41321s1: renamed from eth1 Dec 13 14:32:35.761719 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:32:35.762473 disk-uuid[551]: The operation has completed successfully. Dec 13 14:32:35.850754 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:32:35.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:35.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:35.850878 systemd[1]: Finished disk-uuid.service. Dec 13 14:32:35.854094 systemd[1]: Starting verity-setup.service... Dec 13 14:32:35.884715 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:32:35.976876 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:32:35.982713 systemd[1]: Mounting sysusr-usr.mount... 
Dec 13 14:32:35.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:35.985000 systemd[1]: Finished verity-setup.service. Dec 13 14:32:36.062725 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:32:36.062629 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:32:36.064856 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:32:36.065943 systemd[1]: Starting ignition-setup.service... Dec 13 14:32:36.077648 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:32:36.100825 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:32:36.100920 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:32:36.100940 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:32:36.144489 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:32:36.158946 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:32:36.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:36.163000 audit: BPF prog-id=9 op=LOAD Dec 13 14:32:36.164250 systemd[1]: Starting systemd-networkd.service... Dec 13 14:32:36.192620 systemd-networkd[830]: lo: Link UP Dec 13 14:32:36.192629 systemd-networkd[830]: lo: Gained carrier Dec 13 14:32:36.193619 systemd-networkd[830]: Enumeration completed Dec 13 14:32:36.194030 systemd[1]: Started systemd-networkd.service. Dec 13 14:32:36.196239 systemd-networkd[830]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 14:32:36.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:36.203054 systemd[1]: Reached target network.target. Dec 13 14:32:36.210103 systemd[1]: Starting iscsiuio.service... Dec 13 14:32:36.223342 systemd[1]: Finished ignition-setup.service. Dec 13 14:32:36.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:36.227815 systemd[1]: Started iscsiuio.service. Dec 13 14:32:36.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:36.232527 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:32:36.237506 systemd[1]: Starting iscsid.service... Dec 13 14:32:36.244193 iscsid[837]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:32:36.244193 iscsid[837]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:32:36.244193 iscsid[837]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:32:36.244193 iscsid[837]: If using hardware iscsi like qla4xxx this message can be ignored. 
Dec 13 14:32:36.274580 kernel: mlx5_core a169:00:02.0 enP41321s1: Link up Dec 13 14:32:36.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:36.246006 systemd[1]: Started iscsid.service. Dec 13 14:32:36.276603 iscsid[837]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:32:36.276603 iscsid[837]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:32:36.264048 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:32:36.293997 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:32:36.298307 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:32:36.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:36.302592 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:32:36.318905 kernel: hv_netvsc 7c1e5235-f0d7-7c1e-5235-f0d77c1e5235 eth0: Data path switched to VF: enP41321s1 Dec 13 14:32:36.319138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:32:36.316063 systemd[1]: Reached target remote-fs.target. Dec 13 14:32:36.324384 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:32:36.332384 systemd-networkd[830]: enP41321s1: Link UP Dec 13 14:32:36.333428 systemd-networkd[830]: eth0: Link UP Dec 13 14:32:36.334317 systemd-networkd[830]: eth0: Gained carrier Dec 13 14:32:36.339952 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:32:36.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:36.345877 systemd-networkd[830]: enP41321s1: Gained carrier Dec 13 14:32:36.370824 systemd-networkd[830]: eth0: DHCPv4 address 10.200.8.26/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:32:37.011683 ignition[836]: Ignition 2.14.0 Dec 13 14:32:37.011730 ignition[836]: Stage: fetch-offline Dec 13 14:32:37.011828 ignition[836]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:37.011877 ignition[836]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:32:37.048018 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:32:37.048247 ignition[836]: parsed url from cmdline: "" Dec 13 14:32:37.050705 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:32:37.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.048251 ignition[836]: no config URL provided Dec 13 14:32:37.048258 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:32:37.048267 ignition[836]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:32:37.048274 ignition[836]: failed to fetch config: resource requires networking Dec 13 14:32:37.049591 ignition[836]: Ignition finished successfully Dec 13 14:32:37.067051 systemd[1]: Starting ignition-fetch.service... 
Dec 13 14:32:37.077119 ignition[856]: Ignition 2.14.0 Dec 13 14:32:37.077132 ignition[856]: Stage: fetch Dec 13 14:32:37.077286 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:37.077322 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:32:37.088183 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:32:37.088364 ignition[856]: parsed url from cmdline: "" Dec 13 14:32:37.088368 ignition[856]: no config URL provided Dec 13 14:32:37.088374 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:32:37.088382 ignition[856]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:32:37.088422 ignition[856]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 14:32:37.166460 ignition[856]: GET result: OK Dec 13 14:32:37.166594 ignition[856]: config has been read from IMDS userdata Dec 13 14:32:37.166627 ignition[856]: parsing config with SHA512: cdc23bbb55009c20fcd15a49a75a21b10255983deb9998d0bb99794cea43cea3ac4d6b853892464cd20115f564ee06d527bc9ac31138559f3dc22e0ce885308d Dec 13 14:32:37.170393 unknown[856]: fetched base config from "system" Dec 13 14:32:37.171002 ignition[856]: fetch: fetch complete Dec 13 14:32:37.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.170405 unknown[856]: fetched base config from "system" Dec 13 14:32:37.171011 ignition[856]: fetch: fetch passed Dec 13 14:32:37.170414 unknown[856]: fetched user config from "azure" Dec 13 14:32:37.171078 ignition[856]: Ignition finished successfully Dec 13 14:32:37.172994 systemd[1]: Finished ignition-fetch.service. 
Dec 13 14:32:37.176805 systemd[1]: Starting ignition-kargs.service... Dec 13 14:32:37.190510 ignition[862]: Ignition 2.14.0 Dec 13 14:32:37.190516 ignition[862]: Stage: kargs Dec 13 14:32:37.190643 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:37.190669 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:32:37.202578 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:32:37.203624 ignition[862]: kargs: kargs passed Dec 13 14:32:37.207018 systemd[1]: Finished ignition-kargs.service. Dec 13 14:32:37.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.203671 ignition[862]: Ignition finished successfully Dec 13 14:32:37.215675 systemd[1]: Starting ignition-disks.service... Dec 13 14:32:37.226336 ignition[868]: Ignition 2.14.0 Dec 13 14:32:37.226346 ignition[868]: Stage: disks Dec 13 14:32:37.226495 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:37.226534 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:32:37.230154 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:32:37.232027 ignition[868]: disks: disks passed Dec 13 14:32:37.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.233059 systemd[1]: Finished ignition-disks.service. 
Dec 13 14:32:37.258498 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 13 14:32:37.260528 kernel: audit: type=1130 audit(1734100357.237:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.232104 ignition[868]: Ignition finished successfully Dec 13 14:32:37.242884 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:32:37.258477 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:32:37.260537 systemd[1]: Reached target local-fs.target. Dec 13 14:32:37.262491 systemd[1]: Reached target sysinit.target. Dec 13 14:32:37.264481 systemd[1]: Reached target basic.target. Dec 13 14:32:37.269489 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:32:37.299199 systemd-fsck[876]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 14:32:37.303818 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:32:37.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.309304 systemd[1]: Mounting sysroot.mount... Dec 13 14:32:37.325465 kernel: audit: type=1130 audit(1734100357.307:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.338742 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:32:37.339046 systemd[1]: Mounted sysroot.mount. Dec 13 14:32:37.341141 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:32:37.352615 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:32:37.358569 systemd[1]: Starting flatcar-metadata-hostname.service... 
Dec 13 14:32:37.364335 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:32:37.364385 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:32:37.367794 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:32:37.385548 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:32:37.392011 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:32:37.400754 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (886) Dec 13 14:32:37.403204 initrd-setup-root[891]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:32:37.416287 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:32:37.416367 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:32:37.416382 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:32:37.422702 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:32:37.430158 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:32:37.435100 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:32:37.442581 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:32:37.581195 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:32:37.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.587206 systemd[1]: Starting ignition-mount.service... Dec 13 14:32:37.605438 kernel: audit: type=1130 audit(1734100357.585:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.601820 systemd[1]: Starting sysroot-boot.service... 
Dec 13 14:32:37.607873 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:32:37.608038 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:32:37.637052 ignition[953]: INFO : Ignition 2.14.0 Dec 13 14:32:37.639457 ignition[953]: INFO : Stage: mount Dec 13 14:32:37.639457 ignition[953]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:32:37.639457 ignition[953]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:32:37.650718 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:32:37.650718 ignition[953]: INFO : mount: mount passed Dec 13 14:32:37.650718 ignition[953]: INFO : Ignition finished successfully Dec 13 14:32:37.658909 systemd[1]: Finished ignition-mount.service. Dec 13 14:32:37.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.673772 kernel: audit: type=1130 audit(1734100357.660:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.676240 systemd[1]: Finished sysroot-boot.service. Dec 13 14:32:37.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.690979 kernel: audit: type=1130 audit(1734100357.677:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:37.795068 coreos-metadata[885]: Dec 13 14:32:37.794 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:32:37.801588 coreos-metadata[885]: Dec 13 14:32:37.801 INFO Fetch successful Dec 13 14:32:37.837373 coreos-metadata[885]: Dec 13 14:32:37.837 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:32:37.853843 coreos-metadata[885]: Dec 13 14:32:37.853 INFO Fetch successful Dec 13 14:32:37.862061 coreos-metadata[885]: Dec 13 14:32:37.862 INFO wrote hostname ci-3510.3.6-a-dd62f2eb18 to /sysroot/etc/hostname Dec 13 14:32:37.868164 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:32:37.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.884286 kernel: audit: type=1130 audit(1734100357.868:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:37.883192 systemd[1]: Starting ignition-files.service... Dec 13 14:32:37.895754 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:32:37.915454 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (964) Dec 13 14:32:37.915526 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:32:37.915539 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:32:37.922574 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:32:37.926868 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:32:37.942936 ignition[983]: INFO : Ignition 2.14.0
Dec 13 14:32:37.942936 ignition[983]: INFO : Stage: files
Dec 13 14:32:37.947485 ignition[983]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:37.947485 ignition[983]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:32:37.947485 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:32:37.960571 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:32:37.963828 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:32:37.963828 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:32:37.973071 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:32:37.976583 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:32:37.979897 unknown[983]: wrote ssh authorized keys file for user: core
Dec 13 14:32:37.982334 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:32:37.988948 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:32:37.993482 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:32:37.997701 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:32:38.002042 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:32:38.006268 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:38.012394 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:38.018585 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 14:32:38.023223 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:32:38.037379 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3719452047"
Dec 13 14:32:38.037379 ignition[983]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3719452047": device or resource busy
Dec 13 14:32:38.037379 ignition[983]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3719452047", trying btrfs: device or resource busy
Dec 13 14:32:38.037379 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3719452047"
Dec 13 14:32:38.058221 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (985)
Dec 13 14:32:38.058250 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3719452047"
Dec 13 14:32:38.063128 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem3719452047"
Dec 13 14:32:38.067316 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem3719452047"
Dec 13 14:32:38.071537 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 14:32:38.075905 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:32:38.080485 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:32:38.094094 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1573227815"
Dec 13 14:32:38.098941 ignition[983]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1573227815": device or resource busy
Dec 13 14:32:38.098941 ignition[983]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1573227815", trying btrfs: device or resource busy
Dec 13 14:32:38.098941 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1573227815"
Dec 13 14:32:38.114610 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1573227815"
Dec 13 14:32:38.119270 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1573227815"
Dec 13 14:32:38.123507 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1573227815"
Dec 13 14:32:38.127404 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:32:38.132001 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:38.137269 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 14:32:38.238010 systemd-networkd[830]: eth0: Gained IPv6LL
Dec 13 14:32:38.706436 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK
Dec 13 14:32:39.106260 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:39.106260 ignition[983]: INFO : files: op(f): [started] processing unit "waagent.service"
Dec 13 14:32:39.106260 ignition[983]: INFO : files: op(f): [finished] processing unit "waagent.service"
Dec 13 14:32:39.106260 ignition[983]: INFO : files: op(10): [started] processing unit "nvidia.service"
Dec 13 14:32:39.106260 ignition[983]: INFO : files: op(10): [finished] processing unit "nvidia.service"
Dec 13 14:32:39.106260 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service"
Dec 13 14:32:39.120214 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service"
Dec 13 14:32:39.120214 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service"
Dec 13 14:32:39.120214 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service"
Dec 13 14:32:39.120214 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:32:39.120214 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:32:39.120214 ignition[983]: INFO : files: files passed
Dec 13 14:32:39.120214 ignition[983]: INFO : Ignition finished successfully
Dec 13 14:32:39.115768 systemd[1]: Finished ignition-files.service.
Dec 13 14:32:39.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.153457 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:32:39.171566 kernel: audit: type=1130 audit(1734100359.151:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.167059 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:32:39.181665 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:32:39.194966 kernel: audit: type=1130 audit(1734100359.181:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.169883 systemd[1]: Starting ignition-quench.service...
Dec 13 14:32:39.174204 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:32:39.182015 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:32:39.182130 systemd[1]: Finished ignition-quench.service.
Dec 13 14:32:39.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.207984 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:32:39.234506 kernel: audit: type=1130 audit(1734100359.207:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.234545 kernel: audit: type=1131 audit(1734100359.207:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.232284 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:32:39.251963 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:32:39.252092 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:32:39.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.258683 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:32:39.262641 systemd[1]: Reached target initrd.target.
Dec 13 14:32:39.264476 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:32:39.265505 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:32:39.279030 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:32:39.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.284244 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:32:39.295337 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:32:39.299342 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:32:39.304011 systemd[1]: Stopped target timers.target.
Dec 13 14:32:39.307819 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:32:39.310181 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:32:39.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.314191 systemd[1]: Stopped target initrd.target.
Dec 13 14:32:39.317749 systemd[1]: Stopped target basic.target.
Dec 13 14:32:39.321452 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:32:39.325800 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:32:39.330060 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:32:39.334470 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:32:39.338729 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:32:39.342764 systemd[1]: Stopped target sysinit.target.
Dec 13 14:32:39.346570 systemd[1]: Stopped target local-fs.target.
Dec 13 14:32:39.350440 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:32:39.354486 systemd[1]: Stopped target swap.target.
Dec 13 14:32:39.357965 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:32:39.360430 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:32:39.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.364445 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:32:39.368383 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:32:39.370784 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:32:39.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.374777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:32:39.377551 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:32:39.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.382439 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:32:39.384902 systemd[1]: Stopped ignition-files.service.
Dec 13 14:32:39.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.388796 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 14:32:39.391510 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 14:32:39.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.397215 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:32:39.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.400967 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:32:39.403016 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:32:39.415305 ignition[1021]: INFO : Ignition 2.14.0
Dec 13 14:32:39.415305 ignition[1021]: INFO : Stage: umount
Dec 13 14:32:39.415305 ignition[1021]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:39.415305 ignition[1021]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:32:39.403298 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:32:39.429852 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:32:39.429852 ignition[1021]: INFO : umount: umount passed
Dec 13 14:32:39.429852 ignition[1021]: INFO : Ignition finished successfully
Dec 13 14:32:39.405945 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:32:39.406107 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:32:39.435437 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:32:39.435580 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:32:39.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.450429 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:32:39.452802 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:32:39.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.456176 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:32:39.458205 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:32:39.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.461521 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:32:39.463498 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:32:39.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.465671 systemd[1]: Stopped target network.target.
Dec 13 14:32:39.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.469399 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:32:39.469467 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:32:39.471764 systemd[1]: Stopped target paths.target.
Dec 13 14:32:39.473754 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:32:39.478978 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:32:39.489581 systemd[1]: Stopped target slices.target.
Dec 13 14:32:39.493278 systemd[1]: Stopped target sockets.target.
Dec 13 14:32:39.497071 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:32:39.497127 systemd[1]: Closed iscsid.socket.
Dec 13 14:32:39.502326 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:32:39.502370 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:32:39.507895 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:32:39.510222 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:32:39.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.514257 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:32:39.516448 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:32:39.518606 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:32:39.519750 systemd-networkd[830]: eth0: DHCPv6 lease lost
Dec 13 14:32:39.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.523112 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:32:39.523222 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:32:39.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.527047 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:32:39.527136 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:32:39.539000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:32:39.539000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:32:39.533753 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:32:39.533866 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:32:39.541383 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:32:39.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.541427 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:32:39.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.545079 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:32:39.549337 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:32:39.551147 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:32:39.555913 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:32:39.555990 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:32:39.559874 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:32:39.559937 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:32:39.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.576725 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:32:39.581106 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:32:39.595438 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:32:39.595621 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:32:39.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.601894 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:32:39.601965 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:32:39.606745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:32:39.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.606798 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:32:39.610839 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:32:39.610908 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:32:39.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.615415 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:32:39.615477 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:32:39.623596 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:32:39.637055 kernel: hv_netvsc 7c1e5235-f0d7-7c1e-5235-f0d77c1e5235 eth0: Data path switched from VF: enP41321s1
Dec 13 14:32:39.627751 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:32:39.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.642902 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:32:39.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.645251 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:32:39.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.645364 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:32:39.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.648220 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:32:39.648299 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:32:39.652826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:32:39.652903 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:32:39.656885 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:32:39.657476 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:32:39.657579 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:32:39.660522 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:32:39.660649 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:32:39.839996 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:32:39.840135 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:32:39.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.847897 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:32:39.852590 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:32:39.855187 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:32:39.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:39.860402 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:32:39.873009 systemd[1]: Switching root.
Dec 13 14:32:39.903501 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:32:39.903656 iscsid[837]: iscsid shutting down.
Dec 13 14:32:39.905948 systemd-journald[183]: Journal stopped
Dec 13 14:32:45.156524 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:32:45.156556 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:32:45.156568 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:32:45.156578 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:32:45.156586 kernel: SELinux: policy capability open_perms=1
Dec 13 14:32:45.156598 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:32:45.156607 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:32:45.156621 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:32:45.156629 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:32:45.156640 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:32:45.156650 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:32:45.156662 systemd[1]: Successfully loaded SELinux policy in 126.276ms.
Dec 13 14:32:45.156674 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.443ms.
Dec 13 14:32:45.156701 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:32:45.156719 systemd[1]: Detected virtualization microsoft.
Dec 13 14:32:45.156728 systemd[1]: Detected architecture x86-64.
Dec 13 14:32:45.156738 systemd[1]: Detected first boot.
Dec 13 14:32:45.156749 systemd[1]: Hostname set to .
Dec 13 14:32:45.156758 systemd[1]: Initializing machine ID from random generator.
Dec 13 14:32:45.156772 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:32:45.156785 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:32:45.156796 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:32:45.156807 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:32:45.156820 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:32:45.156830 kernel: kauditd_printk_skb: 55 callbacks suppressed
Dec 13 14:32:45.156840 kernel: audit: type=1334 audit(1734100364.608:90): prog-id=12 op=LOAD
Dec 13 14:32:45.156855 kernel: audit: type=1334 audit(1734100364.608:91): prog-id=3 op=UNLOAD
Dec 13 14:32:45.156864 kernel: audit: type=1334 audit(1734100364.613:92): prog-id=13 op=LOAD
Dec 13 14:32:45.156872 kernel: audit: type=1334 audit(1734100364.618:93): prog-id=14 op=LOAD
Dec 13 14:32:45.156883 kernel: audit: type=1334 audit(1734100364.618:94): prog-id=4 op=UNLOAD
Dec 13 14:32:45.156892 kernel: audit: type=1334 audit(1734100364.618:95): prog-id=5 op=UNLOAD
Dec 13 14:32:45.156903 kernel: audit: type=1334 audit(1734100364.623:96): prog-id=15 op=LOAD
Dec 13 14:32:45.156912 kernel: audit: type=1334 audit(1734100364.623:97): prog-id=12 op=UNLOAD
Dec 13 14:32:45.156922 kernel: audit: type=1334 audit(1734100364.628:98): prog-id=16 op=LOAD
Dec 13 14:32:45.156934 kernel: audit: type=1334 audit(1734100364.633:99): prog-id=17 op=LOAD
Dec 13 14:32:45.156944 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:32:45.156955 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:32:45.156964 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:32:45.156976 systemd[1]: Stopped iscsid.service.
Dec 13 14:32:45.156985 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:32:45.157001 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:32:45.157016 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:32:45.157027 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:32:45.157038 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:32:45.157049 systemd[1]: Created slice system-getty.slice.
Dec 13 14:32:45.157061 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:32:45.157074 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:32:45.157084 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:32:45.157096 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:32:45.157107 systemd[1]: Created slice user.slice.
Dec 13 14:32:45.157122 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:32:45.157131 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:32:45.157144 systemd[1]: Set up automount boot.automount.
Dec 13 14:32:45.157153 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:32:45.157165 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:32:45.157175 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:32:45.157187 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:32:45.157197 systemd[1]: Reached target integritysetup.target.
Dec 13 14:32:45.157212 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:32:45.157227 systemd[1]: Reached target remote-fs.target.
Dec 13 14:32:45.157238 systemd[1]: Reached target slices.target.
Dec 13 14:32:45.157251 systemd[1]: Reached target swap.target.
Dec 13 14:32:45.157267 systemd[1]: Reached target torcx.target.
Dec 13 14:32:45.157281 systemd[1]: Reached target veritysetup.target.
Dec 13 14:32:45.157299 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:32:45.157313 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:32:45.157332 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:32:45.157346 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:32:45.157360 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:32:45.157374 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:32:45.157393 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:32:45.157405 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:32:45.157422 systemd[1]: Mounting media.mount...
Dec 13 14:32:45.157432 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:45.157441 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:32:45.157451 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:32:45.157461 systemd[1]: Mounting tmp.mount...
Dec 13 14:32:45.157470 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:32:45.157480 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:45.158473 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:32:45.158506 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:32:45.158517 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:45.158530 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:32:45.158545 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:45.158555 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:32:45.158568 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:45.158582 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:32:45.158592 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:32:45.158606 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:32:45.158621 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:32:45.158632 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:32:45.158645 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:32:45.158656 systemd[1]: Starting systemd-journald.service...
Dec 13 14:32:45.158668 kernel: loop: module loaded
Dec 13 14:32:45.158679 kernel: fuse: init (API version 7.34)
Dec 13 14:32:45.158732 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:32:45.158745 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:32:45.158756 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:32:45.158771 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:32:45.158783 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:32:45.158794 systemd[1]: Stopped verity-setup.service.
Dec 13 14:32:45.158807 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:45.158820 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:32:45.158830 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:32:45.158843 systemd[1]: Mounted media.mount.
Dec 13 14:32:45.158856 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:32:45.158866 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:32:45.158885 systemd[1]: Mounted tmp.mount.
Dec 13 14:32:45.158898 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:32:45.158908 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:32:45.158920 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:32:45.158939 systemd-journald[1148]: Journal started
Dec 13 14:32:45.159005 systemd-journald[1148]: Runtime Journal (/run/log/journal/19da73936a0146f29166eaf15b1ccb56) is 8.0M, max 159.0M, 151.0M free.
Dec 13 14:32:40.533000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:32:40.770000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:32:40.776000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:32:40.776000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:32:40.776000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:32:40.776000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:32:40.776000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:32:40.776000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:32:41.144000 audit[1055]: AVC avc: denied { associate } for pid=1055 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:32:41.144000 audit[1055]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1038 pid=1055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:41.144000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:32:41.154000 audit[1055]: AVC avc: denied { associate } for pid=1055 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:32:41.154000 audit[1055]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1038 pid=1055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:41.154000 audit: CWD cwd="/"
Dec 13 14:32:41.154000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:41.154000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:41.154000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:32:44.608000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:32:44.608000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:32:44.613000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:32:44.618000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:32:44.618000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:32:44.618000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:32:44.623000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:32:44.623000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:32:44.628000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:32:44.633000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:32:44.633000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:32:44.633000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:32:44.642000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:32:44.642000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:32:44.647000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:32:44.652000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:32:44.652000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:32:44.652000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:32:44.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:44.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:44.675000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:32:44.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:44.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:44.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.036000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:32:45.037000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:32:45.037000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:32:45.037000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:32:45.037000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:32:45.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.147000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:32:45.147000 audit[1148]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc03f683f0 a2=4000 a3=7ffc03f6848c items=0 ppid=1 pid=1148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:45.147000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:32:45.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:41.136174 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:32:44.607788 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:32:41.136500 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:32:44.654132 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:32:41.136527 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:32:41.136571 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:32:41.136581 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:32:41.136630 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:32:41.136645 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:32:41.136915 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:32:41.136970 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:32:41.136987 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:32:41.140965 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:32:41.141000 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:32:41.141020 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:32:41.141035 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:32:41.141053 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:32:41.141067 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:32:44.058431 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:32:45.172801 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:32:45.172849 systemd[1]: Started systemd-journald.service.
Dec 13 14:32:45.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:44.058748 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:32:44.058897 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:32:44.059104 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:32:44.059167 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:32:44.059231 /usr/lib/systemd/system-generators/torcx-generator[1055]: time="2024-12-13T14:32:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:32:45.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.176149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:45.176399 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:45.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.179494 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:32:45.179895 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:32:45.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.182773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:45.182985 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:45.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.186076 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:32:45.186312 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:32:45.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.188898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:45.189122 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:45.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.191853 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:32:45.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.195827 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:32:45.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.198786 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:32:45.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.201869 systemd[1]: Reached target network-pre.target.
Dec 13 14:32:45.206166 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:32:45.214185 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:32:45.220794 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:32:45.230319 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:32:45.234644 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:32:45.237044 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:32:45.238558 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:32:45.241161 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:32:45.243055 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:32:45.248671 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:32:45.254459 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:32:45.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.256955 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:32:45.259345 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:32:45.262962 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:32:45.296222 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:32:45.311590 systemd-journald[1148]: Time spent on flushing to /var/log/journal/19da73936a0146f29166eaf15b1ccb56 is 29.603ms for 1137 entries.
Dec 13 14:32:45.311590 systemd-journald[1148]: System Journal (/var/log/journal/19da73936a0146f29166eaf15b1ccb56) is 8.0M, max 2.6G, 2.6G free.
Dec 13 14:32:45.385349 systemd-journald[1148]: Received client request to flush runtime journal.
Dec 13 14:32:45.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.322882 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:32:45.325970 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:32:45.334073 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:32:45.387728 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:32:45.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:45.484958 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:32:45.489463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:32:45.629516 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:32:45.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.108499 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:32:46.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.111000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:32:46.111000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:32:46.111000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:32:46.111000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:32:46.113576 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:32:46.134355 systemd-udevd[1183]: Using default interface naming scheme 'v252'.
Dec 13 14:32:46.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.239000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:32:46.235930 systemd[1]: Started systemd-udevd.service.
Dec 13 14:32:46.241292 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:32:46.285973 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:32:46.301000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:32:46.301000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:32:46.301000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:32:46.303795 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:32:46.358718 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:32:46.361786 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:32:46.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.407741 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 14:32:46.397000 audit[1197]: AVC avc: denied { confidentiality } for pid=1197 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:32:46.414725 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 14:32:46.422467 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 14:32:46.422630 kernel: hv_vmbus: registering driver hv_utils
Dec 13 14:32:46.440558 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 14:32:46.440750 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 14:32:46.448292 kernel: Console: switching to colour dummy device 80x25
Dec 13 14:32:46.471144 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 14:32:46.471350 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 14:32:46.472086 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 14:32:46.650521 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 14:32:46.650640 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 14:32:46.397000 audit[1197]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558b061adde0 a1=f884 a2=7f2c90baabc5 a3=5 items=12 ppid=1183 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:46.397000 audit: CWD cwd="/"
Dec 13 14:32:46.397000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=1 name=(null) inode=14911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=2 name=(null) inode=14911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=3 name=(null) inode=14912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=4 name=(null) inode=14911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=5 name=(null) inode=14913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=6 name=(null) inode=14911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=7 name=(null) inode=14914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=8 name=(null) inode=14911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=9 name=(null) inode=14915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=10 name=(null) inode=14911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PATH item=11 name=(null) inode=14916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:46.397000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:32:46.723351 systemd-networkd[1195]: lo: Link UP
Dec 13 14:32:46.723811 systemd-networkd[1195]: lo: Gained carrier
Dec 13 14:32:46.724675 systemd-networkd[1195]: Enumeration completed
Dec 13 14:32:46.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:46.726595 systemd[1]: Started systemd-networkd.service.
Dec 13 14:32:46.731876 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:32:46.955689 systemd-networkd[1195]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:32:46.987941 kernel: mlx5_core a169:00:02.0 enP41321s1: Link up Dec 13 14:32:47.006990 kernel: hv_netvsc 7c1e5235-f0d7-7c1e-5235-f0d77c1e5235 eth0: Data path switched to VF: enP41321s1 Dec 13 14:32:47.007734 systemd-networkd[1195]: enP41321s1: Link UP Dec 13 14:32:47.007921 systemd-networkd[1195]: eth0: Link UP Dec 13 14:32:47.007932 systemd-networkd[1195]: eth0: Gained carrier Dec 13 14:32:47.014235 systemd-networkd[1195]: enP41321s1: Gained carrier Dec 13 14:32:47.030197 systemd-networkd[1195]: eth0: DHCPv4 address 10.200.8.26/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:32:47.046953 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1186) Dec 13 14:32:47.102135 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Dec 13 14:32:47.121749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:32:47.407407 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:32:47.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:47.411973 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:32:48.158792 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:32:48.189623 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:32:48.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:48.192653 systemd[1]: Reached target cryptsetup.target. Dec 13 14:32:48.196420 systemd[1]: Starting lvm2-activation.service... Dec 13 14:32:48.203785 lvm[1262]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 14:32:48.225599 systemd[1]: Finished lvm2-activation.service. Dec 13 14:32:48.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:48.228201 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:32:48.230365 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:32:48.230401 systemd[1]: Reached target local-fs.target. Dec 13 14:32:48.232560 systemd[1]: Reached target machines.target. Dec 13 14:32:48.236220 systemd[1]: Starting ldconfig.service... Dec 13 14:32:48.238551 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:32:48.238674 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:32:48.240167 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:32:48.243708 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:32:48.247913 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:32:48.252223 systemd[1]: Starting systemd-sysext.service... Dec 13 14:32:48.459247 systemd-networkd[1195]: eth0: Gained IPv6LL Dec 13 14:32:48.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:48.465514 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:32:48.482462 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1264 (bootctl) Dec 13 14:32:48.484719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Dec 13 14:32:48.676267 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:32:49.015171 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:32:49.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:49.167598 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:32:49.167856 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:32:49.330973 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 14:32:51.985934 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:32:52.018971 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 14:32:52.024368 (sd-sysext)[1278]: Using extensions 'kubernetes'. Dec 13 14:32:52.024916 (sd-sysext)[1278]: Merged extensions into '/usr'. Dec 13 14:32:52.043713 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:32:52.045884 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:32:52.047915 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.052613 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:32:52.055515 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:32:52.060047 systemd[1]: Starting modprobe@loop.service... Dec 13 14:32:52.062058 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.062672 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:32:52.062990 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 14:32:52.066683 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:32:52.087716 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 13 14:32:52.087864 kernel: audit: type=1130 audit(1734100372.068:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.068377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:32:52.068527 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:32:52.070132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:32:52.070261 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:32:52.070922 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:32:52.071049 systemd[1]: Finished modprobe@loop.service. Dec 13 14:32:52.072896 systemd[1]: Finished systemd-sysext.service. Dec 13 14:32:52.075713 systemd[1]: Starting ensure-sysext.service... Dec 13 14:32:52.076178 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:32:52.076289 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.084125 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:32:52.097126 kernel: audit: type=1131 audit(1734100372.068:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:52.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.097894 systemd[1]: Reloading. Dec 13 14:32:52.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.124014 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:32:52.130961 kernel: audit: type=1130 audit(1734100372.069:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.131106 kernel: audit: type=1131 audit(1734100372.069:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.131135 kernel: audit: type=1130 audit(1734100372.070:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:52.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.157974 kernel: audit: type=1131 audit(1734100372.070:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.170956 kernel: audit: type=1130 audit(1734100372.072:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.216322 /usr/lib/systemd/system-generators/torcx-generator[1304]: time="2024-12-13T14:32:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:32:52.216457 /usr/lib/systemd/system-generators/torcx-generator[1304]: time="2024-12-13T14:32:52Z" level=info msg="torcx already run" Dec 13 14:32:52.218935 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:32:52.298111 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:32:52.298141 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 13 14:32:52.315436 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:32:52.363884 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:32:52.396066 kernel: audit: type=1334 audit(1734100372.385:174): prog-id=30 op=LOAD Dec 13 14:32:52.396263 kernel: audit: type=1334 audit(1734100372.385:175): prog-id=26 op=UNLOAD Dec 13 14:32:52.385000 audit: BPF prog-id=30 op=LOAD Dec 13 14:32:52.385000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:32:52.401038 kernel: audit: type=1334 audit(1734100372.390:176): prog-id=31 op=LOAD Dec 13 14:32:52.390000 audit: BPF prog-id=31 op=LOAD Dec 13 14:32:52.390000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:32:52.395000 audit: BPF prog-id=32 op=LOAD Dec 13 14:32:52.395000 audit: BPF prog-id=33 op=LOAD Dec 13 14:32:52.395000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:32:52.395000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:32:52.400000 audit: BPF prog-id=34 op=LOAD Dec 13 14:32:52.400000 audit: BPF prog-id=35 op=LOAD Dec 13 14:32:52.400000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:32:52.400000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:32:52.401000 audit: BPF prog-id=36 op=LOAD Dec 13 14:32:52.401000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:32:52.401000 audit: BPF prog-id=37 op=LOAD Dec 13 14:32:52.401000 audit: BPF prog-id=38 op=LOAD Dec 13 14:32:52.401000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:32:52.401000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:32:52.421166 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:32:52.421509 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.423459 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 14:32:52.427180 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:32:52.430970 systemd[1]: Starting modprobe@loop.service... Dec 13 14:32:52.433102 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.433341 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:32:52.433504 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:32:52.434417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:32:52.434613 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:32:52.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.437594 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:32:52.437775 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:32:52.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.440710 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 14:32:52.440876 systemd[1]: Finished modprobe@loop.service. Dec 13 14:32:52.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.445293 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:32:52.445557 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.447541 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:32:52.451652 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:32:52.455569 systemd[1]: Starting modprobe@loop.service... Dec 13 14:32:52.457620 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.457860 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:32:52.458104 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:32:52.459247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:32:52.459431 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:32:52.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:32:52.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.462523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:32:52.462688 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:32:52.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.465518 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:32:52.465677 systemd[1]: Finished modprobe@loop.service. Dec 13 14:32:52.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.471159 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:32:52.471508 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.473363 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:32:52.477470 systemd[1]: Starting modprobe@drm.service... Dec 13 14:32:52.481151 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 14:32:52.485162 systemd[1]: Starting modprobe@loop.service... Dec 13 14:32:52.487324 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.487542 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:32:52.487733 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:32:52.488734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:32:52.489073 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:32:52.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.491957 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:32:52.492134 systemd[1]: Finished modprobe@drm.service. Dec 13 14:32:52.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.494754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:32:52.495120 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:32:52.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.498078 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:32:52.498237 systemd[1]: Finished modprobe@loop.service. Dec 13 14:32:52.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:52.501208 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:32:52.501350 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:32:52.502864 systemd[1]: Finished ensure-sysext.service. Dec 13 14:32:52.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:53.722221 systemd-fsck[1272]: fsck.fat 4.2 (2021-01-31) Dec 13 14:32:53.722221 systemd-fsck[1272]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:32:53.725679 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Dec 13 14:32:53.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:53.732001 systemd[1]: Mounting boot.mount... Dec 13 14:32:54.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:54.066846 systemd[1]: Mounted boot.mount. Dec 13 14:32:54.081570 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:32:56.620494 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:32:56.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:56.625829 systemd[1]: Starting audit-rules.service... Dec 13 14:32:56.630081 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:32:56.634089 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:32:56.636000 audit: BPF prog-id=39 op=LOAD Dec 13 14:32:56.639288 systemd[1]: Starting systemd-resolved.service... Dec 13 14:32:56.642000 audit: BPF prog-id=40 op=LOAD Dec 13 14:32:56.644588 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:32:56.648573 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:32:56.668000 audit[1387]: SYSTEM_BOOT pid=1387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:32:56.672729 systemd[1]: Finished systemd-update-utmp.service. 
Dec 13 14:32:56.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:56.978842 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:32:56.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:56.981587 systemd[1]: Reached target time-set.target. Dec 13 14:32:57.307150 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 13 14:32:57.307367 kernel: audit: type=1130 audit(1734100377.265:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:57.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:57.263087 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:32:57.266167 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:32:57.309394 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:32:57.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:57.316782 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 14:32:57.322172 systemd-resolved[1385]: Positive Trust Anchors: Dec 13 14:32:57.322625 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:32:57.322723 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:32:57.327939 kernel: audit: type=1130 audit(1734100377.311:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:57.328996 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:32:57.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:57.345174 kernel: audit: type=1130 audit(1734100377.330:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:57.522006 systemd-timesyncd[1386]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Dec 13 14:32:57.522210 systemd-timesyncd[1386]: Initial clock synchronization to Fri 2024-12-13 14:32:57.522236 UTC. Dec 13 14:32:57.866508 systemd-resolved[1385]: Using system hostname 'ci-3510.3.6-a-dd62f2eb18'. Dec 13 14:32:57.870042 systemd[1]: Started systemd-resolved.service. 
Dec 13 14:32:57.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:57.872978 systemd[1]: Reached target network.target. Dec 13 14:32:57.886954 kernel: audit: type=1130 audit(1734100377.871:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:32:57.888510 systemd[1]: Reached target network-online.target. Dec 13 14:32:57.890825 systemd[1]: Reached target nss-lookup.target. Dec 13 14:32:58.167000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:32:58.168467 augenrules[1403]: No rules Dec 13 14:32:58.174829 systemd[1]: Finished audit-rules.service. Dec 13 14:32:58.167000 audit[1403]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffaad2f340 a2=420 a3=0 items=0 ppid=1382 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:58.194541 kernel: audit: type=1305 audit(1734100378.167:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:32:58.194694 kernel: audit: type=1300 audit(1734100378.167:225): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffaad2f340 a2=420 a3=0 items=0 ppid=1382 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:32:58.167000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 
14:32:58.204784 kernel: audit: type=1327 audit(1734100378.167:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:33:04.855082 ldconfig[1263]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:33:04.865620 systemd[1]: Finished ldconfig.service. Dec 13 14:33:04.870122 systemd[1]: Starting systemd-update-done.service... Dec 13 14:33:04.883131 systemd[1]: Finished systemd-update-done.service. Dec 13 14:33:04.885691 systemd[1]: Reached target sysinit.target. Dec 13 14:33:04.887887 systemd[1]: Started motdgen.path. Dec 13 14:33:04.889876 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:33:04.892962 systemd[1]: Started logrotate.timer. Dec 13 14:33:04.894727 systemd[1]: Started mdadm.timer. Dec 13 14:33:04.896320 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:33:04.898441 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:33:04.898583 systemd[1]: Reached target paths.target. Dec 13 14:33:04.900659 systemd[1]: Reached target timers.target. Dec 13 14:33:04.903201 systemd[1]: Listening on dbus.socket. Dec 13 14:33:04.906423 systemd[1]: Starting docker.socket... Dec 13 14:33:04.911166 systemd[1]: Listening on sshd.socket. Dec 13 14:33:04.913507 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:04.914100 systemd[1]: Listening on docker.socket. Dec 13 14:33:04.916139 systemd[1]: Reached target sockets.target. Dec 13 14:33:04.918007 systemd[1]: Reached target basic.target. Dec 13 14:33:04.920044 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Dec 13 14:33:04.920081 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:33:04.921489 systemd[1]: Starting containerd.service...
Dec 13 14:33:04.924940 systemd[1]: Starting dbus.service...
Dec 13 14:33:04.928045 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:33:04.931830 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:33:04.934216 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:33:04.961894 systemd[1]: Starting kubelet.service...
Dec 13 14:33:04.968104 systemd[1]: Starting motdgen.service...
Dec 13 14:33:04.972512 systemd[1]: Started nvidia.service.
Dec 13 14:33:04.976619 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:33:04.982838 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:33:05.009993 jq[1413]: false
Dec 13 14:33:04.990235 systemd[1]: Starting systemd-logind.service...
Dec 13 14:33:04.992475 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:33:04.992642 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:33:04.993305 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:33:05.022781 jq[1426]: true
Dec 13 14:33:04.994256 systemd[1]: Starting update-engine.service...
Dec 13 14:33:04.998465 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:33:05.010697 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:33:05.010948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:33:05.045130 jq[1429]: true
Dec 13 14:33:05.012642 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:33:05.012847 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found loop1
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found sda
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found sda1
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found sda2
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found sda3
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found usr
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found sda4
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found sda6
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found sda7
Dec 13 14:33:05.081326 extend-filesystems[1414]: Found sda9
Dec 13 14:33:05.081326 extend-filesystems[1414]: Checking size of /dev/sda9
Dec 13 14:33:05.096668 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:33:05.119619 dbus-daemon[1412]: [system] SELinux support is enabled
Dec 13 14:33:05.096927 systemd[1]: Finished motdgen.service.
Dec 13 14:33:05.119890 systemd[1]: Started dbus.service.
Dec 13 14:33:05.126170 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:33:05.126202 systemd[1]: Reached target system-config.target.
Dec 13 14:33:05.129357 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:33:05.129391 systemd[1]: Reached target user-config.target.
Dec 13 14:33:05.165798 extend-filesystems[1414]: Old size kept for /dev/sda9
Dec 13 14:33:05.165798 extend-filesystems[1414]: Found sr0
Dec 13 14:33:05.164288 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:33:05.164543 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:33:05.202244 bash[1462]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:33:05.204264 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:33:05.228050 env[1432]: time="2024-12-13T14:33:05.227964908Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:33:05.286833 env[1432]: time="2024-12-13T14:33:05.286765892Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:33:05.288411 env[1432]: time="2024-12-13T14:33:05.288363419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:33:05.290698 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 14:33:05.295160 env[1432]: time="2024-12-13T14:33:05.295102732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:33:05.295312 env[1432]: time="2024-12-13T14:33:05.295296135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:33:05.295774 env[1432]: time="2024-12-13T14:33:05.295745543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:33:05.295964 env[1432]: time="2024-12-13T14:33:05.295946646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:33:05.296987 env[1432]: time="2024-12-13T14:33:05.296953963Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:33:05.297109 env[1432]: time="2024-12-13T14:33:05.297093965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:33:05.297290 env[1432]: time="2024-12-13T14:33:05.297275068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:33:05.297719 env[1432]: time="2024-12-13T14:33:05.297695375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:33:05.298808 env[1432]: time="2024-12-13T14:33:05.298776893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:33:05.299360 env[1432]: time="2024-12-13T14:33:05.299337703Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:33:05.301800 env[1432]: time="2024-12-13T14:33:05.301771343Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:33:05.301937 env[1432]: time="2024-12-13T14:33:05.301887945Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:33:05.306108 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:33:05.306673 systemd-logind[1424]: New seat seat0.
Dec 13 14:33:05.309087 systemd[1]: Started systemd-logind.service.
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319195035Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319255936Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319293237Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319349838Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319373638Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319396338Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319419239Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319444639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319466240Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319488940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319509440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319531641Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319698544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:33:05.321761 env[1432]: time="2024-12-13T14:33:05.319810045Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320247453Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320293453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320316854Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320483657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320526257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320545958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320565758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320603359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320626959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320660760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320682260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.320711660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.321029666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.321070966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.322329 env[1432]: time="2024-12-13T14:33:05.321092367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.324434 env[1432]: time="2024-12-13T14:33:05.321112867Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:33:05.324434 env[1432]: time="2024-12-13T14:33:05.321148268Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:33:05.324434 env[1432]: time="2024-12-13T14:33:05.321168568Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:33:05.324434 env[1432]: time="2024-12-13T14:33:05.321197069Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:33:05.324434 env[1432]: time="2024-12-13T14:33:05.321256470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:33:05.324627 env[1432]: time="2024-12-13T14:33:05.321612076Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:33:05.324627 env[1432]: time="2024-12-13T14:33:05.321703077Z" level=info msg="Connect containerd service"
Dec 13 14:33:05.324627 env[1432]: time="2024-12-13T14:33:05.323136601Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.325135235Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.325252536Z" level=info msg="Start subscribing containerd event"
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.325752745Z" level=info msg="Start recovering state"
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.325849646Z" level=info msg="Start event monitor"
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.325877647Z" level=info msg="Start snapshots syncer"
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.325899647Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.325941548Z" level=info msg="Start streaming server"
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.326592259Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.326677160Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:33:05.364877 env[1432]: time="2024-12-13T14:33:05.344037051Z" level=info msg="containerd successfully booted in 0.120966s"
Dec 13 14:33:05.326889 systemd[1]: Started containerd.service.
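The containerd entries above are structured as `key=value` pairs (`time`, `level`, `msg`, sometimes `error` and `type`). A rough Python sketch for pulling those fields out of one such line; the regex and the sample line are illustrative, not a complete logfmt parser:

```python
import re

# Sample payload copied from the containerd entries above.
line = ('time="2024-12-13T14:33:05.344037051Z" level=info '
        'msg="containerd successfully booted in 0.120966s"')

# Match key=value pairs where the value may be a double-quoted string.
pairs = re.findall(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)', line)
fields = {key: value.strip('"') for key, value in pairs}
print(fields["level"], "-", fields["msg"])
```

Filtering on `fields["level"] == "error"` would surface entries like the CNI config failure above without scanning the `msg` text by hand.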
Dec 13 14:33:05.439341 update_engine[1425]: I1213 14:33:05.438655 1425 main.cc:92] Flatcar Update Engine starting
Dec 13 14:33:05.462447 systemd[1]: Started update-engine.service.
Dec 13 14:33:05.468193 systemd[1]: Started locksmithd.service.
Dec 13 14:33:05.472014 update_engine[1425]: I1213 14:33:05.471606 1425 update_check_scheduler.cc:74] Next update check in 3m27s
Dec 13 14:33:06.171709 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:33:06.568226 systemd[1]: Started kubelet.service.
Dec 13 14:33:07.327250 kubelet[1525]: E1213 14:33:07.327187 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:33:07.329920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:33:07.330099 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:33:07.330452 systemd[1]: kubelet.service: Consumed 1.176s CPU time.
Dec 13 14:33:07.670277 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:33:07.693941 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:33:07.699272 systemd[1]: Starting issuegen.service...
Dec 13 14:33:07.703471 systemd[1]: Started waagent.service.
Dec 13 14:33:07.711311 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:33:07.711536 systemd[1]: Finished issuegen.service.
Dec 13 14:33:07.715953 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:33:07.823932 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:33:07.829221 systemd[1]: Started getty@tty1.service.
Dec 13 14:33:07.834074 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:33:07.836812 systemd[1]: Reached target getty.target.
Dec 13 14:33:07.839050 systemd[1]: Reached target multi-user.target.
Dec 13 14:33:07.843403 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:33:07.855634 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:33:07.856072 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:33:07.862070 systemd[1]: Startup finished in 493ms (firmware) + 7.831s (loader) + 1.015s (kernel) + 7.557s (initrd) + 27.343s (userspace) = 44.241s.
Dec 13 14:33:09.024972 login[1544]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Dec 13 14:33:09.026967 login[1545]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 14:33:09.324563 systemd[1]: Created slice user-500.slice.
Dec 13 14:33:09.326835 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:33:09.329986 systemd-logind[1424]: New session 1 of user core.
Dec 13 14:33:09.377039 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:33:09.379686 systemd[1]: Starting user@500.service...
Dec 13 14:33:09.385051 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:10.025390 login[1544]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 14:33:10.031299 systemd-logind[1424]: New session 2 of user core.
Dec 13 14:33:10.060237 systemd[1548]: Queued start job for default target default.target.
Dec 13 14:33:10.061224 systemd[1548]: Reached target paths.target.
Dec 13 14:33:10.061256 systemd[1548]: Reached target sockets.target.
Dec 13 14:33:10.061274 systemd[1548]: Reached target timers.target.
Dec 13 14:33:10.061289 systemd[1548]: Reached target basic.target.
Dec 13 14:33:10.061355 systemd[1548]: Reached target default.target.
Dec 13 14:33:10.061399 systemd[1548]: Startup finished in 667ms.
Dec 13 14:33:10.061888 systemd[1]: Started user@500.service.
Dec 13 14:33:10.063462 systemd[1]: Started session-1.scope.
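The `Startup finished` line above breaks the total boot time into phases. A quick Python sketch that parses such a line and checks the phases roughly sum to the reported total (systemd rounds each component independently, so exact equality is not expected; the sample line is copied from the log above):

```python
import re

line = ("Startup finished in 493ms (firmware) + 7.831s (loader) + 1.015s (kernel) "
        "+ 7.557s (initrd) + 27.343s (userspace) = 44.241s.")

def to_seconds(value: str) -> float:
    # systemd prints components as e.g. "493ms" or "7.831s"
    return float(value[:-2]) / 1000 if value.endswith("ms") else float(value[:-1])

*phases, total = re.findall(r"([\d.]+m?s)", line)
phase_sum = sum(to_seconds(p) for p in phases)
assert abs(phase_sum - to_seconds(total)) < 0.01  # within rounding noise
```

Here the phases sum to 44.239s against a reported total of 44.241s, a 2ms rounding difference; userspace (27.343s) clearly dominates this boot.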
Dec 13 14:33:10.064368 systemd[1]: Started session-2.scope.
Dec 13 14:33:17.581277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:33:17.581603 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:17.581681 systemd[1]: kubelet.service: Consumed 1.176s CPU time.
Dec 13 14:33:17.584127 systemd[1]: Starting kubelet.service...
Dec 13 14:33:23.933792 systemd[1]: Started kubelet.service.
Dec 13 14:33:24.050812 kubelet[1574]: E1213 14:33:24.050749 1574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:33:24.055047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:33:24.055225 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:33:24.506661 waagent[1539]: 2024-12-13T14:33:24.506516Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Dec 13 14:33:24.528162 waagent[1539]: 2024-12-13T14:33:24.528021Z INFO Daemon Daemon OS: flatcar 3510.3.6
Dec 13 14:33:24.531233 waagent[1539]: 2024-12-13T14:33:24.531129Z INFO Daemon Daemon Python: 3.9.16
Dec 13 14:33:24.534305 waagent[1539]: 2024-12-13T14:33:24.534200Z INFO Daemon Daemon Run daemon
Dec 13 14:33:24.537530 waagent[1539]: 2024-12-13T14:33:24.536873Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6'
Dec 13 14:33:24.552141 waagent[1539]: 2024-12-13T14:33:24.551988Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
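The recurring kubelet failures above use the klog header format (`E1213 14:33:24.050749 1574 run.go:72] ...`: severity letter, MMDD, wall-clock time, PID, file:line). A small Python sketch for splitting that header apart; the regex assumes well-formed klog lines and the sample message is abbreviated from the log above:

```python
import re

line = ('E1213 14:33:24.050749 1574 run.go:72] "command failed" '
        'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml"')

# klog header: Lmmdd hh:mm:ss.uuuuuu pid file:line] message
m = re.match(r"([IWEF])(\d{2})(\d{2}) (\S+) (\d+) ([\w.]+:\d+)\] (.*)", line)
severity, month, day, clock, pid, location, msg = m.groups()
print(severity, f"{month}/{day}", location)
```

Severity `E` at `run.go:72` is what systemd then reports as `status=1/FAILURE`; the root cause in both attempts is the missing `/var/lib/kubelet/config.yaml`.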
Dec 13 14:33:24.559836 waagent[1539]: 2024-12-13T14:33:24.559696Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.560363Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.561773Z INFO Daemon Daemon Using waagent for provisioning
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.563315Z INFO Daemon Daemon Activate resource disk
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.564611Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.572989Z INFO Daemon Daemon Found device: None
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.573925Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.575186Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.577045Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 14:33:24.589951 waagent[1539]: 2024-12-13T14:33:24.577840Z INFO Daemon Daemon Running default provisioning handler
Dec 13 14:33:24.592736 waagent[1539]: 2024-12-13T14:33:24.592567Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 14:33:24.602525 waagent[1539]: 2024-12-13T14:33:24.602359Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 14:33:24.607715 waagent[1539]: 2024-12-13T14:33:24.607599Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 14:33:24.612213 waagent[1539]: 2024-12-13T14:33:24.607974Z INFO Daemon Daemon Copying ovf-env.xml
Dec 13 14:33:24.675945 waagent[1539]: 2024-12-13T14:33:24.674605Z INFO Daemon Daemon Successfully mounted dvd
Dec 13 14:33:24.704542 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 13 14:33:24.714749 waagent[1539]: 2024-12-13T14:33:24.714585Z INFO Daemon Daemon Detect protocol endpoint
Dec 13 14:33:24.731194 waagent[1539]: 2024-12-13T14:33:24.715270Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 14:33:24.731194 waagent[1539]: 2024-12-13T14:33:24.716684Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 14:33:24.731194 waagent[1539]: 2024-12-13T14:33:24.717736Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 13 14:33:24.731194 waagent[1539]: 2024-12-13T14:33:24.719007Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 13 14:33:24.731194 waagent[1539]: 2024-12-13T14:33:24.719785Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 13 14:33:24.756643 waagent[1539]: 2024-12-13T14:33:24.756554Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 13 14:33:24.760788 waagent[1539]: 2024-12-13T14:33:24.760660Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 13 14:33:24.763680 waagent[1539]: 2024-12-13T14:33:24.763611Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 13 14:33:25.021138 waagent[1539]: 2024-12-13T14:33:25.020861Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 13 14:33:25.035034 waagent[1539]: 2024-12-13T14:33:25.034901Z INFO Daemon Daemon Forcing an update of the goal state..
Dec 13 14:33:25.038620 waagent[1539]: 2024-12-13T14:33:25.038528Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Dec 13 14:33:25.143409 waagent[1539]: 2024-12-13T14:33:25.143244Z INFO Daemon Daemon Found private key matching thumbprint 2BD80AD54A4D91F912FC17A97E677DAF735267FD
Dec 13 14:33:25.155096 waagent[1539]: 2024-12-13T14:33:25.143839Z INFO Daemon Daemon Certificate with thumbprint CC6DFE9CB805B3C8941A2D8D3055313A9F4F9158 has no matching private key.
Dec 13 14:33:25.155096 waagent[1539]: 2024-12-13T14:33:25.145050Z INFO Daemon Daemon Fetch goal state completed
Dec 13 14:33:25.169901 waagent[1539]: 2024-12-13T14:33:25.169803Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 24dae8c3-8725-4fdd-9a31-cad4e7e1b7af New eTag: 3710730107091992684]
Dec 13 14:33:25.177901 waagent[1539]: 2024-12-13T14:33:25.170890Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 14:33:25.184435 waagent[1539]: 2024-12-13T14:33:25.184321Z INFO Daemon Daemon Starting provisioning
Dec 13 14:33:25.191311 waagent[1539]: 2024-12-13T14:33:25.184772Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 13 14:33:25.191311 waagent[1539]: 2024-12-13T14:33:25.185761Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-dd62f2eb18]
Dec 13 14:33:25.195160 waagent[1539]: 2024-12-13T14:33:25.195002Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-dd62f2eb18]
Dec 13 14:33:25.203082 waagent[1539]: 2024-12-13T14:33:25.195864Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 13 14:33:25.203082 waagent[1539]: 2024-12-13T14:33:25.197169Z INFO Daemon Daemon Primary interface is [eth0]
Dec 13 14:33:25.213167 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Dec 13 14:33:25.213448 systemd[1]: Stopped systemd-networkd-wait-online.service.
Dec 13 14:33:25.213542 systemd[1]: Stopping systemd-networkd-wait-online.service...
Dec 13 14:33:25.213951 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:33:25.217986 systemd-networkd[1195]: eth0: DHCPv6 lease lost
Dec 13 14:33:25.220098 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:33:25.220289 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:33:25.223554 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:33:25.258809 systemd-networkd[1600]: enP41321s1: Link UP
Dec 13 14:33:25.258824 systemd-networkd[1600]: enP41321s1: Gained carrier
Dec 13 14:33:25.260344 systemd-networkd[1600]: eth0: Link UP
Dec 13 14:33:25.260355 systemd-networkd[1600]: eth0: Gained carrier
Dec 13 14:33:25.260845 systemd-networkd[1600]: lo: Link UP
Dec 13 14:33:25.260855 systemd-networkd[1600]: lo: Gained carrier
Dec 13 14:33:25.261241 systemd-networkd[1600]: eth0: Gained IPv6LL
Dec 13 14:33:25.261582 systemd-networkd[1600]: Enumeration completed
Dec 13 14:33:25.261766 systemd[1]: Started systemd-networkd.service.
Dec 13 14:33:25.264803 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:33:25.270448 waagent[1539]: 2024-12-13T14:33:25.267079Z INFO Daemon Daemon Create user account if not exists
Dec 13 14:33:25.271477 waagent[1539]: 2024-12-13T14:33:25.271304Z INFO Daemon Daemon User core already exists, skip useradd
Dec 13 14:33:25.274162 systemd-networkd[1600]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:33:25.275815 waagent[1539]: 2024-12-13T14:33:25.275689Z INFO Daemon Daemon Configure sudoer
Dec 13 14:33:25.280933 waagent[1539]: 2024-12-13T14:33:25.276695Z INFO Daemon Daemon Configure sshd
Dec 13 14:33:25.280933 waagent[1539]: 2024-12-13T14:33:25.277309Z INFO Daemon Daemon Deploy ssh public key.
Dec 13 14:33:25.311122 systemd-networkd[1600]: eth0: DHCPv4 address 10.200.8.26/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 14:33:25.315769 systemd[1]: Finished systemd-networkd-wait-online.service.
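The DHCPv4 entry above reports address 10.200.8.26/24 with gateway 10.200.8.1, served by the Azure wire server at 168.63.129.16. A short Python sketch using the stdlib `ipaddress` module to confirm the gateway sits inside the leased subnet (values copied from the systemd-networkd entry above):

```python
import ipaddress

# Values copied from the systemd-networkd DHCPv4 entry.
iface = ipaddress.ip_interface("10.200.8.26/24")
gateway = ipaddress.ip_address("10.200.8.1")

# The gateway must be on-link for the default route to be usable.
print(gateway in iface.network)  # True
print(iface.network)             # 10.200.8.0/24
```

This kind of check is handy when a VM's default route silently fails after a lease change: an off-subnet gateway would make `gateway in iface.network` come back False.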
Dec 13 14:33:26.403527 waagent[1539]: 2024-12-13T14:33:26.403405Z INFO Daemon Daemon Provisioning complete
Dec 13 14:33:26.417403 waagent[1539]: 2024-12-13T14:33:26.417301Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 13 14:33:26.425022 waagent[1539]: 2024-12-13T14:33:26.418031Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 13 14:33:26.425022 waagent[1539]: 2024-12-13T14:33:26.420251Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Dec 13 14:33:26.720657 waagent[1609]: 2024-12-13T14:33:26.720425Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Dec 13 14:33:26.721473 waagent[1609]: 2024-12-13T14:33:26.721397Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:33:26.721631 waagent[1609]: 2024-12-13T14:33:26.721577Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:33:26.734052 waagent[1609]: 2024-12-13T14:33:26.733962Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Dec 13 14:33:26.734248 waagent[1609]: 2024-12-13T14:33:26.734199Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Dec 13 14:33:26.803608 waagent[1609]: 2024-12-13T14:33:26.803438Z INFO ExtHandler ExtHandler Found private key matching thumbprint 2BD80AD54A4D91F912FC17A97E677DAF735267FD
Dec 13 14:33:26.803882 waagent[1609]: 2024-12-13T14:33:26.803807Z INFO ExtHandler ExtHandler Certificate with thumbprint CC6DFE9CB805B3C8941A2D8D3055313A9F4F9158 has no matching private key.
Dec 13 14:33:26.804172 waagent[1609]: 2024-12-13T14:33:26.804117Z INFO ExtHandler ExtHandler Fetch goal state completed
Dec 13 14:33:26.818548 waagent[1609]: 2024-12-13T14:33:26.818469Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: bd269d02-15ae-4f37-9f8d-2975fe82063b New eTag: 3710730107091992684]
Dec 13 14:33:26.819220 waagent[1609]: 2024-12-13T14:33:26.819154Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 14:33:26.855390 waagent[1609]: 2024-12-13T14:33:26.855208Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 14:33:26.868983 waagent[1609]: 2024-12-13T14:33:26.868841Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1609
Dec 13 14:33:26.873035 waagent[1609]: 2024-12-13T14:33:26.872930Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 14:33:26.874459 waagent[1609]: 2024-12-13T14:33:26.874382Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 14:33:26.949671 waagent[1609]: 2024-12-13T14:33:26.949588Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 14:33:26.950195 waagent[1609]: 2024-12-13T14:33:26.950112Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 14:33:26.960074 waagent[1609]: 2024-12-13T14:33:26.960011Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 14:33:26.960661 waagent[1609]: 2024-12-13T14:33:26.960590Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 14:33:26.961876 waagent[1609]: 2024-12-13T14:33:26.961805Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Dec 13 14:33:26.963312 waagent[1609]: 2024-12-13T14:33:26.963250Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 14:33:26.964082 waagent[1609]: 2024-12-13T14:33:26.964028Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 14:33:26.964291 waagent[1609]: 2024-12-13T14:33:26.964235Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:33:26.964706 waagent[1609]: 2024-12-13T14:33:26.964653Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:33:26.964861 waagent[1609]: 2024-12-13T14:33:26.964814Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:33:26.965318 waagent[1609]: 2024-12-13T14:33:26.965257Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 14:33:26.965487 waagent[1609]: 2024-12-13T14:33:26.965421Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:33:26.966064 waagent[1609]: 2024-12-13T14:33:26.966011Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 14:33:26.966149 waagent[1609]: 2024-12-13T14:33:26.966091Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 14:33:26.967196 waagent[1609]: 2024-12-13T14:33:26.967143Z INFO EnvHandler ExtHandler Configure routes
Dec 13 14:33:26.967602 waagent[1609]: 2024-12-13T14:33:26.967545Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 14:33:26.967602 waagent[1609]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 14:33:26.967602 waagent[1609]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 14:33:26.967602 waagent[1609]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 14:33:26.967602 waagent[1609]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:33:26.967602 waagent[1609]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:33:26.967602 waagent[1609]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:33:26.967937 waagent[1609]: 2024-12-13T14:33:26.967635Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 14:33:26.967937 waagent[1609]: 2024-12-13T14:33:26.967782Z INFO EnvHandler ExtHandler Routes:None
Dec 13 14:33:26.969004 waagent[1609]: 2024-12-13T14:33:26.968941Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 14:33:26.972505 waagent[1609]: 2024-12-13T14:33:26.972402Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 14:33:26.973340 waagent[1609]: 2024-12-13T14:33:26.973265Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 14:33:26.989391 waagent[1609]: 2024-12-13T14:33:26.989298Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Dec 13 14:33:26.990697 waagent[1609]: 2024-12-13T14:33:26.990624Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 14:33:26.991924 waagent[1609]: 2024-12-13T14:33:26.991847Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Dec 13 14:33:27.007063 waagent[1609]: 2024-12-13T14:33:27.006878Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1600'
Dec 13 14:33:27.020327 waagent[1609]: 2024-12-13T14:33:27.020173Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 14:33:27.020327 waagent[1609]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 14:33:27.020327 waagent[1609]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 14:33:27.020327 waagent[1609]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:f0:d7 brd ff:ff:ff:ff:ff:ff
Dec 13 14:33:27.020327 waagent[1609]: 3: enP41321s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:f0:d7 brd ff:ff:ff:ff:ff:ff\ altname enP41321p0s2
Dec 13 14:33:27.020327 waagent[1609]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 14:33:27.020327 waagent[1609]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 14:33:27.020327 waagent[1609]: 2: eth0 inet 10.200.8.26/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 14:33:27.020327 waagent[1609]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 14:33:27.020327 waagent[1609]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 14:33:27.020327 waagent[1609]: 2: eth0 inet6 fe80::7e1e:52ff:fe35:f0d7/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 14:33:27.045102 waagent[1609]: 2024-12-13T14:33:27.045003Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Dec 13 14:33:27.260928 waagent[1609]: 2024-12-13T14:33:27.260653Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules
Dec 13 14:33:27.264959 waagent[1609]: 2024-12-13T14:33:27.264805Z INFO EnvHandler ExtHandler Firewall rules:
Dec 13 14:33:27.264959 waagent[1609]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:33:27.264959 waagent[1609]: pkts bytes target prot opt in out source destination
Dec 13 14:33:27.264959 waagent[1609]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:33:27.264959 waagent[1609]: pkts bytes target prot opt in out source destination
Dec 13 14:33:27.264959 waagent[1609]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:33:27.264959 waagent[1609]: pkts bytes target prot opt in out source destination
Dec 13 14:33:27.264959 waagent[1609]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 14:33:27.264959 waagent[1609]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 14:33:27.266689 waagent[1609]: 2024-12-13T14:33:27.266624Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Dec 13 14:33:27.393062 waagent[1609]: 2024-12-13T14:33:27.392970Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Dec 13 14:33:28.426251 waagent[1539]: 2024-12-13T14:33:28.426036Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Dec 13 14:33:28.433079 waagent[1539]: 2024-12-13T14:33:28.432977Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent
Dec 13 14:33:29.565353 waagent[1648]: 2024-12-13T14:33:29.565213Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Dec 13 14:33:29.566234 waagent[1648]: 2024-12-13T14:33:29.566157Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6
Dec 13 14:33:29.566386 waagent[1648]: 2024-12-13T14:33:29.566333Z INFO ExtHandler ExtHandler Python: 3.9.16
Dec 13 14:33:29.566546 waagent[1648]: 2024-12-13T14:33:29.566497Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Dec 13 14:33:29.577397 waagent[1648]: 2024-12-13T14:33:29.577248Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 14:33:29.577899 waagent[1648]: 2024-12-13T14:33:29.577830Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:33:29.578099 waagent[1648]: 2024-12-13T14:33:29.578045Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:33:29.591897 waagent[1648]: 2024-12-13T14:33:29.591786Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 14:33:29.601550 waagent[1648]: 2024-12-13T14:33:29.601470Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Dec 13 14:33:29.602736 waagent[1648]: 2024-12-13T14:33:29.602665Z INFO ExtHandler
Dec 13 14:33:29.602917 waagent[1648]: 2024-12-13T14:33:29.602851Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2ed40608-d2d2-4b33-aa38-912e0a262423 eTag: 3710730107091992684 source: Fabric]
Dec 13 14:33:29.603660 waagent[1648]: 2024-12-13T14:33:29.603604Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 13 14:33:29.604805 waagent[1648]: 2024-12-13T14:33:29.604746Z INFO ExtHandler
Dec 13 14:33:29.604961 waagent[1648]: 2024-12-13T14:33:29.604891Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 13 14:33:29.612818 waagent[1648]: 2024-12-13T14:33:29.612749Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 13 14:33:29.613443 waagent[1648]: 2024-12-13T14:33:29.613391Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 14:33:29.636573 waagent[1648]: 2024-12-13T14:33:29.636467Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Dec 13 14:33:29.713794 waagent[1648]: 2024-12-13T14:33:29.713628Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CC6DFE9CB805B3C8941A2D8D3055313A9F4F9158', 'hasPrivateKey': False}
Dec 13 14:33:29.715000 waagent[1648]: 2024-12-13T14:33:29.714924Z INFO ExtHandler Downloaded certificate {'thumbprint': '2BD80AD54A4D91F912FC17A97E677DAF735267FD', 'hasPrivateKey': True}
Dec 13 14:33:29.716066 waagent[1648]: 2024-12-13T14:33:29.716006Z INFO ExtHandler Fetch goal state completed
Dec 13 14:33:29.738534 waagent[1648]: 2024-12-13T14:33:29.738364Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Dec 13 14:33:29.752462 waagent[1648]: 2024-12-13T14:33:29.752322Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1648
Dec 13 14:33:29.755804 waagent[1648]: 2024-12-13T14:33:29.755704Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 14:33:29.756995 waagent[1648]: 2024-12-13T14:33:29.756929Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Dec 13 14:33:29.757356 waagent[1648]: 2024-12-13T14:33:29.757298Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Dec 13 14:33:29.759453 waagent[1648]: 2024-12-13T14:33:29.759397Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 14:33:29.765499 waagent[1648]: 2024-12-13T14:33:29.765427Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 14:33:29.765976 waagent[1648]: 2024-12-13T14:33:29.765891Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 14:33:29.776097 waagent[1648]: 2024-12-13T14:33:29.776030Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 14:33:29.776787 waagent[1648]: 2024-12-13T14:33:29.776720Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 14:33:29.792017 waagent[1648]: 2024-12-13T14:33:29.791851Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now.
Dec 13 14:33:29.795844 waagent[1648]: 2024-12-13T14:33:29.795703Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver
Dec 13 14:33:29.797108 waagent[1648]: 2024-12-13T14:33:29.797018Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Dec 13 14:33:29.798842 waagent[1648]: 2024-12-13T14:33:29.798774Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 14:33:29.799252 waagent[1648]: 2024-12-13T14:33:29.799190Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:33:29.799434 waagent[1648]: 2024-12-13T14:33:29.799384Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:33:29.800350 waagent[1648]: 2024-12-13T14:33:29.800289Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 14:33:29.800819 waagent[1648]: 2024-12-13T14:33:29.800762Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 14:33:29.802560 waagent[1648]: 2024-12-13T14:33:29.802409Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 14:33:29.802633 waagent[1648]: 2024-12-13T14:33:29.802566Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:33:29.802888 waagent[1648]: 2024-12-13T14:33:29.802837Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:33:29.802985 waagent[1648]: 2024-12-13T14:33:29.802922Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 14:33:29.802985 waagent[1648]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 14:33:29.802985 waagent[1648]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 14:33:29.802985 waagent[1648]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 14:33:29.802985 waagent[1648]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:33:29.802985 waagent[1648]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:33:29.802985 waagent[1648]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:33:29.803549 waagent[1648]: 2024-12-13T14:33:29.803497Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 14:33:29.806412 waagent[1648]: 2024-12-13T14:33:29.806290Z INFO EnvHandler ExtHandler Configure routes
Dec 13 14:33:29.806635 waagent[1648]: 2024-12-13T14:33:29.806575Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 14:33:29.806792 waagent[1648]: 2024-12-13T14:33:29.806738Z INFO EnvHandler ExtHandler Routes:None
Dec 13 14:33:29.808852 waagent[1648]: 2024-12-13T14:33:29.808787Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 14:33:29.809122 waagent[1648]: 2024-12-13T14:33:29.809066Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 14:33:29.809552 waagent[1648]: 2024-12-13T14:33:29.809499Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 14:33:29.826356 waagent[1648]: 2024-12-13T14:33:29.826162Z INFO ExtHandler ExtHandler Downloading agent manifest
Dec 13 14:33:29.826890 waagent[1648]: 2024-12-13T14:33:29.826821Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 14:33:29.826890 waagent[1648]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 14:33:29.826890 waagent[1648]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 14:33:29.826890 waagent[1648]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:f0:d7 brd ff:ff:ff:ff:ff:ff
Dec 13 14:33:29.826890 waagent[1648]: 3: enP41321s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:35:f0:d7 brd ff:ff:ff:ff:ff:ff\ altname enP41321p0s2
Dec 13 14:33:29.826890 waagent[1648]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 14:33:29.826890 waagent[1648]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 14:33:29.826890 waagent[1648]: 2: eth0 inet 10.200.8.26/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 14:33:29.826890 waagent[1648]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 14:33:29.826890 waagent[1648]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 14:33:29.826890 waagent[1648]: 2: eth0 inet6 fe80::7e1e:52ff:fe35:f0d7/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 14:33:29.876035 waagent[1648]: 2024-12-13T14:33:29.875947Z INFO ExtHandler ExtHandler
Dec 13 14:33:29.880658 waagent[1648]: 2024-12-13T14:33:29.880426Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b9637d70-d112-4323-bc77-d58ff7dc769b correlation 5782dc8e-ebf6-47e6-bd87-980266514844 created: 2024-12-13T14:32:14.148865Z]
Dec 13 14:33:29.888161 waagent[1648]: 2024-12-13T14:33:29.888089Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 13 14:33:29.900622 waagent[1648]: 2024-12-13T14:33:29.900536Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 24 ms]
Dec 13 14:33:29.927138 waagent[1648]: 2024-12-13T14:33:29.927050Z INFO EnvHandler ExtHandler Current Firewall rules:
Dec 13 14:33:29.927138 waagent[1648]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:33:29.927138 waagent[1648]: pkts bytes target prot opt in out source destination
Dec 13 14:33:29.927138 waagent[1648]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:33:29.927138 waagent[1648]: pkts bytes target prot opt in out source destination
Dec 13 14:33:29.927138 waagent[1648]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:33:29.927138 waagent[1648]: pkts bytes target prot opt in out source destination
Dec 13 14:33:29.927138 waagent[1648]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 14:33:29.927138 waagent[1648]: 152 20102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 14:33:29.927138 waagent[1648]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 14:33:29.933831 waagent[1648]: 2024-12-13T14:33:29.933753Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Dec 13 14:33:29.951468 waagent[1648]: 2024-12-13T14:33:29.951369Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E57ADD72-A293-4209-8EE2-FF5731FE392E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Dec 13 14:33:34.092076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:33:34.092425 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:34.094794 systemd[1]: Starting kubelet.service...
Dec 13 14:33:34.194030 systemd[1]: Started kubelet.service.
Dec 13 14:33:34.778358 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Dec 13 14:33:34.787593 kubelet[1692]: E1213 14:33:34.787529 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:33:34.789482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:33:34.789659 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:33:44.842152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 14:33:44.842489 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:44.845063 systemd[1]: Starting kubelet.service...
Dec 13 14:33:45.180257 systemd[1]: Started kubelet.service.
Dec 13 14:33:45.550919 kubelet[1701]: E1213 14:33:45.550748 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:33:45.552766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:33:45.552961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:33:50.972715 update_engine[1425]: I1213 14:33:50.972600 1425 update_attempter.cc:509] Updating boot flags...
Dec 13 14:33:51.190230 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:33:51.192951 systemd[1]: Started sshd@0-10.200.8.26:22-10.200.16.10:60500.service.
Dec 13 14:33:51.973077 sshd[1749]: Accepted publickey for core from 10.200.16.10 port 60500 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:33:51.975167 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:51.980057 systemd-logind[1424]: New session 3 of user core.
Dec 13 14:33:51.981439 systemd[1]: Started session-3.scope.
Dec 13 14:33:52.589085 systemd[1]: Started sshd@1-10.200.8.26:22-10.200.16.10:60504.service.
Dec 13 14:33:53.302128 sshd[1754]: Accepted publickey for core from 10.200.16.10 port 60504 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:33:53.304286 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:53.310952 systemd[1]: Started session-4.scope.
Dec 13 14:33:53.311677 systemd-logind[1424]: New session 4 of user core.
Dec 13 14:33:53.806285 sshd[1754]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:53.810186 systemd[1]: sshd@1-10.200.8.26:22-10.200.16.10:60504.service: Deactivated successfully.
Dec 13 14:33:53.811283 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:33:53.812025 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:33:53.812844 systemd-logind[1424]: Removed session 4.
Dec 13 14:33:53.925365 systemd[1]: Started sshd@2-10.200.8.26:22-10.200.16.10:60512.service.
Dec 13 14:33:54.643485 sshd[1761]: Accepted publickey for core from 10.200.16.10 port 60512 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:33:54.645414 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:54.650856 systemd[1]: Started session-5.scope.
Dec 13 14:33:54.651565 systemd-logind[1424]: New session 5 of user core.
Dec 13 14:33:55.141504 sshd[1761]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:55.145437 systemd[1]: sshd@2-10.200.8.26:22-10.200.16.10:60512.service: Deactivated successfully.
Dec 13 14:33:55.146472 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:33:55.147171 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:33:55.148056 systemd-logind[1424]: Removed session 5.
Dec 13 14:33:55.260702 systemd[1]: Started sshd@3-10.200.8.26:22-10.200.16.10:60528.service.
Dec 13 14:33:55.592330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 14:33:55.592649 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:55.595250 systemd[1]: Starting kubelet.service...
Dec 13 14:33:55.936178 systemd[1]: Started kubelet.service.
Dec 13 14:33:55.975762 sshd[1767]: Accepted publickey for core from 10.200.16.10 port 60528 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:33:55.977602 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:55.983199 systemd[1]: Started session-6.scope.
Dec 13 14:33:55.983856 systemd-logind[1424]: New session 6 of user core.
Dec 13 14:33:56.292190 kubelet[1773]: E1213 14:33:56.292010 1773 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:33:56.294266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:33:56.294452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:33:56.481207 sshd[1767]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:56.484936 systemd[1]: sshd@3-10.200.8.26:22-10.200.16.10:60528.service: Deactivated successfully.
Dec 13 14:33:56.486025 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:33:56.486812 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:33:56.487818 systemd-logind[1424]: Removed session 6.
Dec 13 14:33:56.600751 systemd[1]: Started sshd@4-10.200.8.26:22-10.200.16.10:60538.service.
Dec 13 14:33:57.315563 sshd[1783]: Accepted publickey for core from 10.200.16.10 port 60538 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM
Dec 13 14:33:57.317680 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:33:57.324200 systemd[1]: Started session-7.scope.
Dec 13 14:33:57.325209 systemd-logind[1424]: New session 7 of user core.
Dec 13 14:33:57.773832 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:33:57.774171 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:33:57.789689 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:33:57.838505 coreos-metadata[1790]: Dec 13 14:33:57.838 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 14:33:57.841760 coreos-metadata[1790]: Dec 13 14:33:57.841 INFO Fetch successful
Dec 13 14:33:57.841974 coreos-metadata[1790]: Dec 13 14:33:57.841 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 13 14:33:57.844127 coreos-metadata[1790]: Dec 13 14:33:57.844 INFO Fetch successful
Dec 13 14:33:57.844279 coreos-metadata[1790]: Dec 13 14:33:57.844 INFO Fetching http://168.63.129.16/machine/d0b00b57-39e9-4038-a6a8-797312afca02/36ffcf9a%2D5853%2D4134%2D9a7b%2D939612b58ff2.%5Fci%2D3510.3.6%2Da%2Ddd62f2eb18?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 13 14:33:57.846318 coreos-metadata[1790]: Dec 13 14:33:57.846 INFO Fetch successful
Dec 13 14:33:57.887030 coreos-metadata[1790]: Dec 13 14:33:57.886 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 13 14:33:57.899070 coreos-metadata[1790]: Dec 13 14:33:57.899 INFO Fetch successful
Dec 13 14:33:57.910268 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:33:58.406824 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:58.410514 systemd[1]: Starting kubelet.service...
Dec 13 14:33:58.459745 systemd[1]: Reloading.
Dec 13 14:33:58.600996 /usr/lib/systemd/system-generators/torcx-generator[1845]: time="2024-12-13T14:33:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:33:58.601521 /usr/lib/systemd/system-generators/torcx-generator[1845]: time="2024-12-13T14:33:58Z" level=info msg="torcx already run"
Dec 13 14:33:58.680730 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:33:58.680752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:33:58.698229 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:33:58.799301 systemd[1]: Started kubelet.service.
Dec 13 14:33:58.801748 systemd[1]: Stopping kubelet.service...
Dec 13 14:33:58.802130 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:33:58.802374 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:58.804573 systemd[1]: Starting kubelet.service...
Dec 13 14:33:59.064697 systemd[1]: Started kubelet.service.
Dec 13 14:33:59.741020 kubelet[1913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:33:59.741020 kubelet[1913]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:33:59.741020 kubelet[1913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:33:59.742413 kubelet[1913]: I1213 14:33:59.742354 1913 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:34:00.179451 kubelet[1913]: I1213 14:34:00.179396 1913 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:34:00.179451 kubelet[1913]: I1213 14:34:00.179436 1913 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:34:00.179865 kubelet[1913]: I1213 14:34:00.179843 1913 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:34:00.207438 kubelet[1913]: I1213 14:34:00.206817 1913 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:34:00.218818 kubelet[1913]: E1213 14:34:00.218735 1913 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:34:00.218818 kubelet[1913]: I1213 14:34:00.218813 1913 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:34:00.224374 kubelet[1913]: I1213 14:34:00.224342 1913 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:34:00.224579 kubelet[1913]: I1213 14:34:00.224476 1913 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:34:00.224674 kubelet[1913]: I1213 14:34:00.224629 1913 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:34:00.224928 kubelet[1913]: I1213 14:34:00.224674 1913 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:34:00.225113 kubelet[1913]: I1213 14:34:00.224956 1913 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:34:00.225113 kubelet[1913]: I1213 14:34:00.224971 1913 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:34:00.225201 kubelet[1913]: I1213 14:34:00.225127 1913 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:34:00.230199 kubelet[1913]: I1213 14:34:00.230167 1913 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:34:00.230315 kubelet[1913]: I1213 14:34:00.230209 1913 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:34:00.230315 kubelet[1913]: I1213 14:34:00.230256 1913 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:34:00.230315 kubelet[1913]: I1213 14:34:00.230276 1913 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:34:00.230544 kubelet[1913]: E1213 14:34:00.230527 1913 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:00.230636 kubelet[1913]: E1213 14:34:00.230627 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:00.238927 kubelet[1913]: I1213 14:34:00.238887 1913 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:34:00.247254 kubelet[1913]: I1213 14:34:00.247220 1913 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:34:00.247558 kubelet[1913]: W1213 14:34:00.247540 1913 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:34:00.248401 kubelet[1913]: I1213 14:34:00.248373 1913 server.go:1269] "Started kubelet"
Dec 13 14:34:00.256380 kubelet[1913]: I1213 14:34:00.256338 1913 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:34:00.257272 kubelet[1913]: I1213 14:34:00.257205 1913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:34:00.257763 kubelet[1913]: I1213 14:34:00.257738 1913 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:34:00.258071 kubelet[1913]: I1213 14:34:00.258052 1913 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 14:34:00.261334 kubelet[1913]: E1213 14:34:00.261312 1913 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:34:00.268011 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:34:00.268170 kubelet[1913]: I1213 14:34:00.268153 1913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:34:00.268728 kubelet[1913]: I1213 14:34:00.268707 1913 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 14:34:00.271393 kubelet[1913]: E1213 14:34:00.271366 1913 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.8.26\" not found"
Dec 13 14:34:00.271540 kubelet[1913]: I1213 14:34:00.271528 1913 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 14:34:00.272038 kubelet[1913]: I1213 14:34:00.272019 1913 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 14:34:00.272206 kubelet[1913]: I1213 14:34:00.272195 1913 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:34:00.273373 kubelet[1913]: I1213 14:34:00.273355 1913 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:34:00.273690 kubelet[1913]: I1213 14:34:00.273650 1913 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:34:00.276802 kubelet[1913]: I1213 14:34:00.276779 1913 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:34:00.283124 kubelet[1913]: E1213 14:34:00.283092 1913 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.26\" not found" node="10.200.8.26"
Dec 13 14:34:00.289396 kubelet[1913]: I1213 14:34:00.289376 1913 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:34:00.289579 kubelet[1913]: I1213 14:34:00.289559 1913 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:34:00.289669 kubelet[1913]: I1213 14:34:00.289588 1913 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:34:00.295150 kubelet[1913]: I1213 14:34:00.295124 1913 policy_none.go:49] "None policy: Start"
Dec 13 14:34:00.296038 kubelet[1913]: I1213 14:34:00.296017 1913 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:34:00.296130 kubelet[1913]: I1213 14:34:00.296042 1913 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:34:00.307396 systemd[1]: Created slice kubepods.slice.
Dec 13 14:34:00.311144 kubelet[1913]: I1213 14:34:00.310250 1913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:34:00.312020 kubelet[1913]: I1213 14:34:00.311992 1913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:34:00.312122 kubelet[1913]: I1213 14:34:00.312028 1913 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:34:00.312122 kubelet[1913]: I1213 14:34:00.312055 1913 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:34:00.312122 kubelet[1913]: E1213 14:34:00.312100 1913 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:34:00.319517 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:34:00.322545 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:34:00.328617 kubelet[1913]: I1213 14:34:00.328592 1913 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:34:00.328889 kubelet[1913]: I1213 14:34:00.328878 1913 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:34:00.329038 kubelet[1913]: I1213 14:34:00.328989 1913 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:34:00.330445 kubelet[1913]: I1213 14:34:00.330427 1913 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:34:00.335299 kubelet[1913]: E1213 14:34:00.335269 1913 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.26\" not found"
Dec 13 14:34:00.431369 kubelet[1913]: I1213 14:34:00.431208 1913 kubelet_node_status.go:72] "Attempting to register node" node="10.200.8.26"
Dec 13 14:34:00.440937 kubelet[1913]: I1213 14:34:00.440883 1913 kubelet_node_status.go:75] "Successfully registered node" node="10.200.8.26"
Dec 13 14:34:00.554546 kubelet[1913]: I1213 14:34:00.554502 1913 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 14:34:00.555035 env[1432]: time="2024-12-13T14:34:00.554965849Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:34:00.555564 kubelet[1913]: I1213 14:34:00.555249 1913 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 14:34:00.604850 sudo[1786]: pam_unix(sudo:session): session closed for user root
Dec 13 14:34:00.724889 sshd[1783]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:00.729508 systemd[1]: sshd@4-10.200.8.26:22-10.200.16.10:60538.service: Deactivated successfully.
Dec 13 14:34:00.730661 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:34:00.731481 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:34:00.732437 systemd-logind[1424]: Removed session 7.
Dec 13 14:34:01.182426 kubelet[1913]: I1213 14:34:01.182366 1913 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:34:01.183066 kubelet[1913]: W1213 14:34:01.182709 1913 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:34:01.183221 kubelet[1913]: W1213 14:34:01.183186 1913 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:34:01.183292 kubelet[1913]: W1213 14:34:01.183261 1913 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:34:01.231010 kubelet[1913]: I1213 14:34:01.230893 1913 apiserver.go:52] "Watching apiserver"
Dec 13 14:34:01.231320 kubelet[1913]: E1213 14:34:01.230892 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:01.245136 systemd[1]: Created slice kubepods-burstable-podc679962a_ec7d_419d_89d2_5161175b6777.slice.
Dec 13 14:34:01.253307 systemd[1]: Created slice kubepods-besteffort-podd4d12969_4cea_4d90_8df5_86da15ac3d7b.slice.
Dec 13 14:34:01.273566 kubelet[1913]: I1213 14:34:01.273529 1913 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 14:34:01.275472 kubelet[1913]: I1213 14:34:01.275435 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4d12969-4cea-4d90-8df5-86da15ac3d7b-xtables-lock\") pod \"kube-proxy-d5hhl\" (UID: \"d4d12969-4cea-4d90-8df5-86da15ac3d7b\") " pod="kube-system/kube-proxy-d5hhl"
Dec 13 14:34:01.275596 kubelet[1913]: I1213 14:34:01.275474 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4d12969-4cea-4d90-8df5-86da15ac3d7b-lib-modules\") pod \"kube-proxy-d5hhl\" (UID: \"d4d12969-4cea-4d90-8df5-86da15ac3d7b\") " pod="kube-system/kube-proxy-d5hhl"
Dec 13 14:34:01.275596 kubelet[1913]: I1213 14:34:01.275503 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtv2m\" (UniqueName: \"kubernetes.io/projected/d4d12969-4cea-4d90-8df5-86da15ac3d7b-kube-api-access-wtv2m\") pod \"kube-proxy-d5hhl\" (UID: \"d4d12969-4cea-4d90-8df5-86da15ac3d7b\") " pod="kube-system/kube-proxy-d5hhl"
Dec 13 14:34:01.275596 kubelet[1913]: I1213 14:34:01.275527 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cilium-run\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275596 kubelet[1913]: I1213 14:34:01.275549 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-lib-modules\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275596 kubelet[1913]: I1213 14:34:01.275574 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-xtables-lock\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275596 kubelet[1913]: I1213 14:34:01.275595 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c679962a-ec7d-419d-89d2-5161175b6777-cilium-config-path\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275839 kubelet[1913]: I1213 14:34:01.275621 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-host-proc-sys-kernel\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275839 kubelet[1913]: I1213 14:34:01.275663 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-bpf-maps\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275839 kubelet[1913]: I1213 14:34:01.275687 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-hostproc\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275839 kubelet[1913]: I1213 14:34:01.275708 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cni-path\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275839 kubelet[1913]: I1213 14:34:01.275731 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-etc-cni-netd\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.275839 kubelet[1913]: I1213 14:34:01.275771 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c679962a-ec7d-419d-89d2-5161175b6777-clustermesh-secrets\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.276088 kubelet[1913]: I1213 14:34:01.275798 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-host-proc-sys-net\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.276088 kubelet[1913]: I1213 14:34:01.275821 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c679962a-ec7d-419d-89d2-5161175b6777-hubble-tls\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.276088 kubelet[1913]: I1213 14:34:01.275845 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85vbg\" (UniqueName: \"kubernetes.io/projected/c679962a-ec7d-419d-89d2-5161175b6777-kube-api-access-85vbg\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.276088 kubelet[1913]: I1213 14:34:01.275869 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cilium-cgroup\") pod \"cilium-96qf7\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") " pod="kube-system/cilium-96qf7"
Dec 13 14:34:01.276088 kubelet[1913]: I1213 14:34:01.275894 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4d12969-4cea-4d90-8df5-86da15ac3d7b-kube-proxy\") pod \"kube-proxy-d5hhl\" (UID: \"d4d12969-4cea-4d90-8df5-86da15ac3d7b\") " pod="kube-system/kube-proxy-d5hhl"
Dec 13 14:34:01.378544 kubelet[1913]: I1213 14:34:01.378459 1913 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 13 14:34:01.554008 env[1432]: time="2024-12-13T14:34:01.553157713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-96qf7,Uid:c679962a-ec7d-419d-89d2-5161175b6777,Namespace:kube-system,Attempt:0,}"
Dec 13 14:34:01.568620 env[1432]: time="2024-12-13T14:34:01.568567320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5hhl,Uid:d4d12969-4cea-4d90-8df5-86da15ac3d7b,Namespace:kube-system,Attempt:0,}"
Dec 13 14:34:02.165538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount664750375.mount: Deactivated successfully.
Dec 13 14:34:02.189378 env[1432]: time="2024-12-13T14:34:02.189319055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:02.191759 env[1432]: time="2024-12-13T14:34:02.191715391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:02.201343 env[1432]: time="2024-12-13T14:34:02.201299837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:02.205687 env[1432]: time="2024-12-13T14:34:02.205649603Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:02.208746 env[1432]: time="2024-12-13T14:34:02.208707750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:02.213184 env[1432]: time="2024-12-13T14:34:02.213146517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:02.215639 env[1432]: time="2024-12-13T14:34:02.215599055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:02.225557 env[1432]: time="2024-12-13T14:34:02.225472605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:02.232003 kubelet[1913]: E1213 14:34:02.231902 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:02.284953 env[1432]: time="2024-12-13T14:34:02.283110882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:34:02.284953 env[1432]: time="2024-12-13T14:34:02.283154382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:34:02.284953 env[1432]: time="2024-12-13T14:34:02.283165482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:34:02.284953 env[1432]: time="2024-12-13T14:34:02.283386686Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddeeb13c067ae2f6fe0a399647dd3aeb9a0333285274c768f6972bf9d8b0a09d pid=1958 runtime=io.containerd.runc.v2
Dec 13 14:34:02.296513 env[1432]: time="2024-12-13T14:34:02.296401284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:34:02.296738 env[1432]: time="2024-12-13T14:34:02.296524086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:34:02.296738 env[1432]: time="2024-12-13T14:34:02.296549986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:34:02.296738 env[1432]: time="2024-12-13T14:34:02.296713589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33 pid=1977 runtime=io.containerd.runc.v2
Dec 13 14:34:02.308847 systemd[1]: Started cri-containerd-ddeeb13c067ae2f6fe0a399647dd3aeb9a0333285274c768f6972bf9d8b0a09d.scope.
Dec 13 14:34:02.345670 systemd[1]: Started cri-containerd-6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33.scope.
Dec 13 14:34:02.361300 env[1432]: time="2024-12-13T14:34:02.361200370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d5hhl,Uid:d4d12969-4cea-4d90-8df5-86da15ac3d7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddeeb13c067ae2f6fe0a399647dd3aeb9a0333285274c768f6972bf9d8b0a09d\""
Dec 13 14:34:02.364758 env[1432]: time="2024-12-13T14:34:02.364713123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 14:34:02.399377 env[1432]: time="2024-12-13T14:34:02.399324850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-96qf7,Uid:c679962a-ec7d-419d-89d2-5161175b6777,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\""
Dec 13 14:34:03.232643 kubelet[1913]: E1213 14:34:03.232577 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:03.682252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926443802.mount: Deactivated successfully.
Dec 13 14:34:04.232967 kubelet[1913]: E1213 14:34:04.232857 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:04.320805 env[1432]: time="2024-12-13T14:34:04.320741392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:04.327931 env[1432]: time="2024-12-13T14:34:04.327867594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:04.332029 env[1432]: time="2024-12-13T14:34:04.331985554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:04.338752 env[1432]: time="2024-12-13T14:34:04.338709450Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:04.339130 env[1432]: time="2024-12-13T14:34:04.339096656Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 14:34:04.341536 env[1432]: time="2024-12-13T14:34:04.341258287Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:34:04.342187 env[1432]: time="2024-12-13T14:34:04.342153900Z" level=info msg="CreateContainer within sandbox \"ddeeb13c067ae2f6fe0a399647dd3aeb9a0333285274c768f6972bf9d8b0a09d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:34:04.396101 env[1432]: time="2024-12-13T14:34:04.396031375Z" level=info msg="CreateContainer within sandbox \"ddeeb13c067ae2f6fe0a399647dd3aeb9a0333285274c768f6972bf9d8b0a09d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60baa1b4ed0129b6a8306718a9f2a139bd1d9699a101a3c9f98c45bd98b79a66\""
Dec 13 14:34:04.397074 env[1432]: time="2024-12-13T14:34:04.397029089Z" level=info msg="StartContainer for \"60baa1b4ed0129b6a8306718a9f2a139bd1d9699a101a3c9f98c45bd98b79a66\""
Dec 13 14:34:04.427219 systemd[1]: Started cri-containerd-60baa1b4ed0129b6a8306718a9f2a139bd1d9699a101a3c9f98c45bd98b79a66.scope.
Dec 13 14:34:04.473007 env[1432]: time="2024-12-13T14:34:04.472936881Z" level=info msg="StartContainer for \"60baa1b4ed0129b6a8306718a9f2a139bd1d9699a101a3c9f98c45bd98b79a66\" returns successfully"
Dec 13 14:34:04.682172 systemd[1]: run-containerd-runc-k8s.io-60baa1b4ed0129b6a8306718a9f2a139bd1d9699a101a3c9f98c45bd98b79a66-runc.X7EAJ6.mount: Deactivated successfully.
Dec 13 14:34:05.233195 kubelet[1913]: E1213 14:34:05.233130 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:05.352427 kubelet[1913]: I1213 14:34:05.352315 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d5hhl" podStartSLOduration=3.375870725 podStartE2EDuration="5.352287488s" podCreationTimestamp="2024-12-13 14:34:00 +0000 UTC" firstStartedPulling="2024-12-13 14:34:02.36385801 +0000 UTC m=+3.290158505" lastFinishedPulling="2024-12-13 14:34:04.340274773 +0000 UTC m=+5.266575268" observedRunningTime="2024-12-13 14:34:05.352112186 +0000 UTC m=+6.278412781" watchObservedRunningTime="2024-12-13 14:34:05.352287488 +0000 UTC m=+6.278588083"
Dec 13 14:34:06.233861 kubelet[1913]: E1213 14:34:06.233752 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:07.234693 kubelet[1913]: E1213 14:34:07.234544 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:08.235931 kubelet[1913]: E1213 14:34:08.235845 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:09.236424 kubelet[1913]: E1213 14:34:09.236350 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:10.237300 kubelet[1913]: E1213 14:34:10.237226 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:11.238354 kubelet[1913]: E1213 14:34:11.238261 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:12.239302 kubelet[1913]: E1213 14:34:12.239231 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:13.239607 kubelet[1913]: E1213 14:34:13.239520 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:14.240072 kubelet[1913]: E1213 14:34:14.239990 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:15.240736 kubelet[1913]: E1213 14:34:15.240686 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:15.542427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322799741.mount: Deactivated successfully.
Dec 13 14:34:16.241196 kubelet[1913]: E1213 14:34:16.241136 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:17.242390 kubelet[1913]: E1213 14:34:17.242275 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:18.243380 kubelet[1913]: E1213 14:34:18.243320 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:18.283267 env[1432]: time="2024-12-13T14:34:18.283194688Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:18.290830 env[1432]: time="2024-12-13T14:34:18.290769862Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:18.296039 env[1432]: time="2024-12-13T14:34:18.295985814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:18.296524 env[1432]: time="2024-12-13T14:34:18.296483019Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:34:18.299264 env[1432]: time="2024-12-13T14:34:18.299228646Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:34:18.337694 env[1432]: time="2024-12-13T14:34:18.337627523Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\""
Dec 13 14:34:18.338571 env[1432]: time="2024-12-13T14:34:18.338533032Z" level=info msg="StartContainer for \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\""
Dec 13 14:34:18.371549 systemd[1]: Started cri-containerd-c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0.scope.
Dec 13 14:34:18.416306 env[1432]: time="2024-12-13T14:34:18.416227595Z" level=info msg="StartContainer for \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\" returns successfully"
Dec 13 14:34:18.424508 systemd[1]: cri-containerd-c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0.scope: Deactivated successfully.
Dec 13 14:34:19.244610 kubelet[1913]: E1213 14:34:19.244538 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:19.321644 systemd[1]: run-containerd-runc-k8s.io-c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0-runc.4NEwzT.mount: Deactivated successfully.
Dec 13 14:34:19.321780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0-rootfs.mount: Deactivated successfully.
Dec 13 14:34:20.231494 kubelet[1913]: E1213 14:34:20.231418 1913 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:20.245994 kubelet[1913]: E1213 14:34:20.245899 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:21.246186 kubelet[1913]: E1213 14:34:21.246113 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:22.247024 kubelet[1913]: E1213 14:34:22.246942 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:22.822200 env[1432]: time="2024-12-13T14:34:22.822124803Z" level=info msg="shim disconnected" id=c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0 Dec 13 14:34:22.822200 env[1432]: time="2024-12-13T14:34:22.822187904Z" level=warning msg="cleaning up after shim disconnected" id=c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0 namespace=k8s.io Dec 13 14:34:22.822200 env[1432]: time="2024-12-13T14:34:22.822202904Z" level=info msg="cleaning up dead shim" Dec 13 14:34:22.835169 env[1432]: time="2024-12-13T14:34:22.835100618Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2256 runtime=io.containerd.runc.v2\n" Dec 13 14:34:23.247385 kubelet[1913]: E1213 14:34:23.247334 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:23.396356 env[1432]: time="2024-12-13T14:34:23.396304301Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:34:23.421523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001095976.mount: 
Deactivated successfully. Dec 13 14:34:23.440976 env[1432]: time="2024-12-13T14:34:23.440924486Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\"" Dec 13 14:34:23.441637 env[1432]: time="2024-12-13T14:34:23.441594692Z" level=info msg="StartContainer for \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\"" Dec 13 14:34:23.468123 systemd[1]: Started cri-containerd-c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f.scope. Dec 13 14:34:23.508454 env[1432]: time="2024-12-13T14:34:23.507818764Z" level=info msg="StartContainer for \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\" returns successfully" Dec 13 14:34:23.518163 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:34:23.518780 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:34:23.519058 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:34:23.525149 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:34:23.525542 systemd[1]: cri-containerd-c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f.scope: Deactivated successfully. Dec 13 14:34:23.534575 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:34:23.569368 env[1432]: time="2024-12-13T14:34:23.569307195Z" level=info msg="shim disconnected" id=c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f Dec 13 14:34:23.569368 env[1432]: time="2024-12-13T14:34:23.569364495Z" level=warning msg="cleaning up after shim disconnected" id=c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f namespace=k8s.io Dec 13 14:34:23.569368 env[1432]: time="2024-12-13T14:34:23.569376295Z" level=info msg="cleaning up dead shim" Dec 13 14:34:23.578420 env[1432]: time="2024-12-13T14:34:23.578376373Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2321 runtime=io.containerd.runc.v2\n" Dec 13 14:34:24.248386 kubelet[1913]: E1213 14:34:24.248325 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:24.400929 env[1432]: time="2024-12-13T14:34:24.400853489Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:34:24.418078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f-rootfs.mount: Deactivated successfully. Dec 13 14:34:24.443896 env[1432]: time="2024-12-13T14:34:24.443819250Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\"" Dec 13 14:34:24.444821 env[1432]: time="2024-12-13T14:34:24.444705858Z" level=info msg="StartContainer for \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\"" Dec 13 14:34:24.480222 systemd[1]: Started cri-containerd-0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41.scope. 
Dec 13 14:34:24.517159 systemd[1]: cri-containerd-0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41.scope: Deactivated successfully. Dec 13 14:34:24.526885 env[1432]: time="2024-12-13T14:34:24.526822549Z" level=info msg="StartContainer for \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\" returns successfully" Dec 13 14:34:24.550509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41-rootfs.mount: Deactivated successfully. Dec 13 14:34:24.566511 env[1432]: time="2024-12-13T14:34:24.566448682Z" level=info msg="shim disconnected" id=0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41 Dec 13 14:34:24.566511 env[1432]: time="2024-12-13T14:34:24.566510183Z" level=warning msg="cleaning up after shim disconnected" id=0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41 namespace=k8s.io Dec 13 14:34:24.566511 env[1432]: time="2024-12-13T14:34:24.566522583Z" level=info msg="cleaning up dead shim" Dec 13 14:34:24.577245 env[1432]: time="2024-12-13T14:34:24.577184573Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2379 runtime=io.containerd.runc.v2\n" Dec 13 14:34:25.249029 kubelet[1913]: E1213 14:34:25.248963 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:25.405123 env[1432]: time="2024-12-13T14:34:25.405002857Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:34:25.454448 env[1432]: time="2024-12-13T14:34:25.454369762Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id 
\"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\"" Dec 13 14:34:25.455281 env[1432]: time="2024-12-13T14:34:25.455236769Z" level=info msg="StartContainer for \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\"" Dec 13 14:34:25.489367 systemd[1]: Started cri-containerd-d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873.scope. Dec 13 14:34:25.519668 systemd[1]: cri-containerd-d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873.scope: Deactivated successfully. Dec 13 14:34:25.527028 env[1432]: time="2024-12-13T14:34:25.526969458Z" level=info msg="StartContainer for \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\" returns successfully" Dec 13 14:34:25.552339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873-rootfs.mount: Deactivated successfully. Dec 13 14:34:25.570160 env[1432]: time="2024-12-13T14:34:25.570090512Z" level=info msg="shim disconnected" id=d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873 Dec 13 14:34:25.570160 env[1432]: time="2024-12-13T14:34:25.570160412Z" level=warning msg="cleaning up after shim disconnected" id=d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873 namespace=k8s.io Dec 13 14:34:25.570160 env[1432]: time="2024-12-13T14:34:25.570175612Z" level=info msg="cleaning up dead shim" Dec 13 14:34:25.580757 env[1432]: time="2024-12-13T14:34:25.580690999Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2436 runtime=io.containerd.runc.v2\n" Dec 13 14:34:26.249977 kubelet[1913]: E1213 14:34:26.249874 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:26.409738 env[1432]: time="2024-12-13T14:34:26.409677418Z" level=info msg="CreateContainer within sandbox 
\"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:34:26.439835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318994559.mount: Deactivated successfully. Dec 13 14:34:26.447259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360594562.mount: Deactivated successfully. Dec 13 14:34:26.459399 env[1432]: time="2024-12-13T14:34:26.459326815Z" level=info msg="CreateContainer within sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\"" Dec 13 14:34:26.460408 env[1432]: time="2024-12-13T14:34:26.460365524Z" level=info msg="StartContainer for \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\"" Dec 13 14:34:26.484072 systemd[1]: Started cri-containerd-b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670.scope. 
Dec 13 14:34:26.535139 env[1432]: time="2024-12-13T14:34:26.534552717Z" level=info msg="StartContainer for \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\" returns successfully" Dec 13 14:34:26.723398 kubelet[1913]: I1213 14:34:26.722131 1913 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 14:34:27.250946 kubelet[1913]: E1213 14:34:27.250829 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:27.356400 kernel: Initializing XFRM netlink socket Dec 13 14:34:27.436531 kubelet[1913]: I1213 14:34:27.436433 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-96qf7" podStartSLOduration=11.539400187 podStartE2EDuration="27.436405448s" podCreationTimestamp="2024-12-13 14:34:00 +0000 UTC" firstStartedPulling="2024-12-13 14:34:02.40068887 +0000 UTC m=+3.326989465" lastFinishedPulling="2024-12-13 14:34:18.297694231 +0000 UTC m=+19.223994726" observedRunningTime="2024-12-13 14:34:27.435938744 +0000 UTC m=+28.362239239" watchObservedRunningTime="2024-12-13 14:34:27.436405448 +0000 UTC m=+28.362706043" Dec 13 14:34:28.251494 kubelet[1913]: E1213 14:34:28.251418 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:29.009614 systemd[1]: Created slice kubepods-besteffort-pod2f6218f9_7f5f_474f_a022_2f273e3d4f37.slice. 
Dec 13 14:34:29.056804 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:34:29.057016 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:34:29.052351 systemd-networkd[1600]: cilium_host: Link UP Dec 13 14:34:29.052620 systemd-networkd[1600]: cilium_net: Link UP Dec 13 14:34:29.052807 systemd-networkd[1600]: cilium_net: Gained carrier Dec 13 14:34:29.058469 systemd-networkd[1600]: cilium_host: Gained carrier Dec 13 14:34:29.087410 kubelet[1913]: I1213 14:34:29.086647 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mtlb\" (UniqueName: \"kubernetes.io/projected/2f6218f9-7f5f-474f-a022-2f273e3d4f37-kube-api-access-6mtlb\") pod \"nginx-deployment-8587fbcb89-z8bsx\" (UID: \"2f6218f9-7f5f-474f-a022-2f273e3d4f37\") " pod="default/nginx-deployment-8587fbcb89-z8bsx" Dec 13 14:34:29.193799 systemd-networkd[1600]: cilium_vxlan: Link UP Dec 13 14:34:29.193811 systemd-networkd[1600]: cilium_vxlan: Gained carrier Dec 13 14:34:29.252376 kubelet[1913]: E1213 14:34:29.252310 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:29.314406 env[1432]: time="2024-12-13T14:34:29.314216600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-z8bsx,Uid:2f6218f9-7f5f-474f-a022-2f273e3d4f37,Namespace:default,Attempt:0,}" Dec 13 14:34:29.339207 systemd-networkd[1600]: cilium_host: Gained IPv6LL Dec 13 14:34:29.575252 kernel: NET: Registered PF_ALG protocol family Dec 13 14:34:29.708129 systemd-networkd[1600]: cilium_net: Gained IPv6LL Dec 13 14:34:30.253484 kubelet[1913]: E1213 14:34:30.253413 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:30.551035 systemd-networkd[1600]: lxc_health: Link UP Dec 13 14:34:30.561626 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: 
link becomes ready Dec 13 14:34:30.561342 systemd-networkd[1600]: lxc_health: Gained carrier Dec 13 14:34:30.895063 systemd-networkd[1600]: lxcfc394e244173: Link UP Dec 13 14:34:30.903942 kernel: eth0: renamed from tmp1a2f8 Dec 13 14:34:30.914961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfc394e244173: link becomes ready Dec 13 14:34:30.919228 systemd-networkd[1600]: lxcfc394e244173: Gained carrier Dec 13 14:34:30.926105 systemd-networkd[1600]: cilium_vxlan: Gained IPv6LL Dec 13 14:34:31.254532 kubelet[1913]: E1213 14:34:31.254341 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:32.011203 systemd-networkd[1600]: lxc_health: Gained IPv6LL Dec 13 14:34:32.203179 systemd-networkd[1600]: lxcfc394e244173: Gained IPv6LL Dec 13 14:34:32.255029 kubelet[1913]: E1213 14:34:32.254958 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:33.256860 kubelet[1913]: E1213 14:34:33.256779 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:34.257765 kubelet[1913]: E1213 14:34:34.257677 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:34.989667 env[1432]: time="2024-12-13T14:34:34.989563989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:34.989667 env[1432]: time="2024-12-13T14:34:34.989617189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:34.989667 env[1432]: time="2024-12-13T14:34:34.989631489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:34.990498 env[1432]: time="2024-12-13T14:34:34.990411494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a2f8bac5ef75ae613709a5f5bba774d57845d3d8302c6ae3a7095ed8b3db2ba pid=2962 runtime=io.containerd.runc.v2 Dec 13 14:34:35.020284 systemd[1]: run-containerd-runc-k8s.io-1a2f8bac5ef75ae613709a5f5bba774d57845d3d8302c6ae3a7095ed8b3db2ba-runc.fVGkIo.mount: Deactivated successfully. Dec 13 14:34:35.024377 systemd[1]: Started cri-containerd-1a2f8bac5ef75ae613709a5f5bba774d57845d3d8302c6ae3a7095ed8b3db2ba.scope. Dec 13 14:34:35.072041 env[1432]: time="2024-12-13T14:34:35.071460117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-z8bsx,Uid:2f6218f9-7f5f-474f-a022-2f273e3d4f37,Namespace:default,Attempt:0,} returns sandbox id \"1a2f8bac5ef75ae613709a5f5bba774d57845d3d8302c6ae3a7095ed8b3db2ba\"" Dec 13 14:34:35.073867 env[1432]: time="2024-12-13T14:34:35.073733732Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:34:35.259306 kubelet[1913]: E1213 14:34:35.259136 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:36.260103 kubelet[1913]: E1213 14:34:36.259995 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:37.260662 kubelet[1913]: E1213 14:34:37.260592 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:38.062953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285809160.mount: Deactivated successfully. 
Dec 13 14:34:38.261273 kubelet[1913]: E1213 14:34:38.261170 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:39.262013 kubelet[1913]: E1213 14:34:39.261954 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:39.568433 env[1432]: time="2024-12-13T14:34:39.568227355Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:39.573225 env[1432]: time="2024-12-13T14:34:39.573140583Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:39.578700 env[1432]: time="2024-12-13T14:34:39.578624416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:39.582014 env[1432]: time="2024-12-13T14:34:39.581967835Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:39.582721 env[1432]: time="2024-12-13T14:34:39.582681339Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:34:39.585643 env[1432]: time="2024-12-13T14:34:39.585608957Z" level=info msg="CreateContainer within sandbox \"1a2f8bac5ef75ae613709a5f5bba774d57845d3d8302c6ae3a7095ed8b3db2ba\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:34:39.614441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636632309.mount: 
Deactivated successfully. Dec 13 14:34:39.624203 env[1432]: time="2024-12-13T14:34:39.624140083Z" level=info msg="CreateContainer within sandbox \"1a2f8bac5ef75ae613709a5f5bba774d57845d3d8302c6ae3a7095ed8b3db2ba\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d9298bd832f2b0fc31d59fb3b318001e7880cabf68e9dde96d003a4ded4b65ae\"" Dec 13 14:34:39.625142 env[1432]: time="2024-12-13T14:34:39.625102688Z" level=info msg="StartContainer for \"d9298bd832f2b0fc31d59fb3b318001e7880cabf68e9dde96d003a4ded4b65ae\"" Dec 13 14:34:39.650541 systemd[1]: Started cri-containerd-d9298bd832f2b0fc31d59fb3b318001e7880cabf68e9dde96d003a4ded4b65ae.scope. Dec 13 14:34:39.690980 env[1432]: time="2024-12-13T14:34:39.690896174Z" level=info msg="StartContainer for \"d9298bd832f2b0fc31d59fb3b318001e7880cabf68e9dde96d003a4ded4b65ae\" returns successfully" Dec 13 14:34:40.231439 kubelet[1913]: E1213 14:34:40.231367 1913 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:40.262302 kubelet[1913]: E1213 14:34:40.262225 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:40.454209 kubelet[1913]: I1213 14:34:40.454129 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-z8bsx" podStartSLOduration=7.943239975 podStartE2EDuration="12.454108194s" podCreationTimestamp="2024-12-13 14:34:28 +0000 UTC" firstStartedPulling="2024-12-13 14:34:35.073298329 +0000 UTC m=+35.999598824" lastFinishedPulling="2024-12-13 14:34:39.584166548 +0000 UTC m=+40.510467043" observedRunningTime="2024-12-13 14:34:40.45336749 +0000 UTC m=+41.379667985" watchObservedRunningTime="2024-12-13 14:34:40.454108194 +0000 UTC m=+41.380408689" Dec 13 14:34:41.262846 kubelet[1913]: E1213 14:34:41.262770 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:34:42.263365 kubelet[1913]: E1213 14:34:42.263305 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:43.265148 kubelet[1913]: E1213 14:34:43.265071 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:44.265359 kubelet[1913]: E1213 14:34:44.265282 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:45.265747 kubelet[1913]: E1213 14:34:45.265674 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:46.266505 kubelet[1913]: E1213 14:34:46.266431 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:47.267489 kubelet[1913]: E1213 14:34:47.267410 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:47.366196 systemd[1]: Created slice kubepods-besteffort-podbddff27c_adc3_4c07_81a7_f44bbd5e029d.slice. 
Dec 13 14:34:47.416217 kubelet[1913]: I1213 14:34:47.416123 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/bddff27c-adc3-4c07-81a7-f44bbd5e029d-data\") pod \"nfs-server-provisioner-0\" (UID: \"bddff27c-adc3-4c07-81a7-f44bbd5e029d\") " pod="default/nfs-server-provisioner-0" Dec 13 14:34:47.416217 kubelet[1913]: I1213 14:34:47.416227 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n55qx\" (UniqueName: \"kubernetes.io/projected/bddff27c-adc3-4c07-81a7-f44bbd5e029d-kube-api-access-n55qx\") pod \"nfs-server-provisioner-0\" (UID: \"bddff27c-adc3-4c07-81a7-f44bbd5e029d\") " pod="default/nfs-server-provisioner-0" Dec 13 14:34:47.670411 env[1432]: time="2024-12-13T14:34:47.670345137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:bddff27c-adc3-4c07-81a7-f44bbd5e029d,Namespace:default,Attempt:0,}" Dec 13 14:34:47.731390 systemd-networkd[1600]: lxcea2d786ca66d: Link UP Dec 13 14:34:47.738936 kernel: eth0: renamed from tmp4e3b1 Dec 13 14:34:47.753494 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:34:47.753672 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcea2d786ca66d: link becomes ready Dec 13 14:34:47.754377 systemd-networkd[1600]: lxcea2d786ca66d: Gained carrier Dec 13 14:34:47.905487 env[1432]: time="2024-12-13T14:34:47.905391996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:47.905487 env[1432]: time="2024-12-13T14:34:47.905441896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:47.905487 env[1432]: time="2024-12-13T14:34:47.905455096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:47.906101 env[1432]: time="2024-12-13T14:34:47.906022799Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e3b17b5cec3b33b54d63d5aa89512c4ea72b8c67787a7924c0bb415f64e33cc pid=3086 runtime=io.containerd.runc.v2 Dec 13 14:34:47.932715 systemd[1]: Started cri-containerd-4e3b17b5cec3b33b54d63d5aa89512c4ea72b8c67787a7924c0bb415f64e33cc.scope. Dec 13 14:34:47.982299 env[1432]: time="2024-12-13T14:34:47.982247475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:bddff27c-adc3-4c07-81a7-f44bbd5e029d,Namespace:default,Attempt:0,} returns sandbox id \"4e3b17b5cec3b33b54d63d5aa89512c4ea72b8c67787a7924c0bb415f64e33cc\"" Dec 13 14:34:47.984758 env[1432]: time="2024-12-13T14:34:47.984720887Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:34:48.268143 kubelet[1913]: E1213 14:34:48.267953 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:48.529557 systemd[1]: run-containerd-runc-k8s.io-4e3b17b5cec3b33b54d63d5aa89512c4ea72b8c67787a7924c0bb415f64e33cc-runc.GQBjPL.mount: Deactivated successfully. Dec 13 14:34:49.269230 kubelet[1913]: E1213 14:34:49.269155 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:49.547141 systemd-networkd[1600]: lxcea2d786ca66d: Gained IPv6LL Dec 13 14:34:50.269980 kubelet[1913]: E1213 14:34:50.269893 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:50.768953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948591117.mount: Deactivated successfully. 
Dec 13 14:34:51.270932 kubelet[1913]: E1213 14:34:51.270799 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:52.271502 kubelet[1913]: E1213 14:34:52.271433 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:52.867660 env[1432]: time="2024-12-13T14:34:52.867582070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:52.874073 env[1432]: time="2024-12-13T14:34:52.874018599Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:52.879192 env[1432]: time="2024-12-13T14:34:52.879144022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:52.883322 env[1432]: time="2024-12-13T14:34:52.883272040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:52.884084 env[1432]: time="2024-12-13T14:34:52.884045643Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 14:34:52.887131 env[1432]: time="2024-12-13T14:34:52.887099157Z" level=info msg="CreateContainer within sandbox \"4e3b17b5cec3b33b54d63d5aa89512c4ea72b8c67787a7924c0bb415f64e33cc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:34:52.923965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523491576.mount: Deactivated successfully.
Dec 13 14:34:52.947526 env[1432]: time="2024-12-13T14:34:52.947455326Z" level=info msg="CreateContainer within sandbox \"4e3b17b5cec3b33b54d63d5aa89512c4ea72b8c67787a7924c0bb415f64e33cc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"8e337645a85046e435c7d296854a89af7bada4d5e46fadfa5af605487cc2cbeb\""
Dec 13 14:34:52.948388 env[1432]: time="2024-12-13T14:34:52.948264930Z" level=info msg="StartContainer for \"8e337645a85046e435c7d296854a89af7bada4d5e46fadfa5af605487cc2cbeb\""
Dec 13 14:34:52.977786 systemd[1]: Started cri-containerd-8e337645a85046e435c7d296854a89af7bada4d5e46fadfa5af605487cc2cbeb.scope.
Dec 13 14:34:53.012741 env[1432]: time="2024-12-13T14:34:53.012677716Z" level=info msg="StartContainer for \"8e337645a85046e435c7d296854a89af7bada4d5e46fadfa5af605487cc2cbeb\" returns successfully"
Dec 13 14:34:53.272015 kubelet[1913]: E1213 14:34:53.271796 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:53.493882 kubelet[1913]: I1213 14:34:53.493805 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.5925707519999999 podStartE2EDuration="6.493784117s" podCreationTimestamp="2024-12-13 14:34:47 +0000 UTC" firstStartedPulling="2024-12-13 14:34:47.984104384 +0000 UTC m=+48.910404879" lastFinishedPulling="2024-12-13 14:34:52.885317749 +0000 UTC m=+53.811618244" observedRunningTime="2024-12-13 14:34:53.493280415 +0000 UTC m=+54.419580910" watchObservedRunningTime="2024-12-13 14:34:53.493784117 +0000 UTC m=+54.420084712"
Dec 13 14:34:54.272945 kubelet[1913]: E1213 14:34:54.272854 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:55.273514 kubelet[1913]: E1213 14:34:55.273447 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:56.274187 kubelet[1913]: E1213 14:34:56.274119 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:57.274793 kubelet[1913]: E1213 14:34:57.274730 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:58.275901 kubelet[1913]: E1213 14:34:58.275830 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:59.276737 kubelet[1913]: E1213 14:34:59.276662 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:00.230806 kubelet[1913]: E1213 14:35:00.230722 1913 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:00.277545 kubelet[1913]: E1213 14:35:00.277496 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:01.278815 kubelet[1913]: E1213 14:35:01.278750 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:02.279469 kubelet[1913]: E1213 14:35:02.279403 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:02.299634 systemd[1]: Created slice kubepods-besteffort-podaf463f9a_e16b_43bf_bb58_55510e50a73a.slice.
Dec 13 14:35:02.415073 kubelet[1913]: I1213 14:35:02.415013 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6hr\" (UniqueName: \"kubernetes.io/projected/af463f9a-e16b-43bf-bb58-55510e50a73a-kube-api-access-px6hr\") pod \"test-pod-1\" (UID: \"af463f9a-e16b-43bf-bb58-55510e50a73a\") " pod="default/test-pod-1"
Dec 13 14:35:02.415416 kubelet[1913]: I1213 14:35:02.415379 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-533216a5-1018-43fb-aed7-79d613cc325e\" (UniqueName: \"kubernetes.io/nfs/af463f9a-e16b-43bf-bb58-55510e50a73a-pvc-533216a5-1018-43fb-aed7-79d613cc325e\") pod \"test-pod-1\" (UID: \"af463f9a-e16b-43bf-bb58-55510e50a73a\") " pod="default/test-pod-1"
Dec 13 14:35:02.566940 kernel: FS-Cache: Loaded
Dec 13 14:35:02.624472 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:35:02.624719 kernel: RPC: Registered udp transport module.
Dec 13 14:35:02.624749 kernel: RPC: Registered tcp transport module.
Dec 13 14:35:02.630250 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:35:02.707958 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:35:02.888674 kernel: NFS: Registering the id_resolver key type
Dec 13 14:35:02.888891 kernel: Key type id_resolver registered
Dec 13 14:35:02.888932 kernel: Key type id_legacy registered
Dec 13 14:35:03.015835 nfsidmap[3204]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-dd62f2eb18'
Dec 13 14:35:03.055985 nfsidmap[3205]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-dd62f2eb18'
Dec 13 14:35:03.206214 env[1432]: time="2024-12-13T14:35:03.206129736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af463f9a-e16b-43bf-bb58-55510e50a73a,Namespace:default,Attempt:0,}"
Dec 13 14:35:03.276349 systemd-networkd[1600]: lxc0bc9a1c0d2d7: Link UP
Dec 13 14:35:03.279981 kubelet[1913]: E1213 14:35:03.279884 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:03.289053 kernel: eth0: renamed from tmp37c3b
Dec 13 14:35:03.301916 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:35:03.302070 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0bc9a1c0d2d7: link becomes ready
Dec 13 14:35:03.302646 systemd-networkd[1600]: lxc0bc9a1c0d2d7: Gained carrier
Dec 13 14:35:03.510514 env[1432]: time="2024-12-13T14:35:03.510396244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:35:03.510514 env[1432]: time="2024-12-13T14:35:03.510449144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:35:03.510514 env[1432]: time="2024-12-13T14:35:03.510462844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:35:03.511085 env[1432]: time="2024-12-13T14:35:03.511020346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37c3b7e27307816914b85a2875f6928863a28a956433129b7591e20ff4f8d77d pid=3234 runtime=io.containerd.runc.v2
Dec 13 14:35:03.542336 systemd[1]: run-containerd-runc-k8s.io-37c3b7e27307816914b85a2875f6928863a28a956433129b7591e20ff4f8d77d-runc.kHnImb.mount: Deactivated successfully.
Dec 13 14:35:03.549108 systemd[1]: Started cri-containerd-37c3b7e27307816914b85a2875f6928863a28a956433129b7591e20ff4f8d77d.scope.
Dec 13 14:35:03.599609 env[1432]: time="2024-12-13T14:35:03.599549868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:af463f9a-e16b-43bf-bb58-55510e50a73a,Namespace:default,Attempt:0,} returns sandbox id \"37c3b7e27307816914b85a2875f6928863a28a956433129b7591e20ff4f8d77d\""
Dec 13 14:35:03.601619 env[1432]: time="2024-12-13T14:35:03.601577076Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:35:03.926867 env[1432]: time="2024-12-13T14:35:03.926798860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:03.933174 env[1432]: time="2024-12-13T14:35:03.933123083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:03.937725 env[1432]: time="2024-12-13T14:35:03.937652599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:03.942250 env[1432]: time="2024-12-13T14:35:03.942206216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:03.942975 env[1432]: time="2024-12-13T14:35:03.942934318Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:35:03.946217 env[1432]: time="2024-12-13T14:35:03.946185030Z" level=info msg="CreateContainer within sandbox \"37c3b7e27307816914b85a2875f6928863a28a956433129b7591e20ff4f8d77d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:35:03.985792 env[1432]: time="2024-12-13T14:35:03.985725574Z" level=info msg="CreateContainer within sandbox \"37c3b7e27307816914b85a2875f6928863a28a956433129b7591e20ff4f8d77d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"dc68f995e3f68ee9a55e15e82a490485f6b5d9e97bc21f31fc90a9366d344065\""
Dec 13 14:35:03.986559 env[1432]: time="2024-12-13T14:35:03.986517877Z" level=info msg="StartContainer for \"dc68f995e3f68ee9a55e15e82a490485f6b5d9e97bc21f31fc90a9366d344065\""
Dec 13 14:35:04.007104 systemd[1]: Started cri-containerd-dc68f995e3f68ee9a55e15e82a490485f6b5d9e97bc21f31fc90a9366d344065.scope.
Dec 13 14:35:04.056353 env[1432]: time="2024-12-13T14:35:04.056283028Z" level=info msg="StartContainer for \"dc68f995e3f68ee9a55e15e82a490485f6b5d9e97bc21f31fc90a9366d344065\" returns successfully"
Dec 13 14:35:04.280895 kubelet[1913]: E1213 14:35:04.280714 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:04.512484 kubelet[1913]: I1213 14:35:04.512413 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.169403211 podStartE2EDuration="16.51239136s" podCreationTimestamp="2024-12-13 14:34:48 +0000 UTC" firstStartedPulling="2024-12-13 14:35:03.601189874 +0000 UTC m=+64.527490369" lastFinishedPulling="2024-12-13 14:35:03.944177923 +0000 UTC m=+64.870478518" observedRunningTime="2024-12-13 14:35:04.512223659 +0000 UTC m=+65.438524154" watchObservedRunningTime="2024-12-13 14:35:04.51239136 +0000 UTC m=+65.438691855"
Dec 13 14:35:04.587141 systemd-networkd[1600]: lxc0bc9a1c0d2d7: Gained IPv6LL
Dec 13 14:35:05.281052 kubelet[1913]: E1213 14:35:05.280928 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:06.281461 kubelet[1913]: E1213 14:35:06.281389 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:07.282713 kubelet[1913]: E1213 14:35:07.282639 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:08.283992 kubelet[1913]: E1213 14:35:08.283894 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:09.285108 kubelet[1913]: E1213 14:35:09.285032 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:10.286276 kubelet[1913]: E1213 14:35:10.286201 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:11.287200 kubelet[1913]: E1213 14:35:11.287124 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:12.028452 systemd[1]: run-containerd-runc-k8s.io-b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670-runc.j1ntrm.mount: Deactivated successfully.
Dec 13 14:35:12.051237 env[1432]: time="2024-12-13T14:35:12.051152771Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:35:12.057218 env[1432]: time="2024-12-13T14:35:12.057166390Z" level=info msg="StopContainer for \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\" with timeout 2 (s)"
Dec 13 14:35:12.057525 env[1432]: time="2024-12-13T14:35:12.057488091Z" level=info msg="Stop container \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\" with signal terminated"
Dec 13 14:35:12.069054 systemd-networkd[1600]: lxc_health: Link DOWN
Dec 13 14:35:12.069064 systemd-networkd[1600]: lxc_health: Lost carrier
Dec 13 14:35:12.091342 systemd[1]: cri-containerd-b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670.scope: Deactivated successfully.
Dec 13 14:35:12.091782 systemd[1]: cri-containerd-b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670.scope: Consumed 7.728s CPU time.
Dec 13 14:35:12.121396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670-rootfs.mount: Deactivated successfully.
Dec 13 14:35:12.366656 kubelet[1913]: E1213 14:35:12.287388 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:13.288080 kubelet[1913]: E1213 14:35:13.287994 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:14.069897 env[1432]: time="2024-12-13T14:35:14.069797189Z" level=info msg="Kill container \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\""
Dec 13 14:35:14.288764 kubelet[1913]: E1213 14:35:14.288688 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:15.217365 env[1432]: time="2024-12-13T14:35:15.217270898Z" level=info msg="shim disconnected" id=b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670
Dec 13 14:35:15.217365 env[1432]: time="2024-12-13T14:35:15.217353598Z" level=warning msg="cleaning up after shim disconnected" id=b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670 namespace=k8s.io
Dec 13 14:35:15.217365 env[1432]: time="2024-12-13T14:35:15.217370298Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:15.228410 env[1432]: time="2024-12-13T14:35:15.228345531Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3372 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:15.235414 env[1432]: time="2024-12-13T14:35:15.235328552Z" level=info msg="StopContainer for \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\" returns successfully"
Dec 13 14:35:15.236248 env[1432]: time="2024-12-13T14:35:15.236207755Z" level=info msg="StopPodSandbox for \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\""
Dec 13 14:35:15.236401 env[1432]: time="2024-12-13T14:35:15.236289755Z" level=info msg="Container to stop \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:15.236401 env[1432]: time="2024-12-13T14:35:15.236311855Z" level=info msg="Container to stop \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:15.236401 env[1432]: time="2024-12-13T14:35:15.236327255Z" level=info msg="Container to stop \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:15.236401 env[1432]: time="2024-12-13T14:35:15.236342755Z" level=info msg="Container to stop \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:15.236401 env[1432]: time="2024-12-13T14:35:15.236358255Z" level=info msg="Container to stop \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:15.239192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33-shm.mount: Deactivated successfully.
Dec 13 14:35:15.249444 systemd[1]: cri-containerd-6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33.scope: Deactivated successfully.
Dec 13 14:35:15.281193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33-rootfs.mount: Deactivated successfully.
Dec 13 14:35:15.289343 kubelet[1913]: E1213 14:35:15.289276 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:15.291080 env[1432]: time="2024-12-13T14:35:15.291018421Z" level=info msg="shim disconnected" id=6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33
Dec 13 14:35:15.291816 env[1432]: time="2024-12-13T14:35:15.291786423Z" level=warning msg="cleaning up after shim disconnected" id=6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33 namespace=k8s.io
Dec 13 14:35:15.291990 env[1432]: time="2024-12-13T14:35:15.291972023Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:15.302973 env[1432]: time="2024-12-13T14:35:15.302900257Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3403 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:15.303437 env[1432]: time="2024-12-13T14:35:15.303403358Z" level=info msg="TearDown network for sandbox \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" successfully"
Dec 13 14:35:15.303538 env[1432]: time="2024-12-13T14:35:15.303435658Z" level=info msg="StopPodSandbox for \"6ffcd256373bff6ca0fc050ec475c9f24ee2f1c24a31ea609267b63b19772b33\" returns successfully"
Dec 13 14:35:15.367833 kubelet[1913]: E1213 14:35:15.367752 1913 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:35:15.400612 kubelet[1913]: I1213 14:35:15.400535 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-xtables-lock\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.400612 kubelet[1913]: I1213 14:35:15.400626 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c679962a-ec7d-419d-89d2-5161175b6777-cilium-config-path\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401045 kubelet[1913]: I1213 14:35:15.400658 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-hostproc\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401045 kubelet[1913]: I1213 14:35:15.400685 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-etc-cni-netd\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401045 kubelet[1913]: I1213 14:35:15.400708 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-host-proc-sys-net\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401045 kubelet[1913]: I1213 14:35:15.400736 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cilium-run\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401045 kubelet[1913]: I1213 14:35:15.400763 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85vbg\" (UniqueName: \"kubernetes.io/projected/c679962a-ec7d-419d-89d2-5161175b6777-kube-api-access-85vbg\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401045 kubelet[1913]: I1213 14:35:15.400791 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-host-proc-sys-kernel\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401356 kubelet[1913]: I1213 14:35:15.400818 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c679962a-ec7d-419d-89d2-5161175b6777-clustermesh-secrets\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401356 kubelet[1913]: I1213 14:35:15.400846 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c679962a-ec7d-419d-89d2-5161175b6777-hubble-tls\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401356 kubelet[1913]: I1213 14:35:15.400873 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-lib-modules\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401356 kubelet[1913]: I1213 14:35:15.400897 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-bpf-maps\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401356 kubelet[1913]: I1213 14:35:15.400963 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cni-path\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401356 kubelet[1913]: I1213 14:35:15.400992 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cilium-cgroup\") pod \"c679962a-ec7d-419d-89d2-5161175b6777\" (UID: \"c679962a-ec7d-419d-89d2-5161175b6777\") "
Dec 13 14:35:15.401804 kubelet[1913]: I1213 14:35:15.401116 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.401804 kubelet[1913]: I1213 14:35:15.401178 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.402037 kubelet[1913]: I1213 14:35:15.402000 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.402174 kubelet[1913]: I1213 14:35:15.402152 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-hostproc" (OuterVolumeSpecName: "hostproc") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.402284 kubelet[1913]: I1213 14:35:15.402268 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.402393 kubelet[1913]: I1213 14:35:15.402375 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.402495 kubelet[1913]: I1213 14:35:15.402479 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.404792 kubelet[1913]: I1213 14:35:15.404740 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c679962a-ec7d-419d-89d2-5161175b6777-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:35:15.414380 kubelet[1913]: I1213 14:35:15.409230 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c679962a-ec7d-419d-89d2-5161175b6777-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:35:15.414380 kubelet[1913]: I1213 14:35:15.414197 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c679962a-ec7d-419d-89d2-5161175b6777-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:15.414380 kubelet[1913]: I1213 14:35:15.414268 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.414380 kubelet[1913]: I1213 14:35:15.414293 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.414380 kubelet[1913]: I1213 14:35:15.414312 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cni-path" (OuterVolumeSpecName: "cni-path") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:15.411202 systemd[1]: var-lib-kubelet-pods-c679962a\x2dec7d\x2d419d\x2d89d2\x2d5161175b6777-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d85vbg.mount: Deactivated successfully.
Dec 13 14:35:15.411369 systemd[1]: var-lib-kubelet-pods-c679962a\x2dec7d\x2d419d\x2d89d2\x2d5161175b6777-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:35:15.417123 kubelet[1913]: I1213 14:35:15.417047 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c679962a-ec7d-419d-89d2-5161175b6777-kube-api-access-85vbg" (OuterVolumeSpecName: "kube-api-access-85vbg") pod "c679962a-ec7d-419d-89d2-5161175b6777" (UID: "c679962a-ec7d-419d-89d2-5161175b6777"). InnerVolumeSpecName "kube-api-access-85vbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:15.417719 systemd[1]: var-lib-kubelet-pods-c679962a\x2dec7d\x2d419d\x2d89d2\x2d5161175b6777-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:35:15.501529 kubelet[1913]: I1213 14:35:15.501268 1913 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c679962a-ec7d-419d-89d2-5161175b6777-cilium-config-path\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.501529 kubelet[1913]: I1213 14:35:15.501395 1913 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-hostproc\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.501529 kubelet[1913]: I1213 14:35:15.501415 1913 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-etc-cni-netd\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.501529 kubelet[1913]: I1213 14:35:15.501430 1913 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-host-proc-sys-net\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.501529 kubelet[1913]: I1213 14:35:15.501443 1913 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-xtables-lock\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.501529 kubelet[1913]: I1213 14:35:15.501456 1913 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cilium-run\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.501529 kubelet[1913]: I1213 14:35:15.501469 1913 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-85vbg\" (UniqueName: \"kubernetes.io/projected/c679962a-ec7d-419d-89d2-5161175b6777-kube-api-access-85vbg\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.501529 kubelet[1913]: I1213 14:35:15.501486 1913 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c679962a-ec7d-419d-89d2-5161175b6777-hubble-tls\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.502218 kubelet[1913]: I1213 14:35:15.501500 1913 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-host-proc-sys-kernel\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.502329 kubelet[1913]: I1213 14:35:15.502315 1913 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c679962a-ec7d-419d-89d2-5161175b6777-clustermesh-secrets\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.502418 kubelet[1913]: I1213 14:35:15.502405 1913 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-bpf-maps\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.502505 kubelet[1913]: I1213 14:35:15.502493 1913 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cni-path\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.502596 kubelet[1913]: I1213 14:35:15.502584 1913 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-cilium-cgroup\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.502687 kubelet[1913]: I1213 14:35:15.502675 1913 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c679962a-ec7d-419d-89d2-5161175b6777-lib-modules\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:15.535847 kubelet[1913]: I1213 14:35:15.535808 1913 scope.go:117] "RemoveContainer" containerID="b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670"
Dec 13 14:35:15.537946 env[1432]: time="2024-12-13T14:35:15.537725466Z" level=info msg="RemoveContainer for \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\""
Dec 13 14:35:15.541931 systemd[1]: Removed slice kubepods-burstable-podc679962a_ec7d_419d_89d2_5161175b6777.slice.
Dec 13 14:35:15.542082 systemd[1]: kubepods-burstable-podc679962a_ec7d_419d_89d2_5161175b6777.slice: Consumed 7.841s CPU time.
Dec 13 14:35:15.545816 env[1432]: time="2024-12-13T14:35:15.545767391Z" level=info msg="RemoveContainer for \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\" returns successfully"
Dec 13 14:35:15.546300 kubelet[1913]: I1213 14:35:15.546125 1913 scope.go:117] "RemoveContainer" containerID="d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873"
Dec 13 14:35:15.547764 env[1432]: time="2024-12-13T14:35:15.547372595Z" level=info msg="RemoveContainer for \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\""
Dec 13 14:35:15.556982 env[1432]: time="2024-12-13T14:35:15.556846324Z" level=info msg="RemoveContainer for \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\" returns successfully"
Dec 13 14:35:15.557223 kubelet[1913]: I1213 14:35:15.557193 1913 scope.go:117] "RemoveContainer" containerID="0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41"
Dec 13 14:35:15.558836 env[1432]: time="2024-12-13T14:35:15.558802130Z" level=info msg="RemoveContainer for \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\""
Dec 13 14:35:15.567213 env[1432]: time="2024-12-13T14:35:15.567167455Z" level=info msg="RemoveContainer for \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\" returns successfully"
Dec 13 14:35:15.567479 kubelet[1913]: I1213 14:35:15.567446 1913 scope.go:117] "RemoveContainer" containerID="c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f"
Dec 13 14:35:15.568623 env[1432]: time="2024-12-13T14:35:15.568590460Z" level=info msg="RemoveContainer for \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\""
Dec 13 14:35:15.578377 env[1432]: time="2024-12-13T14:35:15.578325789Z" level=info msg="RemoveContainer for \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\" returns successfully"
Dec 13 14:35:15.578718 kubelet[1913]: I1213 14:35:15.578680 1913 scope.go:117] "RemoveContainer" containerID="c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0"
Dec 13 14:35:15.580296 env[1432]: time="2024-12-13T14:35:15.579993094Z" level=info msg="RemoveContainer for \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\""
Dec 13 14:35:15.587738 env[1432]: time="2024-12-13T14:35:15.587695517Z" level=info msg="RemoveContainer for \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\" returns successfully"
Dec 13 14:35:15.588016 kubelet[1913]: I1213 14:35:15.587975 1913 scope.go:117] "RemoveContainer" containerID="b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670"
Dec 13 14:35:15.588332 env[1432]: time="2024-12-13T14:35:15.588233419Z" level=error msg="ContainerStatus for \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\": not found"
Dec 13 14:35:15.588541 kubelet[1913]: E1213 14:35:15.588516 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\": not found" containerID="b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670"
Dec 13 14:35:15.588671 kubelet[1913]: I1213 14:35:15.588560 1913 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"containerd","ID":"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670"} err="failed to get container status \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\": rpc error: code = NotFound desc = an error occurred when try to find container \"b125d270e5aaad9b47709e3cfe78180130cb3622b2e07f8ecf5ac4d1f099b670\": not found" Dec 13 14:35:15.588742 kubelet[1913]: I1213 14:35:15.588680 1913 scope.go:117] "RemoveContainer" containerID="d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873" Dec 13 14:35:15.589007 env[1432]: time="2024-12-13T14:35:15.588953521Z" level=error msg="ContainerStatus for \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\": not found" Dec 13 14:35:15.589165 kubelet[1913]: E1213 14:35:15.589145 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\": not found" containerID="d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873" Dec 13 14:35:15.589240 kubelet[1913]: I1213 14:35:15.589184 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873"} err="failed to get container status \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\": rpc error: code = NotFound desc = an error occurred when try to find container \"d69f310d98587881571d98351b6375a2686ad396d2722b04518a77480059c873\": not found" Dec 13 14:35:15.589240 kubelet[1913]: I1213 14:35:15.589210 1913 scope.go:117] "RemoveContainer" containerID="0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41" Dec 13 14:35:15.589441 env[1432]: 
time="2024-12-13T14:35:15.589393422Z" level=error msg="ContainerStatus for \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\": not found" Dec 13 14:35:15.589619 kubelet[1913]: E1213 14:35:15.589599 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\": not found" containerID="0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41" Dec 13 14:35:15.589699 kubelet[1913]: I1213 14:35:15.589622 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41"} err="failed to get container status \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\": rpc error: code = NotFound desc = an error occurred when try to find container \"0032a9965a0dc0735025af287b702c29c23e2e3b53b37dc3d9a3b25b1ed45e41\": not found" Dec 13 14:35:15.589699 kubelet[1913]: I1213 14:35:15.589643 1913 scope.go:117] "RemoveContainer" containerID="c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f" Dec 13 14:35:15.589868 env[1432]: time="2024-12-13T14:35:15.589823624Z" level=error msg="ContainerStatus for \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\": not found" Dec 13 14:35:15.590002 kubelet[1913]: E1213 14:35:15.589980 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\": not found" 
containerID="c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f" Dec 13 14:35:15.590084 kubelet[1913]: I1213 14:35:15.590008 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f"} err="failed to get container status \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c31898ab4a10065f71570f8a5eabce0c3abce4a5ea8d1a00ee3fe13332b5459f\": not found" Dec 13 14:35:15.590084 kubelet[1913]: I1213 14:35:15.590030 1913 scope.go:117] "RemoveContainer" containerID="c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0" Dec 13 14:35:15.590310 env[1432]: time="2024-12-13T14:35:15.590261725Z" level=error msg="ContainerStatus for \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\": not found" Dec 13 14:35:15.590424 kubelet[1913]: E1213 14:35:15.590400 1913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\": not found" containerID="c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0" Dec 13 14:35:15.590506 kubelet[1913]: I1213 14:35:15.590428 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0"} err="failed to get container status \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4b4347ea6faf0b9b2684dc61abb609ca1bae92adc16c218b1cb1cbbc5f613a0\": not found" Dec 13 14:35:16.289962 
kubelet[1913]: E1213 14:35:16.289865 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:16.292230 kubelet[1913]: E1213 14:35:16.292189 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c679962a-ec7d-419d-89d2-5161175b6777" containerName="mount-cgroup" Dec 13 14:35:16.292230 kubelet[1913]: E1213 14:35:16.292223 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c679962a-ec7d-419d-89d2-5161175b6777" containerName="clean-cilium-state" Dec 13 14:35:16.292450 kubelet[1913]: E1213 14:35:16.292258 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c679962a-ec7d-419d-89d2-5161175b6777" containerName="apply-sysctl-overwrites" Dec 13 14:35:16.292450 kubelet[1913]: E1213 14:35:16.292277 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c679962a-ec7d-419d-89d2-5161175b6777" containerName="mount-bpf-fs" Dec 13 14:35:16.292450 kubelet[1913]: E1213 14:35:16.292287 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c679962a-ec7d-419d-89d2-5161175b6777" containerName="cilium-agent" Dec 13 14:35:16.292450 kubelet[1913]: I1213 14:35:16.292331 1913 memory_manager.go:354] "RemoveStaleState removing state" podUID="c679962a-ec7d-419d-89d2-5161175b6777" containerName="cilium-agent" Dec 13 14:35:16.298142 systemd[1]: Created slice kubepods-besteffort-pod16499b7d_e01d_4393_b9ae_30a0cc68f6e8.slice. 
Dec 13 14:35:16.308389 kubelet[1913]: I1213 14:35:16.308342 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn887\" (UniqueName: \"kubernetes.io/projected/16499b7d-e01d-4393-b9ae-30a0cc68f6e8-kube-api-access-dn887\") pod \"cilium-operator-5d85765b45-nxh8q\" (UID: \"16499b7d-e01d-4393-b9ae-30a0cc68f6e8\") " pod="kube-system/cilium-operator-5d85765b45-nxh8q" Dec 13 14:35:16.308389 kubelet[1913]: I1213 14:35:16.308389 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16499b7d-e01d-4393-b9ae-30a0cc68f6e8-cilium-config-path\") pod \"cilium-operator-5d85765b45-nxh8q\" (UID: \"16499b7d-e01d-4393-b9ae-30a0cc68f6e8\") " pod="kube-system/cilium-operator-5d85765b45-nxh8q" Dec 13 14:35:16.315518 kubelet[1913]: I1213 14:35:16.315471 1913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c679962a-ec7d-419d-89d2-5161175b6777" path="/var/lib/kubelet/pods/c679962a-ec7d-419d-89d2-5161175b6777/volumes" Dec 13 14:35:16.333049 systemd[1]: Created slice kubepods-burstable-pod5ca6c2aa_88ca_45a2_b0f4_8226de3c64b5.slice. 
Dec 13 14:35:16.409600 kubelet[1913]: I1213 14:35:16.409533 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-bpf-maps\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.409600 kubelet[1913]: I1213 14:35:16.409613 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-cgroup\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410020 kubelet[1913]: I1213 14:35:16.409674 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cni-path\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410020 kubelet[1913]: I1213 14:35:16.409701 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-xtables-lock\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410020 kubelet[1913]: I1213 14:35:16.409735 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-config-path\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410020 kubelet[1913]: I1213 14:35:16.409761 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-host-proc-sys-kernel\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410020 kubelet[1913]: I1213 14:35:16.409790 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtw5v\" (UniqueName: \"kubernetes.io/projected/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-kube-api-access-dtw5v\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410020 kubelet[1913]: I1213 14:35:16.409838 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-run\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410332 kubelet[1913]: I1213 14:35:16.409862 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-lib-modules\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410332 kubelet[1913]: I1213 14:35:16.409887 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-clustermesh-secrets\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410332 kubelet[1913]: I1213 14:35:16.409943 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-host-proc-sys-net\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410332 kubelet[1913]: I1213 14:35:16.409977 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-hostproc\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410332 kubelet[1913]: I1213 14:35:16.410006 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-etc-cni-netd\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410332 kubelet[1913]: I1213 14:35:16.410032 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-ipsec-secrets\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.410605 kubelet[1913]: I1213 14:35:16.410061 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-hubble-tls\") pod \"cilium-w4zc9\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") " pod="kube-system/cilium-w4zc9" Dec 13 14:35:16.602799 env[1432]: time="2024-12-13T14:35:16.602737560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nxh8q,Uid:16499b7d-e01d-4393-b9ae-30a0cc68f6e8,Namespace:kube-system,Attempt:0,}" Dec 13 14:35:16.635758 env[1432]: time="2024-12-13T14:35:16.635628458Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:35:16.635758 env[1432]: time="2024-12-13T14:35:16.635709258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:35:16.635758 env[1432]: time="2024-12-13T14:35:16.635725058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:35:16.636471 env[1432]: time="2024-12-13T14:35:16.636407561Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f68d39f78b94bef90cf9397e037cba881603ca7c3e79053bc592dfd4627d9de pid=3430 runtime=io.containerd.runc.v2 Dec 13 14:35:16.641185 env[1432]: time="2024-12-13T14:35:16.641131675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4zc9,Uid:5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5,Namespace:kube-system,Attempt:0,}" Dec 13 14:35:16.654171 systemd[1]: Started cri-containerd-9f68d39f78b94bef90cf9397e037cba881603ca7c3e79053bc592dfd4627d9de.scope. Dec 13 14:35:16.692953 env[1432]: time="2024-12-13T14:35:16.692821429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:35:16.693241 env[1432]: time="2024-12-13T14:35:16.693207630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:35:16.693397 env[1432]: time="2024-12-13T14:35:16.693373030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:35:16.693788 env[1432]: time="2024-12-13T14:35:16.693737431Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635 pid=3467 runtime=io.containerd.runc.v2 Dec 13 14:35:16.713020 systemd[1]: Started cri-containerd-c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635.scope. Dec 13 14:35:16.744214 env[1432]: time="2024-12-13T14:35:16.744025381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nxh8q,Uid:16499b7d-e01d-4393-b9ae-30a0cc68f6e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f68d39f78b94bef90cf9397e037cba881603ca7c3e79053bc592dfd4627d9de\"" Dec 13 14:35:16.753318 env[1432]: time="2024-12-13T14:35:16.753255309Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:35:16.767530 env[1432]: time="2024-12-13T14:35:16.767473851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4zc9,Uid:5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635\"" Dec 13 14:35:16.770993 env[1432]: time="2024-12-13T14:35:16.770952962Z" level=info msg="CreateContainer within sandbox \"c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:35:16.817153 env[1432]: time="2024-12-13T14:35:16.817082999Z" level=info msg="CreateContainer within sandbox \"c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981\"" Dec 13 14:35:16.818201 env[1432]: time="2024-12-13T14:35:16.818163702Z" level=info 
msg="StartContainer for \"a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981\"" Dec 13 14:35:16.839522 systemd[1]: Started cri-containerd-a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981.scope. Dec 13 14:35:16.858720 systemd[1]: cri-containerd-a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981.scope: Deactivated successfully. Dec 13 14:35:16.921954 env[1432]: time="2024-12-13T14:35:16.921858411Z" level=info msg="shim disconnected" id=a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981 Dec 13 14:35:16.921954 env[1432]: time="2024-12-13T14:35:16.921952512Z" level=warning msg="cleaning up after shim disconnected" id=a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981 namespace=k8s.io Dec 13 14:35:16.921954 env[1432]: time="2024-12-13T14:35:16.921965312Z" level=info msg="cleaning up dead shim" Dec 13 14:35:16.933441 env[1432]: time="2024-12-13T14:35:16.933369146Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3532 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:35:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:35:16.934006 env[1432]: time="2024-12-13T14:35:16.933845647Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Dec 13 14:35:16.936050 env[1432]: time="2024-12-13T14:35:16.935984653Z" level=error msg="Failed to pipe stdout of container \"a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981\"" error="reading from a closed fifo" Dec 13 14:35:16.937095 env[1432]: time="2024-12-13T14:35:16.937047157Z" level=error msg="Failed to pipe stderr of container \"a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981\"" error="reading from a closed 
fifo" Dec 13 14:35:16.942745 env[1432]: time="2024-12-13T14:35:16.942668073Z" level=error msg="StartContainer for \"a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:35:16.943104 kubelet[1913]: E1213 14:35:16.943058 1913 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981" Dec 13 14:35:16.944602 kubelet[1913]: E1213 14:35:16.944566 1913 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 14:35:16.944602 kubelet[1913]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:35:16.944602 kubelet[1913]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:35:16.944602 kubelet[1913]: rm /hostbin/cilium-mount Dec 13 14:35:16.944840 kubelet[1913]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtw5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-w4zc9_kube-system(5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:35:16.944840 kubelet[1913]: > logger="UnhandledError" Dec 13 14:35:16.945800 kubelet[1913]: E1213 14:35:16.945763 1913 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4zc9" podUID="5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" Dec 13 14:35:17.291670 kubelet[1913]: E1213 14:35:17.291468 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:17.552198 env[1432]: time="2024-12-13T14:35:17.552057768Z" level=info msg="CreateContainer within sandbox \"c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 14:35:17.580292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97041549.mount: Deactivated successfully. Dec 13 14:35:17.609785 env[1432]: time="2024-12-13T14:35:17.609704237Z" level=info msg="CreateContainer within sandbox \"c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3\"" Dec 13 14:35:17.610991 env[1432]: time="2024-12-13T14:35:17.610944441Z" level=info msg="StartContainer for \"ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3\"" Dec 13 14:35:17.635557 systemd[1]: Started cri-containerd-ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3.scope. Dec 13 14:35:17.652786 systemd[1]: cri-containerd-ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3.scope: Deactivated successfully. 
Dec 13 14:35:17.677313 env[1432]: time="2024-12-13T14:35:17.677232436Z" level=info msg="shim disconnected" id=ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3
Dec 13 14:35:17.677313 env[1432]: time="2024-12-13T14:35:17.677302536Z" level=warning msg="cleaning up after shim disconnected" id=ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3 namespace=k8s.io
Dec 13 14:35:17.677313 env[1432]: time="2024-12-13T14:35:17.677316036Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:17.687868 env[1432]: time="2024-12-13T14:35:17.687799867Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3571 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:35:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 14:35:17.688240 env[1432]: time="2024-12-13T14:35:17.688167568Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed"
Dec 13 14:35:17.689008 env[1432]: time="2024-12-13T14:35:17.688955770Z" level=error msg="Failed to pipe stdout of container \"ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3\"" error="reading from a closed fifo"
Dec 13 14:35:17.689116 env[1432]: time="2024-12-13T14:35:17.688962870Z" level=error msg="Failed to pipe stderr of container \"ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3\"" error="reading from a closed fifo"
Dec 13 14:35:17.693429 env[1432]: time="2024-12-13T14:35:17.693380083Z" level=error msg="StartContainer for \"ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 14:35:17.693745 kubelet[1913]: E1213 14:35:17.693683 1913 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3"
Dec 13 14:35:17.694342 kubelet[1913]: E1213 14:35:17.693897 1913 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 14:35:17.694342 kubelet[1913]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 14:35:17.694342 kubelet[1913]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 14:35:17.694342 kubelet[1913]: rm /hostbin/cilium-mount
Dec 13 14:35:17.694342 kubelet[1913]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtw5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-w4zc9_kube-system(5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 14:35:17.694342 kubelet[1913]: > logger="UnhandledError"
Dec 13 14:35:17.695112 kubelet[1913]: E1213 14:35:17.695077 1913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4zc9" podUID="5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"
Dec 13 14:35:18.292106 kubelet[1913]: E1213 14:35:18.292021 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:18.562361 kubelet[1913]: I1213 14:35:18.561845 1913 scope.go:117] "RemoveContainer" containerID="a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981"
Dec 13 14:35:18.562775 env[1432]: time="2024-12-13T14:35:18.562687518Z" level=info msg="StopPodSandbox for \"c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635\""
Dec 13 14:35:18.562775 env[1432]: time="2024-12-13T14:35:18.562760118Z" level=info msg="Container to stop \"ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:18.562975 env[1432]: time="2024-12-13T14:35:18.562780318Z" level=info msg="Container to stop \"a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:18.564671 env[1432]: time="2024-12-13T14:35:18.564618023Z" level=info msg="RemoveContainer for \"a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981\""
Dec 13 14:35:18.568374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635-shm.mount: Deactivated successfully.
Dec 13 14:35:18.574737 env[1432]: time="2024-12-13T14:35:18.574683952Z" level=info msg="RemoveContainer for \"a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981\" returns successfully"
Dec 13 14:35:18.578401 systemd[1]: cri-containerd-c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635.scope: Deactivated successfully.
Dec 13 14:35:18.609586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635-rootfs.mount: Deactivated successfully.
Dec 13 14:35:18.626245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178204320.mount: Deactivated successfully.
Dec 13 14:35:18.627603 env[1432]: time="2024-12-13T14:35:18.627449806Z" level=info msg="shim disconnected" id=c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635
Dec 13 14:35:18.627603 env[1432]: time="2024-12-13T14:35:18.627516106Z" level=warning msg="cleaning up after shim disconnected" id=c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635 namespace=k8s.io
Dec 13 14:35:18.627603 env[1432]: time="2024-12-13T14:35:18.627533906Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:18.638860 env[1432]: time="2024-12-13T14:35:18.638792138Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3603 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:18.639228 env[1432]: time="2024-12-13T14:35:18.639194840Z" level=info msg="TearDown network for sandbox \"c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635\" successfully"
Dec 13 14:35:18.639329 env[1432]: time="2024-12-13T14:35:18.639228440Z" level=info msg="StopPodSandbox for \"c43eaff2d3a3e2aea8c99b03f1adfd59e3218019b7d0f01f357812131f1f5635\" returns successfully"
Dec 13 14:35:18.731119 kubelet[1913]: I1213 14:35:18.731063 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-hubble-tls\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731119 kubelet[1913]: I1213 14:35:18.731118 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-host-proc-sys-kernel\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731144 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-host-proc-sys-net\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731175 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-clustermesh-secrets\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731197 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-ipsec-secrets\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731221 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-xtables-lock\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731247 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-config-path\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731272 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtw5v\" (UniqueName: \"kubernetes.io/projected/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-kube-api-access-dtw5v\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731294 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-bpf-maps\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731313 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-lib-modules\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731332 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-etc-cni-netd\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731352 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-cgroup\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731372 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cni-path\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731392 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-hostproc\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.731442 kubelet[1913]: I1213 14:35:18.731413 1913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-run\") pod \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\" (UID: \"5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5\") "
Dec 13 14:35:18.733042 kubelet[1913]: I1213 14:35:18.731505 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.734150 kubelet[1913]: I1213 14:35:18.733319 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.734150 kubelet[1913]: I1213 14:35:18.733378 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.734150 kubelet[1913]: I1213 14:35:18.733399 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.734150 kubelet[1913]: I1213 14:35:18.733417 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.734150 kubelet[1913]: I1213 14:35:18.733437 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.734150 kubelet[1913]: I1213 14:35:18.733454 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.736084 kubelet[1913]: I1213 14:35:18.736046 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.736965 kubelet[1913]: I1213 14:35:18.736879 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.737102 kubelet[1913]: I1213 14:35:18.736928 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:18.737475 kubelet[1913]: I1213 14:35:18.737452 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:18.740682 kubelet[1913]: I1213 14:35:18.740654 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:35:18.741091 kubelet[1913]: I1213 14:35:18.740857 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-kube-api-access-dtw5v" (OuterVolumeSpecName: "kube-api-access-dtw5v") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "kube-api-access-dtw5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:18.741889 kubelet[1913]: I1213 14:35:18.741859 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:35:18.743498 kubelet[1913]: I1213 14:35:18.743464 1913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" (UID: "5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:35:18.832505 kubelet[1913]: I1213 14:35:18.832299 1913 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-cgroup\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.832505 kubelet[1913]: I1213 14:35:18.832359 1913 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cni-path\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.832505 kubelet[1913]: I1213 14:35:18.832376 1913 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-hostproc\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.832505 kubelet[1913]: I1213 14:35:18.832389 1913 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-run\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.832505 kubelet[1913]: I1213 14:35:18.832405 1913 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-etc-cni-netd\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.832505 kubelet[1913]: I1213 14:35:18.832419 1913 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-hubble-tls\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.832505 kubelet[1913]: I1213 14:35:18.832438 1913 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-host-proc-sys-kernel\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.832505 kubelet[1913]: I1213 14:35:18.832461 1913 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-host-proc-sys-net\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.833940 kubelet[1913]: I1213 14:35:18.832475 1913 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-clustermesh-secrets\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.834064 kubelet[1913]: I1213 14:35:18.834049 1913 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-ipsec-secrets\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.834148 kubelet[1913]: I1213 14:35:18.834138 1913 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-cilium-config-path\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.834233 kubelet[1913]: I1213 14:35:18.834217 1913 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dtw5v\" (UniqueName: \"kubernetes.io/projected/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-kube-api-access-dtw5v\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.834316 kubelet[1913]: I1213 14:35:18.834304 1913 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-bpf-maps\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.834404 kubelet[1913]: I1213 14:35:18.834393 1913 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-lib-modules\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:18.834487 kubelet[1913]: I1213 14:35:18.834476 1913 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5-xtables-lock\") on node \"10.200.8.26\" DevicePath \"\""
Dec 13 14:35:19.292605 kubelet[1913]: E1213 14:35:19.292532 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:19.427514 systemd[1]: var-lib-kubelet-pods-5ca6c2aa\x2d88ca\x2d45a2\x2db0f4\x2d8226de3c64b5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddtw5v.mount: Deactivated successfully.
Dec 13 14:35:19.427654 systemd[1]: var-lib-kubelet-pods-5ca6c2aa\x2d88ca\x2d45a2\x2db0f4\x2d8226de3c64b5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:35:19.427731 systemd[1]: var-lib-kubelet-pods-5ca6c2aa\x2d88ca\x2d45a2\x2db0f4\x2d8226de3c64b5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:35:19.427802 systemd[1]: var-lib-kubelet-pods-5ca6c2aa\x2d88ca\x2d45a2\x2db0f4\x2d8226de3c64b5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:35:19.565643 kubelet[1913]: I1213 14:35:19.565488 1913 scope.go:117] "RemoveContainer" containerID="ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3"
Dec 13 14:35:19.567799 env[1432]: time="2024-12-13T14:35:19.567288011Z" level=info msg="RemoveContainer for \"ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3\""
Dec 13 14:35:19.571081 systemd[1]: Removed slice kubepods-burstable-pod5ca6c2aa_88ca_45a2_b0f4_8226de3c64b5.slice.
Dec 13 14:35:19.576454 env[1432]: time="2024-12-13T14:35:19.576398737Z" level=info msg="RemoveContainer for \"ef1f67fb2ac68412737aec5e47c2a654f9470956d2759e5913a32d22500498f3\" returns successfully"
Dec 13 14:35:19.609870 kubelet[1913]: E1213 14:35:19.609813 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" containerName="mount-cgroup"
Dec 13 14:35:19.609870 kubelet[1913]: E1213 14:35:19.609849 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" containerName="mount-cgroup"
Dec 13 14:35:19.609870 kubelet[1913]: I1213 14:35:19.609880 1913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" containerName="mount-cgroup"
Dec 13 14:35:19.610246 kubelet[1913]: I1213 14:35:19.609968 1913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" containerName="mount-cgroup"
Dec 13 14:35:19.616317 systemd[1]: Created slice kubepods-burstable-pod2c51d408_f68e_451a_939f_2b491a563100.slice.
Dec 13 14:35:19.640768 kubelet[1913]: I1213 14:35:19.640703 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c51d408-f68e-451a-939f-2b491a563100-cilium-config-path\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.640768 kubelet[1913]: I1213 14:35:19.640763 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-host-proc-sys-net\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640789 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq5m4\" (UniqueName: \"kubernetes.io/projected/2c51d408-f68e-451a-939f-2b491a563100-kube-api-access-wq5m4\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640814 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-cilium-cgroup\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640834 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-cni-path\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640853 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-lib-modules\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640871 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-xtables-lock\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640898 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c51d408-f68e-451a-939f-2b491a563100-hubble-tls\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640951 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-cilium-run\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640974 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c51d408-f68e-451a-939f-2b491a563100-clustermesh-secrets\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.640998 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-host-proc-sys-kernel\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.641049 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-bpf-maps\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.641077 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-hostproc\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.641096 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c51d408-f68e-451a-939f-2b491a563100-etc-cni-netd\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.641139 kubelet[1913]: I1213 14:35:19.641121 1913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2c51d408-f68e-451a-939f-2b491a563100-cilium-ipsec-secrets\") pod \"cilium-82z7z\" (UID: \"2c51d408-f68e-451a-939f-2b491a563100\") " pod="kube-system/cilium-82z7z"
Dec 13 14:35:19.924254 env[1432]: time="2024-12-13T14:35:19.924182433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82z7z,Uid:2c51d408-f68e-451a-939f-2b491a563100,Namespace:kube-system,Attempt:0,}"
Dec 13 14:35:19.962601 env[1432]: time="2024-12-13T14:35:19.962495643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:35:19.962859 env[1432]: time="2024-12-13T14:35:19.962553443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:35:19.962859 env[1432]: time="2024-12-13T14:35:19.962568243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:35:19.962859 env[1432]: time="2024-12-13T14:35:19.962738944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc pid=3631 runtime=io.containerd.runc.v2
Dec 13 14:35:19.978747 systemd[1]: Started cri-containerd-46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc.scope.
Dec 13 14:35:20.012465 env[1432]: time="2024-12-13T14:35:20.012408586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82z7z,Uid:2c51d408-f68e-451a-939f-2b491a563100,Namespace:kube-system,Attempt:0,} returns sandbox id \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\""
Dec 13 14:35:20.015846 env[1432]: time="2024-12-13T14:35:20.015798495Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:35:20.030829 kubelet[1913]: W1213 14:35:20.030733 1913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ca6c2aa_88ca_45a2_b0f4_8226de3c64b5.slice/cri-containerd-a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981.scope WatchSource:0}: container "a3bf043dd27f7253262b330536eff58e611bdf5c3af2e6edb2e003fe9dd05981" in namespace "k8s.io": not found
Dec 13 14:35:20.054631 env[1432]: time="2024-12-13T14:35:20.054556905Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6bf0091b9c8b3e4798a9dd89c63b674a807bcc01e0ab84b06f27307fc1810c93\""
Dec 13 14:35:20.055398 env[1432]: time="2024-12-13T14:35:20.055315807Z" level=info msg="StartContainer for \"6bf0091b9c8b3e4798a9dd89c63b674a807bcc01e0ab84b06f27307fc1810c93\""
Dec 13 14:35:20.076614 systemd[1]: Started cri-containerd-6bf0091b9c8b3e4798a9dd89c63b674a807bcc01e0ab84b06f27307fc1810c93.scope.
Dec 13 14:35:20.115708 env[1432]: time="2024-12-13T14:35:20.115636277Z" level=info msg="StartContainer for \"6bf0091b9c8b3e4798a9dd89c63b674a807bcc01e0ab84b06f27307fc1810c93\" returns successfully"
Dec 13 14:35:20.123546 systemd[1]: cri-containerd-6bf0091b9c8b3e4798a9dd89c63b674a807bcc01e0ab84b06f27307fc1810c93.scope: Deactivated successfully.
Dec 13 14:35:20.169665 env[1432]: time="2024-12-13T14:35:20.169586630Z" level=info msg="shim disconnected" id=6bf0091b9c8b3e4798a9dd89c63b674a807bcc01e0ab84b06f27307fc1810c93
Dec 13 14:35:20.169665 env[1432]: time="2024-12-13T14:35:20.169655230Z" level=warning msg="cleaning up after shim disconnected" id=6bf0091b9c8b3e4798a9dd89c63b674a807bcc01e0ab84b06f27307fc1810c93 namespace=k8s.io
Dec 13 14:35:20.169665 env[1432]: time="2024-12-13T14:35:20.169668130Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:20.179919 env[1432]: time="2024-12-13T14:35:20.179166757Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3712 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:20.231405 kubelet[1913]: E1213 14:35:20.231326 1913 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:20.293373 kubelet[1913]: E1213 14:35:20.293297 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:20.316340 kubelet[1913]: I1213 14:35:20.316278 1913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5" path="/var/lib/kubelet/pods/5ca6c2aa-88ca-45a2-b0f4-8226de3c64b5/volumes"
Dec 13 14:35:20.368858 kubelet[1913]: E1213 14:35:20.368773 1913 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:35:20.572593 env[1432]: time="2024-12-13T14:35:20.572090668Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:35:20.616132 env[1432]: time="2024-12-13T14:35:20.616063392Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdf17db6d31390f742570dd9092620b02ba1dd3335828be16a2842278c364f42\""
Dec 13 14:35:20.616951 env[1432]: time="2024-12-13T14:35:20.616895494Z" level=info msg="StartContainer for \"bdf17db6d31390f742570dd9092620b02ba1dd3335828be16a2842278c364f42\""
Dec 13 14:35:20.646359 systemd[1]: Started cri-containerd-bdf17db6d31390f742570dd9092620b02ba1dd3335828be16a2842278c364f42.scope.
Dec 13 14:35:20.694954 env[1432]: time="2024-12-13T14:35:20.694144113Z" level=info msg="StartContainer for \"bdf17db6d31390f742570dd9092620b02ba1dd3335828be16a2842278c364f42\" returns successfully"
Dec 13 14:35:20.700668 systemd[1]: cri-containerd-bdf17db6d31390f742570dd9092620b02ba1dd3335828be16a2842278c364f42.scope: Deactivated successfully.
Dec 13 14:35:20.739090 env[1432]: time="2024-12-13T14:35:20.739022240Z" level=info msg="shim disconnected" id=bdf17db6d31390f742570dd9092620b02ba1dd3335828be16a2842278c364f42 Dec 13 14:35:20.739090 env[1432]: time="2024-12-13T14:35:20.739086840Z" level=warning msg="cleaning up after shim disconnected" id=bdf17db6d31390f742570dd9092620b02ba1dd3335828be16a2842278c364f42 namespace=k8s.io Dec 13 14:35:20.739090 env[1432]: time="2024-12-13T14:35:20.739100140Z" level=info msg="cleaning up dead shim" Dec 13 14:35:20.749949 env[1432]: time="2024-12-13T14:35:20.749878770Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3775 runtime=io.containerd.runc.v2\n" Dec 13 14:35:21.294459 kubelet[1913]: E1213 14:35:21.294385 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:21.428601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdf17db6d31390f742570dd9092620b02ba1dd3335828be16a2842278c364f42-rootfs.mount: Deactivated successfully. Dec 13 14:35:21.580218 env[1432]: time="2024-12-13T14:35:21.579846096Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:35:21.638319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471388552.mount: Deactivated successfully. Dec 13 14:35:21.654682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732286267.mount: Deactivated successfully. 
Dec 13 14:35:21.673587 env[1432]: time="2024-12-13T14:35:21.673504857Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"96e2cfa1e443d52ec2a2947fc27e304f603cd18ba13d63aacb78ac280b9f25d5\"" Dec 13 14:35:21.674851 env[1432]: time="2024-12-13T14:35:21.674793661Z" level=info msg="StartContainer for \"96e2cfa1e443d52ec2a2947fc27e304f603cd18ba13d63aacb78ac280b9f25d5\"" Dec 13 14:35:21.708804 systemd[1]: Started cri-containerd-96e2cfa1e443d52ec2a2947fc27e304f603cd18ba13d63aacb78ac280b9f25d5.scope. Dec 13 14:35:21.763520 systemd[1]: cri-containerd-96e2cfa1e443d52ec2a2947fc27e304f603cd18ba13d63aacb78ac280b9f25d5.scope: Deactivated successfully. Dec 13 14:35:21.766966 env[1432]: time="2024-12-13T14:35:21.766843218Z" level=info msg="StartContainer for \"96e2cfa1e443d52ec2a2947fc27e304f603cd18ba13d63aacb78ac280b9f25d5\" returns successfully" Dec 13 14:35:22.072559 kubelet[1913]: I1213 14:35:21.836782 1913 setters.go:600] "Node became not ready" node="10.200.8.26" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:35:21Z","lastTransitionTime":"2024-12-13T14:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:35:22.084720 env[1432]: time="2024-12-13T14:35:22.084647802Z" level=info msg="shim disconnected" id=96e2cfa1e443d52ec2a2947fc27e304f603cd18ba13d63aacb78ac280b9f25d5 Dec 13 14:35:22.084720 env[1432]: time="2024-12-13T14:35:22.084717803Z" level=warning msg="cleaning up after shim disconnected" id=96e2cfa1e443d52ec2a2947fc27e304f603cd18ba13d63aacb78ac280b9f25d5 namespace=k8s.io Dec 13 14:35:22.084720 env[1432]: time="2024-12-13T14:35:22.084730103Z" level=info msg="cleaning up dead shim" Dec 13 14:35:22.109285 env[1432]: time="2024-12-13T14:35:22.109209970Z" 
level=warning msg="cleanup warnings time=\"2024-12-13T14:35:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3833 runtime=io.containerd.runc.v2\n" Dec 13 14:35:22.295140 kubelet[1913]: E1213 14:35:22.295076 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:22.333687 env[1432]: time="2024-12-13T14:35:22.333517489Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:35:22.340256 env[1432]: time="2024-12-13T14:35:22.340191107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:35:22.344013 env[1432]: time="2024-12-13T14:35:22.343963917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:35:22.344617 env[1432]: time="2024-12-13T14:35:22.344577119Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:35:22.347105 env[1432]: time="2024-12-13T14:35:22.347072726Z" level=info msg="CreateContainer within sandbox \"9f68d39f78b94bef90cf9397e037cba881603ca7c3e79053bc592dfd4627d9de\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:35:22.377219 env[1432]: time="2024-12-13T14:35:22.377149909Z" level=info msg="CreateContainer within sandbox 
\"9f68d39f78b94bef90cf9397e037cba881603ca7c3e79053bc592dfd4627d9de\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6e9ff68f5616c3a0c57d6bdeeb9ae983585ca2a61833d6a20c0fb6804bc144b9\"" Dec 13 14:35:22.378275 env[1432]: time="2024-12-13T14:35:22.378231912Z" level=info msg="StartContainer for \"6e9ff68f5616c3a0c57d6bdeeb9ae983585ca2a61833d6a20c0fb6804bc144b9\"" Dec 13 14:35:22.399001 systemd[1]: Started cri-containerd-6e9ff68f5616c3a0c57d6bdeeb9ae983585ca2a61833d6a20c0fb6804bc144b9.scope. Dec 13 14:35:22.445998 env[1432]: time="2024-12-13T14:35:22.445874498Z" level=info msg="StartContainer for \"6e9ff68f5616c3a0c57d6bdeeb9ae983585ca2a61833d6a20c0fb6804bc144b9\" returns successfully" Dec 13 14:35:22.585494 env[1432]: time="2024-12-13T14:35:22.585092382Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:35:22.619838 kubelet[1913]: I1213 14:35:22.619759 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-nxh8q" podStartSLOduration=1.02575106 podStartE2EDuration="6.619729478s" podCreationTimestamp="2024-12-13 14:35:16 +0000 UTC" firstStartedPulling="2024-12-13 14:35:16.751626104 +0000 UTC m=+77.677926599" lastFinishedPulling="2024-12-13 14:35:22.345604422 +0000 UTC m=+83.271905017" observedRunningTime="2024-12-13 14:35:22.600194124 +0000 UTC m=+83.526494619" watchObservedRunningTime="2024-12-13 14:35:22.619729478 +0000 UTC m=+83.546029973" Dec 13 14:35:22.628261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904812254.mount: Deactivated successfully. 
Dec 13 14:35:22.631452 env[1432]: time="2024-12-13T14:35:22.631405210Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161\"" Dec 13 14:35:22.632641 env[1432]: time="2024-12-13T14:35:22.632607913Z" level=info msg="StartContainer for \"72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161\"" Dec 13 14:35:22.657825 systemd[1]: Started cri-containerd-72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161.scope. Dec 13 14:35:22.695447 systemd[1]: cri-containerd-72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161.scope: Deactivated successfully. Dec 13 14:35:22.699784 env[1432]: time="2024-12-13T14:35:22.699633998Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c51d408_f68e_451a_939f_2b491a563100.slice/cri-containerd-72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161.scope/memory.events\": no such file or directory" Dec 13 14:35:22.705248 env[1432]: time="2024-12-13T14:35:22.705192013Z" level=info msg="StartContainer for \"72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161\" returns successfully" Dec 13 14:35:22.887275 env[1432]: time="2024-12-13T14:35:22.887203715Z" level=info msg="shim disconnected" id=72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161 Dec 13 14:35:22.887275 env[1432]: time="2024-12-13T14:35:22.887271015Z" level=warning msg="cleaning up after shim disconnected" id=72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161 namespace=k8s.io Dec 13 14:35:22.887275 env[1432]: time="2024-12-13T14:35:22.887284215Z" level=info msg="cleaning up dead shim" Dec 13 14:35:22.897120 env[1432]: time="2024-12-13T14:35:22.897051442Z" level=warning 
msg="cleanup warnings time=\"2024-12-13T14:35:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3926 runtime=io.containerd.runc.v2\n" Dec 13 14:35:23.295654 kubelet[1913]: E1213 14:35:23.295469 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:23.428255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72f8acda6d89d284758792574b3596e21edfa04da19c92e5d7efa9eb26896161-rootfs.mount: Deactivated successfully. Dec 13 14:35:23.591769 env[1432]: time="2024-12-13T14:35:23.591697038Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:35:23.622016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299273127.mount: Deactivated successfully. Dec 13 14:35:23.632897 env[1432]: time="2024-12-13T14:35:23.632831350Z" level=info msg="CreateContainer within sandbox \"46c5bea7dc59d6b408c0e7a5fcd9b107dc0d0fa8f9a2f2c113e939fe5b213afc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e21a08700c5b5e60f251f945515f7d45688f0ffd9909a16db1d32ab13b4a018\"" Dec 13 14:35:23.634112 env[1432]: time="2024-12-13T14:35:23.634069053Z" level=info msg="StartContainer for \"6e21a08700c5b5e60f251f945515f7d45688f0ffd9909a16db1d32ab13b4a018\"" Dec 13 14:35:23.669007 systemd[1]: Started cri-containerd-6e21a08700c5b5e60f251f945515f7d45688f0ffd9909a16db1d32ab13b4a018.scope. 
Dec 13 14:35:23.718228 env[1432]: time="2024-12-13T14:35:23.718152682Z" level=info msg="StartContainer for \"6e21a08700c5b5e60f251f945515f7d45688f0ffd9909a16db1d32ab13b4a018\" returns successfully" Dec 13 14:35:24.065122 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:35:24.295978 kubelet[1913]: E1213 14:35:24.295926 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:24.428112 systemd[1]: run-containerd-runc-k8s.io-6e21a08700c5b5e60f251f945515f7d45688f0ffd9909a16db1d32ab13b4a018-runc.w256Jo.mount: Deactivated successfully. Dec 13 14:35:24.613314 kubelet[1913]: I1213 14:35:24.613221 1913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-82z7z" podStartSLOduration=5.6131971 podStartE2EDuration="5.6131971s" podCreationTimestamp="2024-12-13 14:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:35:24.6130468 +0000 UTC m=+85.539347395" watchObservedRunningTime="2024-12-13 14:35:24.6131971 +0000 UTC m=+85.539497595" Dec 13 14:35:25.296983 kubelet[1913]: E1213 14:35:25.296919 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:26.297411 kubelet[1913]: E1213 14:35:26.297351 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:26.828194 systemd-networkd[1600]: lxc_health: Link UP Dec 13 14:35:26.848948 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:35:26.849149 systemd-networkd[1600]: lxc_health: Gained carrier Dec 13 14:35:27.298200 kubelet[1913]: E1213 14:35:27.298127 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:27.407581 
systemd[1]: run-containerd-runc-k8s.io-6e21a08700c5b5e60f251f945515f7d45688f0ffd9909a16db1d32ab13b4a018-runc.8S8UDI.mount: Deactivated successfully. Dec 13 14:35:28.075172 systemd-networkd[1600]: lxc_health: Gained IPv6LL Dec 13 14:35:28.299237 kubelet[1913]: E1213 14:35:28.299166 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:29.299780 kubelet[1913]: E1213 14:35:29.299707 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:30.301595 kubelet[1913]: E1213 14:35:30.301512 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:31.302338 kubelet[1913]: E1213 14:35:31.302274 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:32.303679 kubelet[1913]: E1213 14:35:32.303594 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:33.304547 kubelet[1913]: E1213 14:35:33.304470 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:34.305099 kubelet[1913]: E1213 14:35:34.305021 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:35.306083 kubelet[1913]: E1213 14:35:35.306011 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:36.306426 kubelet[1913]: E1213 14:35:36.306358 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:37.306649 kubelet[1913]: E1213 14:35:37.306583 1913 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests"