Dec 13 02:06:19.045010 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:06:19.045041 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:06:19.045056 kernel: BIOS-provided physical RAM map:
Dec 13 02:06:19.045066 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 02:06:19.045075 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 02:06:19.045086 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 02:06:19.045102 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 02:06:19.045114 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 02:06:19.045126 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 02:06:19.045138 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 02:06:19.045149 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 02:06:19.045159 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 02:06:19.045171 kernel: NX (Execute Disable) protection: active
Dec 13 02:06:19.045181 kernel: efi: EFI v2.70 by Microsoft
Dec 13 02:06:19.045199 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Dec 13 02:06:19.045212 kernel: random: crng init done
Dec 13 02:06:19.045224 kernel: SMBIOS 3.1.0 present.
Dec 13 02:06:19.045236 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 02:06:19.045248 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 02:06:19.045261 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 02:06:19.045273 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Dec 13 02:06:19.045284 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 02:06:19.045299 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 02:06:19.045310 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 02:06:19.045322 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 02:06:19.045334 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 02:06:19.045346 kernel: tsc: Detected 2593.904 MHz processor
Dec 13 02:06:19.045358 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:06:19.045370 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:06:19.045382 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 02:06:19.045394 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:06:19.045406 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 02:06:19.045420 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 02:06:19.045432 kernel: Using GB pages for direct mapping
Dec 13 02:06:19.045467 kernel: Secure boot disabled
Dec 13 02:06:19.045478 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:06:19.045489 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 02:06:19.045501 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045512 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045524 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 02:06:19.045545 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 02:06:19.045558 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045570 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045583 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045596 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045609 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045624 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045637 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 02:06:19.045650 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 02:06:19.045663 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 02:06:19.045675 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 02:06:19.045688 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 02:06:19.045701 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 02:06:19.045714 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 02:06:19.045729 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 02:06:19.045740 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 02:06:19.045753 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 02:06:19.045765 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 02:06:19.045777 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:06:19.045790 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:06:19.045802 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 02:06:19.045815 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 02:06:19.045828 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 02:06:19.045843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 02:06:19.045856 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 02:06:19.045868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 02:06:19.045881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 02:06:19.045894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 02:06:19.045906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 02:06:19.045919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 02:06:19.045932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 02:06:19.045944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 02:06:19.045959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 02:06:19.045972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 02:06:19.045985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 02:06:19.045997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 02:06:19.046010 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 02:06:19.046023 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 02:06:19.046036 kernel: Zone ranges:
Dec 13 02:06:19.046048 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:06:19.046061 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 02:06:19.046076 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 02:06:19.046088 kernel: Movable zone start for each node
Dec 13 02:06:19.046101 kernel: Early memory node ranges
Dec 13 02:06:19.046113 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 02:06:19.046126 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 02:06:19.046139 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 02:06:19.046151 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 02:06:19.046164 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 02:06:19.046177 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:06:19.046192 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 02:06:19.046204 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 02:06:19.046217 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 02:06:19.046229 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 02:06:19.046242 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:06:19.046255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:06:19.046268 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:06:19.046281 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 02:06:19.046293 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:06:19.046308 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 02:06:19.046320 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 02:06:19.046333 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:06:19.046346 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:06:19.046359 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:06:19.046371 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:06:19.046384 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:06:19.046396 kernel: Hyper-V: PV spinlocks enabled
Dec 13 02:06:19.046409 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:06:19.046428 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 02:06:19.046465 kernel: Policy zone: Normal
Dec 13 02:06:19.046480 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:06:19.046493 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:06:19.046506 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 02:06:19.046519 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 02:06:19.046532 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:06:19.046545 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 308056K reserved, 0K cma-reserved)
Dec 13 02:06:19.046560 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:06:19.046574 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:06:19.046595 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:06:19.046611 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:06:19.046625 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:06:19.046639 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:06:19.046653 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:06:19.046666 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:06:19.046680 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:06:19.046693 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:06:19.046707 kernel: Using NULL legacy PIC
Dec 13 02:06:19.046723 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 02:06:19.046736 kernel: Console: colour dummy device 80x25
Dec 13 02:06:19.046750 kernel: printk: console [tty1] enabled
Dec 13 02:06:19.046763 kernel: printk: console [ttyS0] enabled
Dec 13 02:06:19.046776 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 02:06:19.046792 kernel: ACPI: Core revision 20210730
Dec 13 02:06:19.046805 kernel: Failed to register legacy timer interrupt
Dec 13 02:06:19.046819 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:06:19.046832 kernel: Hyper-V: Using IPI hypercalls
Dec 13 02:06:19.046846 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Dec 13 02:06:19.046859 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:06:19.046873 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:06:19.046886 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:06:19.046899 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:06:19.046913 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:06:19.046928 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:06:19.046942 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 02:06:19.046956 kernel: RETBleed: Vulnerable
Dec 13 02:06:19.046969 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:06:19.046982 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:06:19.046995 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:06:19.047009 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:06:19.047022 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:06:19.047035 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:06:19.047048 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:06:19.047064 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 02:06:19.047077 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 02:06:19.047090 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 02:06:19.047104 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:06:19.047117 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 02:06:19.047130 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 02:06:19.047142 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 02:06:19.047156 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 02:06:19.047169 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:06:19.047183 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:06:19.047196 kernel: LSM: Security Framework initializing
Dec 13 02:06:19.047210 kernel: SELinux: Initializing.
Dec 13 02:06:19.047226 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:06:19.047240 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:06:19.047253 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 02:06:19.047267 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 02:06:19.047280 kernel: signal: max sigframe size: 3632
Dec 13 02:06:19.047294 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:06:19.047308 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:06:19.047322 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:06:19.047335 kernel: x86: Booting SMP configuration:
Dec 13 02:06:19.047348 kernel: .... node #0, CPUs: #1
Dec 13 02:06:19.047365 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 02:06:19.047380 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:06:19.047393 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:06:19.047406 kernel: smpboot: Max logical packages: 1
Dec 13 02:06:19.047420 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Dec 13 02:06:19.047446 kernel: devtmpfs: initialized
Dec 13 02:06:19.047459 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:06:19.047473 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 02:06:19.047489 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:06:19.047503 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:06:19.047516 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:06:19.047530 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:06:19.047544 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:06:19.047557 kernel: audit: type=2000 audit(1734055578.025:1): state=initialized audit_enabled=0 res=1
Dec 13 02:06:19.047570 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:06:19.047584 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:06:19.047598 kernel: cpuidle: using governor menu
Dec 13 02:06:19.047614 kernel: ACPI: bus type PCI registered
Dec 13 02:06:19.047628 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:06:19.047641 kernel: dca service started, version 1.12.1
Dec 13 02:06:19.047655 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:06:19.047669 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:06:19.047682 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:06:19.047696 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:06:19.047709 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:06:19.047723 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:06:19.047739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:06:19.047753 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:06:19.047766 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:06:19.047778 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:06:19.047792 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:06:19.047805 kernel: ACPI: Interpreter enabled
Dec 13 02:06:19.047819 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:06:19.047833 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:06:19.047847 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:06:19.047864 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 02:06:19.047877 kernel: iommu: Default domain type: Translated
Dec 13 02:06:19.047891 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:06:19.047904 kernel: vgaarb: loaded
Dec 13 02:06:19.047918 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:06:19.047932 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 02:06:19.047945 kernel: PTP clock support registered
Dec 13 02:06:19.047959 kernel: Registered efivars operations
Dec 13 02:06:19.047972 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:06:19.047986 kernel: PCI: System does not support PCI
Dec 13 02:06:19.048002 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 02:06:19.048015 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:06:19.048029 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:06:19.048043 kernel: pnp: PnP ACPI init
Dec 13 02:06:19.048057 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 02:06:19.048070 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:06:19.048084 kernel: NET: Registered PF_INET protocol family
Dec 13 02:06:19.048098 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:06:19.048113 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 02:06:19.048127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:06:19.048141 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:06:19.048154 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 02:06:19.048169 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 02:06:19.048187 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 02:06:19.048200 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 02:06:19.048214 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:06:19.048228 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:06:19.048245 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:06:19.048259 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 02:06:19.048272 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 02:06:19.048286 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:06:19.048300 kernel: Initialise system trusted keyrings
Dec 13 02:06:19.048313 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 02:06:19.048327 kernel: Key type asymmetric registered
Dec 13 02:06:19.048340 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:06:19.048354 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:06:19.048370 kernel: io scheduler mq-deadline registered
Dec 13 02:06:19.048384 kernel: io scheduler kyber registered
Dec 13 02:06:19.048397 kernel: io scheduler bfq registered
Dec 13 02:06:19.048411 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:06:19.048425 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:06:19.048455 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:06:19.048468 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 02:06:19.048478 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 02:06:19.048645 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 02:06:19.048764 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T02:06:18 UTC (1734055578)
Dec 13 02:06:19.048871 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 02:06:19.048888 kernel: fail to initialize ptp_kvm
Dec 13 02:06:19.048902 kernel: intel_pstate: CPU model not supported
Dec 13 02:06:19.048915 kernel: efifb: probing for efifb
Dec 13 02:06:19.048929 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 02:06:19.048943 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 02:06:19.048957 kernel: efifb: scrolling: redraw
Dec 13 02:06:19.048973 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 02:06:19.048987 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 02:06:19.049001 kernel: fb0: EFI VGA frame buffer device
Dec 13 02:06:19.049014 kernel: pstore: Registered efi as persistent store backend
Dec 13 02:06:19.049028 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:06:19.049041 kernel: Segment Routing with IPv6
Dec 13 02:06:19.049055 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:06:19.049068 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:06:19.049082 kernel: Key type dns_resolver registered
Dec 13 02:06:19.049098 kernel: IPI shorthand broadcast: enabled
Dec 13 02:06:19.049111 kernel: sched_clock: Marking stable (849332800, 26597000)->(1126213300, -250283500)
Dec 13 02:06:19.049125 kernel: registered taskstats version 1
Dec 13 02:06:19.049138 kernel: Loading compiled-in X.509 certificates
Dec 13 02:06:19.049152 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:06:19.049166 kernel: Key type .fscrypt registered
Dec 13 02:06:19.049179 kernel: Key type fscrypt-provisioning registered
Dec 13 02:06:19.049193 kernel: pstore: Using crash dump compression: deflate
Dec 13 02:06:19.049209 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:06:19.049222 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:06:19.049236 kernel: ima: No architecture policies found
Dec 13 02:06:19.049250 kernel: clk: Disabling unused clocks
Dec 13 02:06:19.049263 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:06:19.049277 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:06:19.049291 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:06:19.049305 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:06:19.049319 kernel: Run /init as init process
Dec 13 02:06:19.049332 kernel: with arguments:
Dec 13 02:06:19.049348 kernel: /init
Dec 13 02:06:19.049361 kernel: with environment:
Dec 13 02:06:19.049375 kernel: HOME=/
Dec 13 02:06:19.049388 kernel: TERM=linux
Dec 13 02:06:19.049401 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:06:19.049417 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:06:19.049450 systemd[1]: Detected virtualization microsoft.
Dec 13 02:06:19.049466 systemd[1]: Detected architecture x86-64.
Dec 13 02:06:19.049479 systemd[1]: Running in initrd.
Dec 13 02:06:19.049491 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:06:19.049505 systemd[1]: Hostname set to <localhost>.
Dec 13 02:06:19.049519 systemd[1]: Initializing machine ID from random generator.
Dec 13 02:06:19.049531 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:06:19.049542 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:06:19.049555 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:06:19.049568 systemd[1]: Reached target paths.target.
Dec 13 02:06:19.049583 systemd[1]: Reached target slices.target.
Dec 13 02:06:19.049594 systemd[1]: Reached target swap.target.
Dec 13 02:06:19.049606 systemd[1]: Reached target timers.target.
Dec 13 02:06:19.049620 systemd[1]: Listening on iscsid.socket.
Dec 13 02:06:19.049631 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:06:19.049648 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:06:19.049666 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:06:19.049681 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:06:19.049692 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:06:19.049704 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:06:19.049717 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:06:19.049729 systemd[1]: Reached target sockets.target.
Dec 13 02:06:19.049740 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:06:19.049752 systemd[1]: Finished network-cleanup.service.
Dec 13 02:06:19.049765 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:06:19.049778 systemd[1]: Starting systemd-journald.service...
Dec 13 02:06:19.049795 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:06:19.049807 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:06:19.049825 systemd-journald[183]: Journal started
Dec 13 02:06:19.049891 systemd-journald[183]: Runtime Journal (/run/log/journal/4135a3debe624316bcf072529c66be10) is 8.0M, max 159.0M, 151.0M free.
Dec 13 02:06:19.049685 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 02:06:19.060457 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:06:19.075731 systemd[1]: Started systemd-journald.service.
Dec 13 02:06:19.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.089449 kernel: audit: type=1130 audit(1734055579.075:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.089614 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:06:19.095079 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:06:19.100125 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:06:19.106836 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:06:19.112872 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:06:19.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.131449 kernel: audit: type=1130 audit(1734055579.094:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.131802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:06:19.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.170955 kernel: audit: type=1130 audit(1734055579.099:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.171018 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:06:19.171038 kernel: audit: type=1130 audit(1734055579.105:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.167139 systemd-resolved[185]: Positive Trust Anchors:
Dec 13 02:06:19.189338 kernel: Bridge firewalling registered
Dec 13 02:06:19.167148 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:06:19.167183 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:06:19.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.173367 systemd-resolved[185]: Defaulting to hostname 'linux'.
Dec 13 02:06:19.300106 kernel: audit: type=1130 audit(1734055579.137:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.300162 kernel: audit: type=1130 audit(1734055579.176:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.300181 kernel: audit: type=1130 audit(1734055579.182:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.300214 kernel: SCSI subsystem initialized
Dec 13 02:06:19.300236 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:06:19.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:19.300356 dracut-cmdline[200]: dracut-dracut-053
Dec 13 02:06:19.300356 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:06:19.333607 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:06:19.333637 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:06:19.333653 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:06:19.174500 systemd[1]: Started systemd-resolved.service. Dec 13 02:06:19.177040 systemd[1]: Reached target nss-lookup.target. Dec 13 02:06:19.370229 kernel: audit: type=1130 audit(1734055579.345:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:19.370260 kernel: iscsi: registered transport (tcp) Dec 13 02:06:19.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:19.179690 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:06:19.183547 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:06:19.189063 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 02:06:19.321799 systemd-modules-load[184]: Inserted module 'dm_multipath' Dec 13 02:06:19.335784 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:06:19.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:19.358886 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:06:19.401997 kernel: audit: type=1130 audit(1734055579.386:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:19.383729 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 02:06:19.417395 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:06:19.417475 kernel: QLogic iSCSI HBA Driver Dec 13 02:06:19.446722 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:06:19.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:19.450553 systemd[1]: Starting dracut-pre-udev.service... Dec 13 02:06:19.503460 kernel: raid6: avx512x4 gen() 18679 MB/s Dec 13 02:06:19.523454 kernel: raid6: avx512x4 xor() 8381 MB/s Dec 13 02:06:19.543447 kernel: raid6: avx512x2 gen() 18541 MB/s Dec 13 02:06:19.564449 kernel: raid6: avx512x2 xor() 29763 MB/s Dec 13 02:06:19.584444 kernel: raid6: avx512x1 gen() 18561 MB/s Dec 13 02:06:19.604443 kernel: raid6: avx512x1 xor() 26846 MB/s Dec 13 02:06:19.625447 kernel: raid6: avx2x4 gen() 18536 MB/s Dec 13 02:06:19.645445 kernel: raid6: avx2x4 xor() 7462 MB/s Dec 13 02:06:19.665444 kernel: raid6: avx2x2 gen() 18626 MB/s Dec 13 02:06:19.685447 kernel: raid6: avx2x2 xor() 22153 MB/s Dec 13 02:06:19.705460 kernel: raid6: avx2x1 gen() 14100 MB/s Dec 13 02:06:19.725445 kernel: raid6: avx2x1 xor() 19447 MB/s Dec 13 02:06:19.746449 kernel: raid6: sse2x4 gen() 11730 MB/s Dec 13 02:06:19.766445 kernel: raid6: sse2x4 xor() 7311 MB/s Dec 13 02:06:19.786443 kernel: raid6: sse2x2 gen() 12672 MB/s Dec 13 02:06:19.807451 kernel: raid6: sse2x2 xor() 7420 MB/s Dec 13 02:06:19.827445 kernel: raid6: sse2x1 gen() 11603 MB/s Dec 13 02:06:19.851698 kernel: raid6: sse2x1 xor() 5876 MB/s Dec 13 02:06:19.851722 kernel: raid6: using algorithm avx512x4 gen() 18679 MB/s Dec 13 02:06:19.851735 kernel: raid6: .... 
xor() 8381 MB/s, rmw enabled Dec 13 02:06:19.858670 kernel: raid6: using avx512x2 recovery algorithm Dec 13 02:06:19.875463 kernel: xor: automatically using best checksumming function avx Dec 13 02:06:19.972465 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:06:19.980363 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:06:19.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:19.984000 audit: BPF prog-id=7 op=LOAD Dec 13 02:06:19.984000 audit: BPF prog-id=8 op=LOAD Dec 13 02:06:19.985367 systemd[1]: Starting systemd-udevd.service... Dec 13 02:06:20.001301 systemd-udevd[382]: Using default interface naming scheme 'v252'. Dec 13 02:06:20.008531 systemd[1]: Started systemd-udevd.service. Dec 13 02:06:20.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:20.015049 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:06:20.033375 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Dec 13 02:06:20.066005 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:06:20.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:20.071786 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:06:20.106070 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:06:20.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:20.157467 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:06:20.167455 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 02:06:20.176455 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:06:20.182462 kernel: AES CTR mode by8 optimization enabled Dec 13 02:06:20.193463 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 02:06:20.205459 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 02:06:20.217454 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 02:06:20.223465 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 02:06:20.237457 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 02:06:20.237514 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 02:06:20.254199 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 02:06:20.254267 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 02:06:20.273155 kernel: scsi host0: storvsc_host_t Dec 13 02:06:20.273369 kernel: scsi host1: storvsc_host_t Dec 13 02:06:20.273394 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 02:06:20.284455 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 02:06:20.309450 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 02:06:20.327339 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 02:06:20.327358 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 02:06:20.344189 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 02:06:20.344312 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 02:06:20.344418 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 02:06:20.344604 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 02:06:20.344753 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and 
FUA Dec 13 02:06:20.344916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:06:20.344935 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 02:06:20.392460 kernel: hv_netvsc 7c1e5234-0720-7c1e-5234-07207c1e5234 eth0: VF slot 1 added Dec 13 02:06:20.402457 kernel: hv_vmbus: registering driver hv_pci Dec 13 02:06:20.410459 kernel: hv_pci 57017610-d27d-4da3-9eae-8bfc16649dd8: PCI VMBus probing: Using version 0x10004 Dec 13 02:06:20.494404 kernel: hv_pci 57017610-d27d-4da3-9eae-8bfc16649dd8: PCI host bridge to bus d27d:00 Dec 13 02:06:20.494588 kernel: pci_bus d27d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 02:06:20.494749 kernel: pci_bus d27d:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 02:06:20.494894 kernel: pci d27d:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 02:06:20.495078 kernel: pci d27d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 02:06:20.495221 kernel: pci d27d:00:02.0: enabling Extended Tags Dec 13 02:06:20.495317 kernel: pci d27d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d27d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 02:06:20.495410 kernel: pci_bus d27d:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 02:06:20.495519 kernel: pci d27d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 02:06:20.589463 kernel: mlx5_core d27d:00:02.0: firmware version: 14.30.5000 Dec 13 02:06:20.842002 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (449) Dec 13 02:06:20.842038 kernel: mlx5_core d27d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 02:06:20.842218 kernel: mlx5_core d27d:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 02:06:20.842368 kernel: mlx5_core d27d:00:02.0: mlx5e_tc_post_act_init:40:(pid 189): firmware level support is missing Dec 13 02:06:20.842548 kernel: hv_netvsc 7c1e5234-0720-7c1e-5234-07207c1e5234 eth0: 
VF registering: eth1 Dec 13 02:06:20.842697 kernel: mlx5_core d27d:00:02.0 eth1: joined to eth0 Dec 13 02:06:20.661349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:06:20.714745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:06:20.853401 kernel: mlx5_core d27d:00:02.0 enP53885s1: renamed from eth1 Dec 13 02:06:20.826282 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:06:20.879142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:06:20.882500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:06:20.889299 systemd[1]: Starting disk-uuid.service... Dec 13 02:06:21.916465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:06:21.917662 disk-uuid[562]: The operation has completed successfully. Dec 13 02:06:22.000420 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:06:22.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:22.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:22.000562 systemd[1]: Finished disk-uuid.service. Dec 13 02:06:22.018765 systemd[1]: Starting verity-setup.service... Dec 13 02:06:22.065455 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:06:22.596280 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:06:22.603724 systemd[1]: Finished verity-setup.service. Dec 13 02:06:22.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:22.611209 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:06:22.700470 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:06:22.700547 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:06:22.706083 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:06:22.706881 systemd[1]: Starting ignition-setup.service... Dec 13 02:06:22.717785 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:06:22.764051 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:06:22.764123 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:06:22.764144 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:06:22.810202 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:06:22.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:22.817000 audit: BPF prog-id=9 op=LOAD Dec 13 02:06:22.818311 systemd[1]: Starting systemd-networkd.service... Dec 13 02:06:22.846133 systemd-networkd[829]: lo: Link UP Dec 13 02:06:22.846144 systemd-networkd[829]: lo: Gained carrier Dec 13 02:06:22.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:22.847223 systemd-networkd[829]: Enumeration completed Dec 13 02:06:22.847309 systemd[1]: Started systemd-networkd.service. Dec 13 02:06:22.854312 systemd[1]: Reached target network.target. Dec 13 02:06:22.866993 systemd-networkd[829]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:06:22.867362 systemd[1]: Starting iscsiuio.service... Dec 13 02:06:22.885769 systemd[1]: Started iscsiuio.service. 
Dec 13 02:06:22.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:22.889708 systemd[1]: Starting iscsid.service... Dec 13 02:06:22.901011 iscsid[837]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:06:22.901011 iscsid[837]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 02:06:22.901011 iscsid[837]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 02:06:22.901011 iscsid[837]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:06:22.901011 iscsid[837]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:06:22.901011 iscsid[837]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:06:22.901011 iscsid[837]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:06:22.981189 kernel: mlx5_core d27d:00:02.0 enP53885s1: Link up Dec 13 02:06:22.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:22.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:22.903165 systemd[1]: Started iscsid.service. Dec 13 02:06:22.915723 systemd[1]: Starting dracut-initqueue.service... 
Dec 13 02:06:22.943250 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:06:22.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:22.950006 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:06:22.954211 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:06:22.957031 systemd[1]: Reached target remote-fs.target. Dec 13 02:06:22.968096 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:06:22.985778 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:06:23.014050 kernel: hv_netvsc 7c1e5234-0720-7c1e-5234-07207c1e5234 eth0: Data path switched to VF: enP53885s1 Dec 13 02:06:23.014269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:06:22.986838 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:06:23.014145 systemd-networkd[829]: enP53885s1: Link UP Dec 13 02:06:23.014444 systemd-networkd[829]: eth0: Link UP Dec 13 02:06:23.014894 systemd-networkd[829]: eth0: Gained carrier Dec 13 02:06:23.022011 systemd-networkd[829]: enP53885s1: Gained carrier Dec 13 02:06:23.043544 systemd-networkd[829]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 02:06:23.226896 systemd[1]: Finished ignition-setup.service. Dec 13 02:06:23.242506 kernel: kauditd_printk_skb: 17 callbacks suppressed Dec 13 02:06:23.242541 kernel: audit: type=1130 audit(1734055583.233:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:23.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:23.238900 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:06:24.167675 systemd-networkd[829]: eth0: Gained IPv6LL Dec 13 02:06:27.511175 ignition[856]: Ignition 2.14.0 Dec 13 02:06:27.511189 ignition[856]: Stage: fetch-offline Dec 13 02:06:27.511266 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:06:27.511309 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 02:06:27.669546 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 02:06:27.673207 ignition[856]: parsed url from cmdline: "" Dec 13 02:06:27.673214 ignition[856]: no config URL provided Dec 13 02:06:27.673225 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:06:27.673242 ignition[856]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:06:27.699207 kernel: audit: type=1130 audit(1734055587.682:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:27.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:27.676909 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:06:27.673251 ignition[856]: failed to fetch config: resource requires networking Dec 13 02:06:27.683674 systemd[1]: Starting ignition-fetch.service... 
Dec 13 02:06:27.675582 ignition[856]: Ignition finished successfully Dec 13 02:06:27.691571 ignition[862]: Ignition 2.14.0 Dec 13 02:06:27.691576 ignition[862]: Stage: fetch Dec 13 02:06:27.691679 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:06:27.691703 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 02:06:27.695759 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 02:06:27.711185 ignition[862]: parsed url from cmdline: "" Dec 13 02:06:27.711190 ignition[862]: no config URL provided Dec 13 02:06:27.711199 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:06:27.711211 ignition[862]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:06:27.711245 ignition[862]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 02:06:27.809615 ignition[862]: GET result: OK Dec 13 02:06:27.809645 ignition[862]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Dec 13 02:06:28.010368 ignition[862]: opening config device: "/dev/sr0" Dec 13 02:06:28.010860 ignition[862]: getting drive status for "/dev/sr0" Dec 13 02:06:28.010914 ignition[862]: drive status: OK Dec 13 02:06:28.010954 ignition[862]: mounting config device Dec 13 02:06:28.010967 ignition[862]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure4008221295" Dec 13 02:06:28.037459 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/12/14 00:00 (1000) Dec 13 02:06:28.037644 ignition[862]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure4008221295" Dec 13 02:06:28.038632 ignition[862]: checking for config drive Dec 13 02:06:28.039734 systemd[1]: tmp-ignition\x2dazure4008221295.mount: Deactivated successfully. 
Dec 13 02:06:28.038991 ignition[862]: reading config Dec 13 02:06:28.044303 unknown[862]: fetched base config from "system" Dec 13 02:06:28.039352 ignition[862]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure4008221295" Dec 13 02:06:28.068052 kernel: audit: type=1130 audit(1734055588.049:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:28.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:28.044311 unknown[862]: fetched base config from "system" Dec 13 02:06:28.041459 ignition[862]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure4008221295" Dec 13 02:06:28.044316 unknown[862]: fetched user config from "azure" Dec 13 02:06:28.041482 ignition[862]: config has been read from custom data Dec 13 02:06:28.046338 systemd[1]: Finished ignition-fetch.service. Dec 13 02:06:28.041540 ignition[862]: parsing config with SHA512: ac317188a707b3f96f6b277f0fcdf312094b223f6d030b5fc5c03503b3a3707decd2a5e6582f6b17f5f2358726044096261e97d6d1b7ef4528c7752256cd09b1 Dec 13 02:06:28.051373 systemd[1]: Starting ignition-kargs.service... Dec 13 02:06:28.044810 ignition[862]: fetch: fetch complete Dec 13 02:06:28.103254 kernel: audit: type=1130 audit(1734055588.086:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:28.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:28.082553 systemd[1]: Finished ignition-kargs.service. 
Dec 13 02:06:28.044815 ignition[862]: fetch: fetch passed Dec 13 02:06:28.087731 systemd[1]: Starting ignition-disks.service... Dec 13 02:06:28.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:28.044870 ignition[862]: Ignition finished successfully Dec 13 02:06:28.128624 kernel: audit: type=1130 audit(1734055588.111:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:28.107592 systemd[1]: Finished ignition-disks.service. Dec 13 02:06:28.072370 ignition[870]: Ignition 2.14.0 Dec 13 02:06:28.111941 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:06:28.072378 ignition[870]: Stage: kargs Dec 13 02:06:28.128669 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:06:28.072494 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:06:28.072522 ignition[870]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 02:06:28.077928 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 02:06:28.079574 ignition[870]: kargs: kargs passed Dec 13 02:06:28.079863 ignition[870]: Ignition finished successfully Dec 13 02:06:28.095648 ignition[876]: Ignition 2.14.0 Dec 13 02:06:28.095655 ignition[876]: Stage: disks Dec 13 02:06:28.095770 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:06:28.095796 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 02:06:28.102534 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 
02:06:28.103922 ignition[876]: disks: disks passed Dec 13 02:06:28.103982 ignition[876]: Ignition finished successfully Dec 13 02:06:28.162405 systemd[1]: Reached target local-fs.target. Dec 13 02:06:28.164844 systemd[1]: Reached target sysinit.target. Dec 13 02:06:28.165036 systemd[1]: Reached target basic.target. Dec 13 02:06:28.172142 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:06:28.224913 systemd-fsck[884]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 02:06:28.231476 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:06:28.248999 kernel: audit: type=1130 audit(1734055588.233:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:28.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:28.235086 systemd[1]: Mounting sysroot.mount... Dec 13 02:06:28.265449 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:06:28.265737 systemd[1]: Mounted sysroot.mount. Dec 13 02:06:28.267741 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:06:28.320475 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:06:28.327551 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 02:06:28.332982 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:06:28.333113 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:06:28.343393 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:06:28.381919 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:06:28.389144 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 02:06:28.396862 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (894) Dec 13 02:06:28.407923 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:06:28.407975 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:06:28.407990 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:06:28.415273 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:06:28.422519 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:06:28.454636 initrd-setup-root[925]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:06:28.514094 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:06:28.521532 initrd-setup-root[941]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:06:29.036396 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:06:29.055463 kernel: audit: type=1130 audit(1734055589.039:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:29.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:29.055297 systemd[1]: Starting ignition-mount.service... Dec 13 02:06:29.061517 systemd[1]: Starting sysroot-boot.service... Dec 13 02:06:29.071117 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:06:29.071242 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:06:29.088723 systemd[1]: Finished sysroot-boot.service. 
Dec 13 02:06:29.095033 ignition[962]: INFO : Ignition 2.14.0 Dec 13 02:06:29.095033 ignition[962]: INFO : Stage: mount Dec 13 02:06:29.095033 ignition[962]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:06:29.095033 ignition[962]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 02:06:29.125567 kernel: audit: type=1130 audit(1734055589.099:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:29.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:29.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:29.114126 systemd[1]: Finished ignition-mount.service. Dec 13 02:06:29.142306 kernel: audit: type=1130 audit(1734055589.125:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:29.142333 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 02:06:29.142333 ignition[962]: INFO : mount: mount passed Dec 13 02:06:29.142333 ignition[962]: INFO : Ignition finished successfully Dec 13 02:06:30.179282 coreos-metadata[893]: Dec 13 02:06:30.179 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 02:06:30.194057 coreos-metadata[893]: Dec 13 02:06:30.194 INFO Fetch successful Dec 13 02:06:30.229451 coreos-metadata[893]: Dec 13 02:06:30.229 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 02:06:30.246921 coreos-metadata[893]: Dec 13 02:06:30.246 INFO Fetch successful Dec 13 02:06:30.262620 coreos-metadata[893]: Dec 13 02:06:30.262 INFO wrote hostname ci-3510.3.6-a-6288c93be1 to /sysroot/etc/hostname Dec 13 02:06:30.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:30.264507 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 02:06:30.288076 kernel: audit: type=1130 audit(1734055590.269:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:30.271169 systemd[1]: Starting ignition-files.service... Dec 13 02:06:30.291408 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 02:06:30.307456 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (972) Dec 13 02:06:30.316820 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:06:30.316875 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:06:30.316888 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:06:30.325218 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:06:30.339650 ignition[991]: INFO : Ignition 2.14.0 Dec 13 02:06:30.339650 ignition[991]: INFO : Stage: files Dec 13 02:06:30.344533 ignition[991]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:06:30.344533 ignition[991]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 02:06:30.344533 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 02:06:30.357927 ignition[991]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:06:30.357927 ignition[991]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:06:30.357927 ignition[991]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:06:30.412810 ignition[991]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:06:30.418754 ignition[991]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:06:30.423679 unknown[991]: wrote ssh authorized keys file for user: core Dec 13 02:06:30.427454 ignition[991]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:06:30.444881 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:06:30.450576 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] 
writing file "/sysroot/home/core/install.sh" Dec 13 02:06:30.457222 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:06:30.463070 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:06:30.472533 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 02:06:30.472533 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 02:06:30.472533 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 02:06:30.472533 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:06:30.529834 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (994) Dec 13 02:06:30.507144 systemd[1]: mnt-oem4061819992.mount: Deactivated successfully. 
Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4061819992" Dec 13 02:06:30.534060 ignition[991]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4061819992": device or resource busy Dec 13 02:06:30.534060 ignition[991]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4061819992", trying btrfs: device or resource busy Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4061819992" Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4061819992" Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem4061819992" Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem4061819992" Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2234024998" Dec 13 02:06:30.534060 ignition[991]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] 
mounting "/dev/disk/by-label/OEM" at "/mnt/oem2234024998": device or resource busy Dec 13 02:06:30.534060 ignition[991]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2234024998", trying btrfs: device or resource busy Dec 13 02:06:30.534060 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2234024998" Dec 13 02:06:30.621117 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2234024998" Dec 13 02:06:30.621117 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2234024998" Dec 13 02:06:30.621117 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2234024998" Dec 13 02:06:30.621117 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:06:30.621117 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 02:06:30.621117 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 02:06:30.543189 systemd[1]: mnt-oem2234024998.mount: Deactivated successfully. 
Dec 13 02:06:31.132307 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Dec 13 02:06:31.534442 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 02:06:31.534442 ignition[991]: INFO : files: op(f): [started] processing unit "waagent.service" Dec 13 02:06:31.534442 ignition[991]: INFO : files: op(f): [finished] processing unit "waagent.service" Dec 13 02:06:31.534442 ignition[991]: INFO : files: op(10): [started] processing unit "nvidia.service" Dec 13 02:06:31.534442 ignition[991]: INFO : files: op(10): [finished] processing unit "nvidia.service" Dec 13 02:06:31.534442 ignition[991]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service" Dec 13 02:06:31.570664 kernel: audit: type=1130 audit(1734055591.546:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.542752 systemd[1]: Finished ignition-files.service. 
Dec 13 02:06:31.571120 ignition[991]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service" Dec 13 02:06:31.571120 ignition[991]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service" Dec 13 02:06:31.571120 ignition[991]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service" Dec 13 02:06:31.571120 ignition[991]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:06:31.571120 ignition[991]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:06:31.571120 ignition[991]: INFO : files: files passed Dec 13 02:06:31.571120 ignition[991]: INFO : Ignition finished successfully Dec 13 02:06:31.560953 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:06:31.567127 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:06:31.570389 systemd[1]: Starting ignition-quench.service... Dec 13 02:06:31.614373 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:06:31.614621 systemd[1]: Finished ignition-quench.service. Dec 13 02:06:31.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.649669 kernel: audit: type=1130 audit(1734055591.621:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:31.649762 kernel: audit: type=1131 audit(1734055591.621:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.653275 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:06:31.659246 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:06:31.688741 kernel: audit: type=1130 audit(1734055591.663:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.665334 systemd[1]: Reached target ignition-complete.target. Dec 13 02:06:31.681164 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:06:31.702797 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:06:31.702905 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:06:31.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.710260 systemd[1]: Reached target initrd-fs.target. Dec 13 02:06:31.739954 kernel: audit: type=1130 audit(1734055591.710:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:31.739992 kernel: audit: type=1131 audit(1734055591.710:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.736547 systemd[1]: Reached target initrd.target. Dec 13 02:06:31.739974 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:06:31.742003 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:06:31.755720 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:06:31.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.761142 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:06:31.771371 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:06:31.776012 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:06:31.778397 systemd[1]: Stopped target timers.target. Dec 13 02:06:31.782967 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:06:31.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.783125 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:06:31.787477 systemd[1]: Stopped target initrd.target. Dec 13 02:06:31.792503 systemd[1]: Stopped target basic.target. Dec 13 02:06:31.796730 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:06:31.801945 systemd[1]: Stopped target ignition-diskful.target. 
Dec 13 02:06:31.806341 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:06:31.811044 systemd[1]: Stopped target remote-fs.target. Dec 13 02:06:31.815902 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:06:31.820693 systemd[1]: Stopped target sysinit.target. Dec 13 02:06:31.828940 systemd[1]: Stopped target local-fs.target. Dec 13 02:06:31.833360 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:06:31.837910 systemd[1]: Stopped target swap.target. Dec 13 02:06:31.841923 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:06:31.844482 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:06:31.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.849191 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:06:31.853644 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:06:31.856324 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:06:31.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.861288 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:06:31.864586 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:06:31.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.870037 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:06:31.870522 systemd[1]: Stopped ignition-files.service. 
Dec 13 02:06:31.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.874750 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 02:06:31.874852 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 02:06:31.892160 iscsid[837]: iscsid shutting down. Dec 13 02:06:31.880483 systemd[1]: Stopping ignition-mount.service... Dec 13 02:06:31.898005 ignition[1030]: INFO : Ignition 2.14.0 Dec 13 02:06:31.898005 ignition[1030]: INFO : Stage: umount Dec 13 02:06:31.898005 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:06:31.898005 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 02:06:31.898005 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 02:06:31.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.890238 systemd[1]: Stopping iscsid.service... Dec 13 02:06:31.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:31.927397 ignition[1030]: INFO : umount: umount passed Dec 13 02:06:31.927397 ignition[1030]: INFO : Ignition finished successfully Dec 13 02:06:31.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.898566 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:06:31.912534 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:06:31.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.912682 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:06:31.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.917797 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:06:31.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:31.917925 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:06:31.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.925163 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:06:31.925273 systemd[1]: Stopped iscsid.service. Dec 13 02:06:31.927839 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:06:31.927923 systemd[1]: Stopped ignition-mount.service. Dec 13 02:06:31.932821 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:06:31.932905 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:06:31.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.937168 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:06:31.937214 systemd[1]: Stopped ignition-disks.service. Dec 13 02:06:31.943788 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:06:31.943837 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:06:31.948402 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:06:31.948485 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:06:31.953252 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:06:31.953307 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:06:31.956090 systemd[1]: Stopped target paths.target. Dec 13 02:06:31.960746 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:06:31.963505 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:06:31.967773 systemd[1]: Stopped target slices.target. Dec 13 02:06:31.969967 systemd[1]: Stopped target sockets.target. 
Dec 13 02:06:31.975902 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:06:31.975971 systemd[1]: Closed iscsid.socket. Dec 13 02:06:31.980198 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:06:31.980260 systemd[1]: Stopped ignition-setup.service. Dec 13 02:06:32.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:31.988245 systemd[1]: Stopping iscsiuio.service... Dec 13 02:06:32.004823 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:06:32.032210 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:06:32.032313 systemd[1]: Stopped iscsiuio.service. Dec 13 02:06:32.047591 systemd[1]: Stopped target network.target. Dec 13 02:06:32.057130 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:06:32.057196 systemd[1]: Closed iscsiuio.socket. Dec 13 02:06:32.061766 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:06:32.073711 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:06:32.079482 systemd-networkd[829]: eth0: DHCPv6 lease lost Dec 13 02:06:32.082331 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:06:32.083339 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:06:32.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.087893 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:06:32.087977 systemd[1]: Stopped systemd-resolved.service. 
Dec 13 02:06:32.095233 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:06:32.097594 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:06:32.104000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:06:32.104000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:06:32.105175 systemd[1]: Stopping network-cleanup.service... Dec 13 02:06:32.109829 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:06:32.109908 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:06:32.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.118330 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:06:32.118395 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:06:32.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.125032 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:06:32.125091 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:06:32.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.132733 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:06:32.138258 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:06:32.143339 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:06:32.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:32.143574 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:06:32.150868 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:06:32.150920 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:06:32.158190 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:06:32.158240 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:06:32.164897 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:06:32.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.165813 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:06:32.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.169658 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:06:32.170638 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:06:32.178239 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:06:32.206144 kernel: hv_netvsc 7c1e5234-0720-7c1e-5234-07207c1e5234 eth0: Data path switched from VF: enP53885s1 Dec 13 02:06:32.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.181332 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:06:32.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:32.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.196447 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:06:32.209640 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:06:32.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.209728 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 02:06:32.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.213220 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:06:32.213290 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:06:32.216693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:06:32.216735 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:06:32.224371 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 02:06:32.232199 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Dec 13 02:06:32.232317 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:06:32.242458 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:06:32.242545 systemd[1]: Stopped network-cleanup.service. Dec 13 02:06:32.323217 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:06:32.324016 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:06:32.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.332119 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:06:32.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:32.335672 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:06:32.335758 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:06:32.347857 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:06:32.364390 systemd[1]: Switching root. Dec 13 02:06:32.392955 systemd-journald[183]: Journal stopped Dec 13 02:06:46.785878 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 02:06:46.785909 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:06:46.785922 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 02:06:46.785931 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:06:46.785941 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:06:46.785950 kernel: SELinux: policy capability open_perms=1 Dec 13 02:06:46.785962 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:06:46.785972 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:06:46.785983 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:06:46.785992 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:06:46.786000 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:06:46.786010 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:06:46.786019 kernel: kauditd_printk_skb: 38 callbacks suppressed Dec 13 02:06:46.786030 kernel: audit: type=1403 audit(1734055594.731:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:06:46.786043 systemd[1]: Successfully loaded SELinux policy in 302.421ms. Dec 13 02:06:46.786055 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.698ms. Dec 13 02:06:46.786068 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:06:46.786077 systemd[1]: Detected virtualization microsoft. Dec 13 02:06:46.786091 systemd[1]: Detected architecture x86-64. Dec 13 02:06:46.786100 systemd[1]: Detected first boot. Dec 13 02:06:46.786112 systemd[1]: Hostname set to <ci-3510.3.6-a-6288c93be1>. Dec 13 02:06:46.786122 systemd[1]: Initializing machine ID from random generator. 
Dec 13 02:06:46.786133 kernel: audit: type=1400 audit(1734055595.449:83): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:06:46.786144 kernel: audit: type=1400 audit(1734055595.465:84): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:06:46.786156 kernel: audit: type=1400 audit(1734055595.465:85): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:06:46.786166 kernel: audit: type=1334 audit(1734055595.480:86): prog-id=10 op=LOAD Dec 13 02:06:46.786176 kernel: audit: type=1334 audit(1734055595.480:87): prog-id=10 op=UNLOAD Dec 13 02:06:46.786186 kernel: audit: type=1334 audit(1734055595.493:88): prog-id=11 op=LOAD Dec 13 02:06:46.786197 kernel: audit: type=1334 audit(1734055595.493:89): prog-id=11 op=UNLOAD Dec 13 02:06:46.786206 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 02:06:46.786215 kernel: audit: type=1400 audit(1734055597.153:90): avc: denied { associate } for pid=1063 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:06:46.786227 kernel: audit: type=1300 audit(1734055597.153:90): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:06:46.786241 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:06:46.786251 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:06:46.786263 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:06:46.786275 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 02:06:46.786285 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 02:06:46.786293 kernel: audit: type=1334 audit(1734055606.182:92): prog-id=12 op=LOAD Dec 13 02:06:46.786306 kernel: audit: type=1334 audit(1734055606.182:93): prog-id=3 op=UNLOAD Dec 13 02:06:46.786318 kernel: audit: type=1334 audit(1734055606.192:94): prog-id=13 op=LOAD Dec 13 02:06:46.786331 kernel: audit: type=1334 audit(1734055606.202:95): prog-id=14 op=LOAD Dec 13 02:06:46.786340 kernel: audit: type=1334 audit(1734055606.202:96): prog-id=4 op=UNLOAD Dec 13 02:06:46.786351 kernel: audit: type=1334 audit(1734055606.202:97): prog-id=5 op=UNLOAD Dec 13 02:06:46.786363 kernel: audit: type=1131 audit(1734055606.208:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.786372 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:06:46.786383 systemd[1]: Stopped initrd-switch-root.service. Dec 13 02:06:46.786394 kernel: audit: type=1334 audit(1734055606.237:99): prog-id=12 op=UNLOAD Dec 13 02:06:46.786405 kernel: audit: type=1130 audit(1734055606.250:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.786414 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:06:46.786424 kernel: audit: type=1131 audit(1734055606.250:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.786487 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:06:46.786501 systemd[1]: Created slice system-addon\x2drun.slice. 
Dec 13 02:06:46.786511 systemd[1]: Created slice system-getty.slice. Dec 13 02:06:46.786520 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:06:46.786535 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:06:46.786548 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:06:46.786558 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:06:46.786573 systemd[1]: Created slice user.slice. Dec 13 02:06:46.786593 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:06:46.786614 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:06:46.786633 systemd[1]: Set up automount boot.automount. Dec 13 02:06:46.786654 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:06:46.786672 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 02:06:46.786695 systemd[1]: Stopped target initrd-fs.target. Dec 13 02:06:46.786716 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 02:06:46.786736 systemd[1]: Reached target integritysetup.target. Dec 13 02:06:46.786757 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:06:46.786778 systemd[1]: Reached target remote-fs.target. Dec 13 02:06:46.786796 systemd[1]: Reached target slices.target. Dec 13 02:06:46.786818 systemd[1]: Reached target swap.target. Dec 13 02:06:46.786836 systemd[1]: Reached target torcx.target. Dec 13 02:06:46.786862 systemd[1]: Reached target veritysetup.target. Dec 13 02:06:46.786880 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:06:46.786901 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:06:46.786920 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:06:46.786939 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:06:46.786963 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:06:46.786981 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:06:46.786999 systemd[1]: Mounting dev-hugepages.mount... 
Dec 13 02:06:46.787018 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:06:46.787036 systemd[1]: Mounting media.mount... Dec 13 02:06:46.787056 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:06:46.787074 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:06:46.787096 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:06:46.787115 systemd[1]: Mounting tmp.mount... Dec 13 02:06:46.787140 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:06:46.787161 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:06:46.787177 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:06:46.787196 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:06:46.787213 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:06:46.787233 systemd[1]: Starting modprobe@drm.service... Dec 13 02:06:46.787251 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:06:46.787269 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:06:46.787290 systemd[1]: Starting modprobe@loop.service... Dec 13 02:06:46.787316 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:06:46.787335 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:06:46.787357 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 02:06:46.787377 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:06:46.787394 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:06:46.787412 systemd[1]: Stopped systemd-journald.service. Dec 13 02:06:46.787432 systemd[1]: Starting systemd-journald.service... Dec 13 02:06:46.787458 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:06:46.787491 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:06:46.787506 systemd[1]: Starting systemd-remount-fs.service... 
Dec 13 02:06:46.787519 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:06:46.787532 kernel: loop: module loaded Dec 13 02:06:46.787546 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:06:46.787561 systemd[1]: Stopped verity-setup.service. Dec 13 02:06:46.787571 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:06:46.789943 kernel: fuse: init (API version 7.34) Dec 13 02:06:46.789972 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:06:46.789993 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:06:46.790008 systemd[1]: Mounted media.mount. Dec 13 02:06:46.790024 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:06:46.790039 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:06:46.790055 systemd[1]: Mounted tmp.mount. Dec 13 02:06:46.790073 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:06:46.790088 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:06:46.790115 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:06:46.790131 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:06:46.790154 systemd-journald[1160]: Journal started Dec 13 02:06:46.790234 systemd-journald[1160]: Runtime Journal (/run/log/journal/a189833577294de8bde471a5e1d6944c) is 8.0M, max 159.0M, 151.0M free. Dec 13 02:06:46.790289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 02:06:34.731000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:06:35.449000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:06:35.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:06:35.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:06:35.480000 audit: BPF prog-id=10 op=LOAD Dec 13 02:06:35.480000 audit: BPF prog-id=10 op=UNLOAD Dec 13 02:06:35.493000 audit: BPF prog-id=11 op=LOAD Dec 13 02:06:35.493000 audit: BPF prog-id=11 op=UNLOAD Dec 13 02:06:37.153000 audit[1063]: AVC avc: denied { associate } for pid=1063 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:06:37.153000 audit[1063]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:06:37.153000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:06:37.162000 audit[1063]: AVC 
avc: denied { associate } for pid=1063 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 02:06:37.162000 audit[1063]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1046 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:06:37.162000 audit: CWD cwd="/" Dec 13 02:06:37.162000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:06:37.162000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:06:37.162000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:06:46.182000 audit: BPF prog-id=12 op=LOAD Dec 13 02:06:46.182000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:06:46.192000 audit: BPF prog-id=13 op=LOAD Dec 13 02:06:46.202000 audit: BPF prog-id=14 op=LOAD Dec 13 02:06:46.202000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:06:46.202000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:06:46.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:46.237000 audit: BPF prog-id=12 op=UNLOAD Dec 13 02:06:46.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.658000 audit: BPF prog-id=15 op=LOAD Dec 13 02:06:46.659000 audit: BPF prog-id=16 op=LOAD Dec 13 02:06:46.659000 audit: BPF prog-id=17 op=LOAD Dec 13 02:06:46.659000 audit: BPF prog-id=13 op=UNLOAD Dec 13 02:06:46.659000 audit: BPF prog-id=14 op=UNLOAD Dec 13 02:06:46.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:46.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.782000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:06:46.782000 audit[1160]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc84996260 a2=4000 a3=7ffc849962fc items=0 ppid=1 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:06:46.782000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:06:46.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:37.113028 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:06:46.180646 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:06:37.117652 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:06:46.208911 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:06:37.117686 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:06:37.117728 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 02:06:37.117742 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 02:06:37.117804 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 02:06:37.117820 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 02:06:37.118045 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 02:06:37.118099 
/usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:06:37.118116 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:06:37.141296 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 02:06:37.141338 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 02:06:37.141358 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 02:06:37.141371 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 02:06:37.141391 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 02:06:37.141404 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 02:06:45.188794 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:45Z" level=debug 
msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:06:45.189027 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:45Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:06:45.189124 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:45Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:06:45.189291 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:45Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:06:45.189337 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:45Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 02:06:45.189390 /usr/lib/systemd/system-generators/torcx-generator[1063]: time="2024-12-13T02:06:45Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 02:06:46.795459 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 02:06:46.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.806517 systemd[1]: Started systemd-journald.service. Dec 13 02:06:46.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.807583 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:06:46.807742 systemd[1]: Finished modprobe@drm.service. Dec 13 02:06:46.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.810462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:06:46.810615 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:06:46.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:46.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.813464 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:06:46.813607 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:06:46.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.816192 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:06:46.816330 systemd[1]: Finished modprobe@loop.service. Dec 13 02:06:46.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.819111 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:06:46.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:06:46.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.822298 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:06:46.829650 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:06:46.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.832943 systemd[1]: Reached target network-pre.target. Dec 13 02:06:46.836923 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:06:46.845126 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:06:46.847328 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:06:46.860888 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:06:46.865806 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:06:46.868571 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:06:46.870549 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:06:46.873194 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:06:46.875111 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:06:46.878994 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:06:46.886007 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:06:46.888970 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:06:46.899386 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 02:06:46.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.903294 systemd-journald[1160]: Time spent on flushing to /var/log/journal/a189833577294de8bde471a5e1d6944c is 17.183ms for 1156 entries. Dec 13 02:06:46.903294 systemd-journald[1160]: System Journal (/var/log/journal/a189833577294de8bde471a5e1d6944c) is 8.0M, max 2.6G, 2.6G free. Dec 13 02:06:46.976225 systemd-journald[1160]: Received client request to flush runtime journal. Dec 13 02:06:46.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.909037 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:06:46.977424 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 02:06:46.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:46.926800 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:06:46.929416 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:06:46.972661 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:06:46.977597 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:06:47.400799 systemd[1]: Finished systemd-sysusers.service. 
Dec 13 02:06:47.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:47.405712 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:06:47.750085 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:06:47.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:48.036788 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:06:48.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:48.040000 audit: BPF prog-id=18 op=LOAD Dec 13 02:06:48.040000 audit: BPF prog-id=19 op=LOAD Dec 13 02:06:48.040000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:06:48.040000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:06:48.041744 systemd[1]: Starting systemd-udevd.service... Dec 13 02:06:48.060653 systemd-udevd[1191]: Using default interface naming scheme 'v252'. Dec 13 02:06:48.224120 systemd[1]: Started systemd-udevd.service. Dec 13 02:06:48.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:06:48.227000 audit: BPF prog-id=20 op=LOAD Dec 13 02:06:48.229613 systemd[1]: Starting systemd-networkd.service... Dec 13 02:06:48.276316 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Dec 13 02:06:48.345362 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 02:06:48.345507 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 02:06:48.338000 audit[1204]: AVC avc: denied { confidentiality } for pid=1204 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:06:48.352000 audit: BPF prog-id=21 op=LOAD
Dec 13 02:06:48.353000 audit: BPF prog-id=22 op=LOAD
Dec 13 02:06:48.353000 audit: BPF prog-id=23 op=LOAD
Dec 13 02:06:48.354586 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:06:48.368791 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 02:06:48.369471 kernel: hv_vmbus: registering driver hv_utils
Dec 13 02:06:48.386460 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 02:06:48.408727 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 02:06:48.408871 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 02:06:48.408901 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 02:06:48.338000 audit[1204]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562381ad8540 a1=f884 a2=7f9bfb04ebc5 a3=5 items=12 ppid=1191 pid=1204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:06:48.338000 audit: CWD cwd="/"
Dec 13 02:06:48.338000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=1 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=2 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=3 name=(null) inode=14676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=4 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=5 name=(null) inode=14677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=6 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=7 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=8 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=9 name=(null) inode=14679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=10 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PATH item=11 name=(null) inode=14680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:06:48.338000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 02:06:49.508074 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 02:06:49.517896 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 02:06:49.518239 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 02:06:49.524174 kernel: Console: switching to colour dummy device 80x25
Dec 13 02:06:49.525259 systemd[1]: Started systemd-userdbd.service.
Dec 13 02:06:49.526019 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 02:06:49.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:49.692028 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1198)
Dec 13 02:06:49.775074 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:06:49.812025 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Dec 13 02:06:49.853027 systemd-networkd[1197]: lo: Link UP
Dec 13 02:06:49.853039 systemd-networkd[1197]: lo: Gained carrier
Dec 13 02:06:49.853635 systemd-networkd[1197]: Enumeration completed
Dec 13 02:06:49.853769 systemd[1]: Started systemd-networkd.service.
Dec 13 02:06:49.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:49.858335 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 02:06:49.875674 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:06:49.889398 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 02:06:49.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:49.893680 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 02:06:49.937036 kernel: mlx5_core d27d:00:02.0 enP53885s1: Link up
Dec 13 02:06:49.958852 kernel: hv_netvsc 7c1e5234-0720-7c1e-5234-07207c1e5234 eth0: Data path switched to VF: enP53885s1
Dec 13 02:06:49.958371 systemd-networkd[1197]: enP53885s1: Link UP
Dec 13 02:06:49.958531 systemd-networkd[1197]: eth0: Link UP
Dec 13 02:06:49.958537 systemd-networkd[1197]: eth0: Gained carrier
Dec 13 02:06:49.963312 systemd-networkd[1197]: enP53885s1: Gained carrier
Dec 13 02:06:50.000182 systemd-networkd[1197]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 02:06:50.166848 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:06:50.216175 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 02:06:50.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.219579 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:06:50.223450 systemd[1]: Starting lvm2-activation.service...
Dec 13 02:06:50.230181 lvm[1271]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:06:50.254217 systemd[1]: Finished lvm2-activation.service.
Dec 13 02:06:50.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.256957 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:06:50.259266 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:06:50.259300 systemd[1]: Reached target local-fs.target.
Dec 13 02:06:50.261443 systemd[1]: Reached target machines.target.
Dec 13 02:06:50.265200 systemd[1]: Starting ldconfig.service...
Dec 13 02:06:50.267685 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:06:50.267792 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:06:50.269076 systemd[1]: Starting systemd-boot-update.service...
Dec 13 02:06:50.272811 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 02:06:50.277087 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 02:06:50.281432 systemd[1]: Starting systemd-sysext.service...
Dec 13 02:06:50.371419 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1273 (bootctl)
Dec 13 02:06:50.373185 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 02:06:50.409744 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 02:06:50.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.517448 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 02:06:50.780611 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 02:06:50.780960 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 02:06:50.797022 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 02:06:50.827028 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 02:06:50.842028 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 02:06:50.847617 (sd-sysext)[1285]: Using extensions 'kubernetes'.
Dec 13 02:06:50.848928 (sd-sysext)[1285]: Merged extensions into '/usr'.
Dec 13 02:06:50.866315 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:06:50.868110 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 02:06:50.868543 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:06:50.872653 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:06:50.875159 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:06:50.880230 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:06:50.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.881058 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:06:50.881189 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:06:50.881304 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:06:50.882781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:06:50.884376 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:06:50.886822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:06:50.887621 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:06:50.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.890417 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 02:06:50.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:50.890833 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:06:50.890946 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:06:50.892377 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:06:50.892484 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:06:50.895108 systemd[1]: Finished systemd-sysext.service.
Dec 13 02:06:50.899373 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:06:50.903370 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 02:06:50.912943 systemd[1]: Reloading.
Dec 13 02:06:50.930521 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 02:06:50.954358 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 02:06:50.967202 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 02:06:50.992991 /usr/lib/systemd/system-generators/torcx-generator[1311]: time="2024-12-13T02:06:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:06:50.993055 /usr/lib/systemd/system-generators/torcx-generator[1311]: time="2024-12-13T02:06:50Z" level=info msg="torcx already run"
Dec 13 02:06:51.113263 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:06:51.113287 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:06:51.131594 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:06:51.196757 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:06:51.197000 audit: BPF prog-id=24 op=LOAD
Dec 13 02:06:51.198000 audit: BPF prog-id=25 op=LOAD
Dec 13 02:06:51.198000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 02:06:51.198000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 02:06:51.199000 audit: BPF prog-id=26 op=LOAD
Dec 13 02:06:51.199000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 02:06:51.201000 audit: BPF prog-id=27 op=LOAD
Dec 13 02:06:51.201000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 02:06:51.201000 audit: BPF prog-id=28 op=LOAD
Dec 13 02:06:51.201000 audit: BPF prog-id=29 op=LOAD
Dec 13 02:06:51.201000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 02:06:51.201000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 02:06:51.202000 audit: BPF prog-id=30 op=LOAD
Dec 13 02:06:51.202000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 02:06:51.202000 audit: BPF prog-id=31 op=LOAD
Dec 13 02:06:51.202000 audit: BPF prog-id=32 op=LOAD
Dec 13 02:06:51.202000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 02:06:51.202000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 02:06:51.208719 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 02:06:51.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.222362 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:06:51.222693 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:06:51.224308 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:06:51.228352 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:06:51.232268 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:06:51.234416 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:06:51.234632 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:06:51.234811 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:06:51.235894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:06:51.236127 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:06:51.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.239384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:06:51.239536 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:06:51.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.242831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:06:51.242982 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:06:51.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.247341 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:06:51.247715 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:06:51.249489 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:06:51.253309 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:06:51.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.257266 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:06:51.257445 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:06:51.257596 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:06:51.257753 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:06:51.258725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:06:51.258883 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:06:51.259457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:06:51.259579 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:06:51.260336 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:06:51.264081 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:06:51.264476 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:06:51.266288 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:06:51.269263 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:06:51.272464 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:06:51.272722 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:06:51.273835 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:06:51.274073 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:06:51.275157 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:06:51.275536 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:06:51.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.276196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:06:51.276355 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:06:51.277605 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:06:51.277761 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:06:51.278499 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:06:51.279841 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:06:51.283788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:06:51.284274 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:06:51.284593 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:06:51.434966 systemd-fsck[1281]: fsck.fat 4.2 (2021-01-31)
Dec 13 02:06:51.434966 systemd-fsck[1281]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 02:06:51.436749 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 02:06:51.446274 systemd[1]: Mounting boot.mount...
Dec 13 02:06:51.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.461562 systemd[1]: Mounted boot.mount.
Dec 13 02:06:51.493304 systemd-networkd[1197]: eth0: Gained IPv6LL
Dec 13 02:06:51.498992 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 02:06:51.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.502662 systemd[1]: Finished systemd-boot-update.service.
Dec 13 02:06:51.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.992345 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 02:06:51.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:51.996538 systemd[1]: Starting audit-rules.service...
Dec 13 02:06:51.999996 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 02:06:52.003732 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 02:06:52.006000 audit: BPF prog-id=33 op=LOAD
Dec 13 02:06:52.008892 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:06:52.011000 audit: BPF prog-id=34 op=LOAD
Dec 13 02:06:52.013560 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 02:06:52.018857 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 02:06:52.051414 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 02:06:52.051000 audit[1393]: SYSTEM_BOOT pid=1393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:52.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:52.057759 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:06:52.059350 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 02:06:52.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:52.115611 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 02:06:52.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:52.120379 systemd[1]: Started systemd-timesyncd.service.
Dec 13 02:06:52.123146 systemd[1]: Reached target time-set.target.
Dec 13 02:06:52.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:52.168322 systemd-resolved[1391]: Positive Trust Anchors:
Dec 13 02:06:52.168339 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:06:52.168378 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:06:52.243372 systemd-resolved[1391]: Using system hostname 'ci-3510.3.6-a-6288c93be1'.
Dec 13 02:06:52.245959 systemd[1]: Started systemd-resolved.service.
Dec 13 02:06:52.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:06:52.248948 systemd[1]: Reached target network.target.
Dec 13 02:06:52.252231 systemd[1]: Reached target network-online.target.
Dec 13 02:06:52.254889 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:06:52.263821 systemd-timesyncd[1392]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Dec 13 02:06:52.263892 systemd-timesyncd[1392]: Initial clock synchronization to Fri 2024-12-13 02:06:52.263803 UTC.
Dec 13 02:06:52.374038 kernel: kauditd_printk_skb: 126 callbacks suppressed
Dec 13 02:06:52.374172 kernel: audit: type=1305 audit(1734055612.364:211): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 02:06:52.364000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 02:06:52.366462 systemd[1]: Finished audit-rules.service.
Dec 13 02:06:52.374348 augenrules[1409]: No rules
Dec 13 02:06:52.364000 audit[1409]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff45c33ed0 a2=420 a3=0 items=0 ppid=1388 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:06:52.392648 kernel: audit: type=1300 audit(1734055612.364:211): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff45c33ed0 a2=420 a3=0 items=0 ppid=1388 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:06:52.392688 kernel: audit: type=1327 audit(1734055612.364:211): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 02:06:52.364000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 02:06:57.606711 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:06:57.618396 systemd[1]: Finished ldconfig.service.
Dec 13 02:06:57.622793 systemd[1]: Starting systemd-update-done.service...
Dec 13 02:06:57.649081 systemd[1]: Finished systemd-update-done.service.
Dec 13 02:06:57.652279 systemd[1]: Reached target sysinit.target.
Dec 13 02:06:57.655268 systemd[1]: Started motdgen.path.
Dec 13 02:06:57.657774 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 02:06:57.661624 systemd[1]: Started logrotate.timer.
Dec 13 02:06:57.664029 systemd[1]: Started mdadm.timer.
Dec 13 02:06:57.665825 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 02:06:57.668264 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 02:06:57.668311 systemd[1]: Reached target paths.target.
Dec 13 02:06:57.670742 systemd[1]: Reached target timers.target.
Dec 13 02:06:57.673416 systemd[1]: Listening on dbus.socket.
Dec 13 02:06:57.676400 systemd[1]: Starting docker.socket...
Dec 13 02:06:57.681902 systemd[1]: Listening on sshd.socket.
Dec 13 02:06:57.684467 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:06:57.684902 systemd[1]: Listening on docker.socket.
Dec 13 02:06:57.687292 systemd[1]: Reached target sockets.target.
Dec 13 02:06:57.690252 systemd[1]: Reached target basic.target.
Dec 13 02:06:57.692761 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:06:57.692797 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:06:57.693848 systemd[1]: Starting containerd.service...
Dec 13 02:06:57.697048 systemd[1]: Starting dbus.service...
Dec 13 02:06:57.700091 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 02:06:57.703684 systemd[1]: Starting extend-filesystems.service...
Dec 13 02:06:57.706174 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 02:06:57.726344 systemd[1]: Starting kubelet.service...
Dec 13 02:06:57.730062 systemd[1]: Starting motdgen.service...
Dec 13 02:06:57.733182 systemd[1]: Started nvidia.service.
Dec 13 02:06:57.736602 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 02:06:57.740029 systemd[1]: Starting sshd-keygen.service...
Dec 13 02:06:57.744582 systemd[1]: Starting systemd-logind.service...
Dec 13 02:06:57.746692 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:06:57.746827 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 02:06:57.747414 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 02:06:57.748717 systemd[1]: Starting update-engine.service...
Dec 13 02:06:57.752462 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 02:06:57.771271 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 02:06:57.771522 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 02:06:57.786180 jq[1419]: false
Dec 13 02:06:57.786388 jq[1430]: true
Dec 13 02:06:57.777563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 02:06:57.777790 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 02:06:57.805995 jq[1438]: true
Dec 13 02:06:57.812629 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 02:06:57.812832 systemd[1]: Finished motdgen.service.
Dec 13 02:06:57.833991 extend-filesystems[1420]: Found loop1
Dec 13 02:06:57.833991 extend-filesystems[1420]: Found sda
Dec 13 02:06:57.833991 extend-filesystems[1420]: Found sda1
Dec 13 02:06:57.833991 extend-filesystems[1420]: Found sda2
Dec 13 02:06:57.833991 extend-filesystems[1420]: Found sda3
Dec 13 02:06:57.833991 extend-filesystems[1420]: Found usr
Dec 13 02:06:57.833991 extend-filesystems[1420]: Found sda4
Dec 13 02:06:57.833991 extend-filesystems[1420]: Found sda6
Dec 13 02:06:57.856781 extend-filesystems[1420]: Found sda7
Dec 13 02:06:57.856781 extend-filesystems[1420]: Found sda9
Dec 13 02:06:57.856781 extend-filesystems[1420]: Checking size of /dev/sda9
Dec 13 02:06:57.884589 env[1440]: time="2024-12-13T02:06:57.884537303Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 02:06:57.920639 extend-filesystems[1420]: Old size kept for /dev/sda9
Dec 13 02:06:57.935246 extend-filesystems[1420]: Found sr0
Dec 13 02:06:57.923340 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 02:06:57.923488 systemd[1]: Finished extend-filesystems.service.
Dec 13 02:06:57.948235 env[1440]: time="2024-12-13T02:06:57.948191548Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 02:06:57.948415 env[1440]: time="2024-12-13T02:06:57.948389849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950147555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950175955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950416856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950433456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950444957Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950454357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950525357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950733358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950874458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:06:57.951062 env[1440]: time="2024-12-13T02:06:57.950888758Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 02:06:57.951320 env[1440]: time="2024-12-13T02:06:57.950936558Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 02:06:57.951320 env[1440]: time="2024-12-13T02:06:57.950946758Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 02:06:57.957986 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 02:06:57.959626 systemd-logind[1428]: New seat seat0.
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.984889089Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.984952389Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.984972589Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985054990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985078290Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985103490Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985122890Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985141490Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985159890Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985181790Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985200090Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985218390Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985337791Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 02:06:57.987133 env[1440]: time="2024-12-13T02:06:57.985416991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985720992Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985761592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985783892Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985841493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985858793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985877193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985893493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985935893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985954093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985973293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.985989693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.986022393Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.986151894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.986176694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.990687 env[1440]: time="2024-12-13T02:06:57.986196494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.991228 env[1440]: time="2024-12-13T02:06:57.986212694Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 02:06:57.991228 env[1440]: time="2024-12-13T02:06:57.986233594Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 02:06:57.991228 env[1440]: time="2024-12-13T02:06:57.986248494Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 02:06:57.991228 env[1440]: time="2024-12-13T02:06:57.986291994Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 02:06:57.991228 env[1440]: time="2024-12-13T02:06:57.986335794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 02:06:57.991418 env[1440]: time="2024-12-13T02:06:57.986611596Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 02:06:57.991418 env[1440]: time="2024-12-13T02:06:57.986686596Z" level=info msg="Connect containerd service"
Dec 13 02:06:57.991418 env[1440]: time="2024-12-13T02:06:57.986730196Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:57.995254029Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:57.995549530Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:57.995595030Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:57.995932031Z" level=info msg="containerd successfully booted in 0.112439s"
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:58.002620357Z" level=info msg="Start subscribing containerd event"
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:58.002713057Z" level=info msg="Start recovering state"
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:58.002814357Z" level=info msg="Start event monitor"
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:58.002838057Z" level=info msg="Start snapshots syncer"
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:58.002891658Z" level=info msg="Start cni network conf syncer for default"
Dec 13 02:06:58.061437 env[1440]: time="2024-12-13T02:06:58.002902758Z" level=info msg="Start streaming server"
Dec 13 02:06:58.020544 dbus-daemon[1418]: [system] SELinux support is enabled
Dec 13 02:06:58.062105 bash[1468]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:06:57.995729 systemd[1]: Started containerd.service.
Dec 13 02:06:58.028230 dbus-daemon[1418]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 02:06:58.015847 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 02:06:58.021187 systemd[1]: Started dbus.service.
Dec 13 02:06:58.027656 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 02:06:58.027682 systemd[1]: Reached target system-config.target.
Dec 13 02:06:58.030606 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 02:06:58.030626 systemd[1]: Reached target user-config.target.
Dec 13 02:06:58.033360 systemd[1]: Started systemd-logind.service.
Dec 13 02:06:58.102439 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 02:06:58.946988 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 02:06:58.973703 systemd[1]: Finished sshd-keygen.service.
Dec 13 02:06:58.978739 systemd[1]: Starting issuegen.service...
Dec 13 02:06:58.982948 systemd[1]: Started waagent.service.
Dec 13 02:06:58.992401 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 02:06:58.992596 systemd[1]: Finished issuegen.service.
Dec 13 02:06:58.996815 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 02:06:59.006827 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 02:06:59.011642 systemd[1]: Started getty@tty1.service.
Dec 13 02:06:59.016186 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 02:06:59.019616 systemd[1]: Reached target getty.target.
Dec 13 02:06:59.037951 update_engine[1429]: I1213 02:06:59.037583 1429 main.cc:92] Flatcar Update Engine starting
Dec 13 02:06:59.066346 systemd[1]: Started kubelet.service.
Dec 13 02:06:59.096361 systemd[1]: Started update-engine.service.
Dec 13 02:06:59.098349 update_engine[1429]: I1213 02:06:59.098191 1429 update_check_scheduler.cc:74] Next update check in 10m43s
Dec 13 02:06:59.101845 systemd[1]: Started locksmithd.service.
Dec 13 02:06:59.104776 systemd[1]: Reached target multi-user.target.
Dec 13 02:06:59.109479 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 02:06:59.125906 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 02:06:59.126136 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 02:06:59.129498 systemd[1]: Startup finished in 825ms (firmware) + 25.753s (loader) + 1.017s (kernel) + 15.523s (initrd) + 23.795s (userspace) = 1min 6.916s.
Dec 13 02:06:59.434149 login[1528]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 02:06:59.437489 login[1529]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 02:06:59.469424 systemd[1]: Created slice user-500.slice.
Dec 13 02:06:59.471562 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 02:06:59.487044 systemd-logind[1428]: New session 2 of user core.
Dec 13 02:06:59.492080 systemd-logind[1428]: New session 1 of user core.
Dec 13 02:06:59.498465 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 02:06:59.500361 systemd[1]: Starting user@500.service...
Dec 13 02:06:59.516456 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:06:59.724669 kubelet[1532]: E1213 02:06:59.724559 1532 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:06:59.727016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:06:59.727180 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:06:59.727511 systemd[1]: kubelet.service: Consumed 1.006s CPU time.
Dec 13 02:06:59.753963 systemd[1544]: Queued start job for default target default.target.
Dec 13 02:06:59.754627 systemd[1544]: Reached target paths.target.
Dec 13 02:06:59.754659 systemd[1544]: Reached target sockets.target.
Dec 13 02:06:59.754678 systemd[1544]: Reached target timers.target.
Dec 13 02:06:59.754696 systemd[1544]: Reached target basic.target.
Dec 13 02:06:59.754758 systemd[1544]: Reached target default.target.
Dec 13 02:06:59.754798 systemd[1544]: Startup finished in 229ms.
Dec 13 02:06:59.754909 systemd[1]: Started user@500.service.
Dec 13 02:06:59.756396 systemd[1]: Started session-1.scope.
Dec 13 02:06:59.757324 systemd[1]: Started session-2.scope.
Dec 13 02:07:00.199596 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 02:07:04.977202 waagent[1523]: 2024-12-13T02:07:04.977061Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Dec 13 02:07:04.995164 waagent[1523]: 2024-12-13T02:07:04.989323Z INFO Daemon Daemon OS: flatcar 3510.3.6
Dec 13 02:07:04.995164 waagent[1523]: 2024-12-13T02:07:04.992247Z INFO Daemon Daemon Python: 3.9.16
Dec 13 02:07:04.998502 waagent[1523]: 2024-12-13T02:07:04.998403Z INFO Daemon Daemon Run daemon
Dec 13 02:07:05.008470 waagent[1523]: 2024-12-13T02:07:05.001169Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6'
Dec 13 02:07:05.012236 waagent[1523]: 2024-12-13T02:07:05.012102Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.014606Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.015579Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.016526Z INFO Daemon Daemon Using waagent for provisioning
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.018049Z INFO Daemon Daemon Activate resource disk
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.018933Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.026846Z INFO Daemon Daemon Found device: None
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.027938Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.029052Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.031081Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 02:07:05.050518 waagent[1523]: 2024-12-13T02:07:05.032141Z INFO Daemon Daemon Running default provisioning handler
Dec 13 02:07:05.053443 waagent[1523]: 2024-12-13T02:07:05.053299Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 02:07:05.061864 waagent[1523]: 2024-12-13T02:07:05.061694Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 02:07:05.067132 waagent[1523]: 2024-12-13T02:07:05.067037Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 02:07:05.070113 waagent[1523]: 2024-12-13T02:07:05.070037Z INFO Daemon Daemon Copying ovf-env.xml
Dec 13 02:07:05.120290 waagent[1523]: 2024-12-13T02:07:05.117629Z INFO Daemon Daemon Successfully mounted dvd
Dec 13 02:07:05.164213 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 13 02:07:05.211133 waagent[1523]: 2024-12-13T02:07:05.210880Z INFO Daemon Daemon Detect protocol endpoint
Dec 13 02:07:05.214768 waagent[1523]: 2024-12-13T02:07:05.214669Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 02:07:05.220934 waagent[1523]: 2024-12-13T02:07:05.220833Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 02:07:05.225342 waagent[1523]: 2024-12-13T02:07:05.225250Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 13 02:07:05.232863 waagent[1523]: 2024-12-13T02:07:05.232779Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 13 02:07:05.236399 waagent[1523]: 2024-12-13T02:07:05.236297Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 13 02:07:05.350833 waagent[1523]: 2024-12-13T02:07:05.350664Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 13 02:07:05.355282 waagent[1523]: 2024-12-13T02:07:05.355228Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 13 02:07:05.362782 waagent[1523]: 2024-12-13T02:07:05.356245Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 13 02:07:05.717089 waagent[1523]: 2024-12-13T02:07:05.716868Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 13 02:07:05.729464 waagent[1523]: 2024-12-13T02:07:05.729377Z INFO Daemon Daemon Forcing an update of the goal state..
Dec 13 02:07:05.733017 waagent[1523]: 2024-12-13T02:07:05.732922Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Dec 13 02:07:05.815526 waagent[1523]: 2024-12-13T02:07:05.815390Z INFO Daemon Daemon Found private key matching thumbprint 05D1AD4B202A11D7CB7D9ACB27015625EF54E233
Dec 13 02:07:05.826791 waagent[1523]: 2024-12-13T02:07:05.815940Z INFO Daemon Daemon Certificate with thumbprint E22B76AD027E0E3BE44E039400B654BF5326A34B has no matching private key.
Dec 13 02:07:05.826791 waagent[1523]: 2024-12-13T02:07:05.817302Z INFO Daemon Daemon Fetch goal state completed
Dec 13 02:07:05.868695 waagent[1523]: 2024-12-13T02:07:05.868603Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 1fb42117-c8f4-4c5f-b417-c7971fc0baed New eTag: 8393678883140511659]
Dec 13 02:07:05.878370 waagent[1523]: 2024-12-13T02:07:05.869611Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 02:07:05.885872 waagent[1523]: 2024-12-13T02:07:05.885799Z INFO Daemon Daemon Starting provisioning
Dec 13 02:07:05.893546 waagent[1523]: 2024-12-13T02:07:05.886258Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 13 02:07:05.893546 waagent[1523]: 2024-12-13T02:07:05.887317Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-6288c93be1]
Dec 13 02:07:05.909487 waagent[1523]: 2024-12-13T02:07:05.909324Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-6288c93be1]
Dec 13 02:07:05.918251 waagent[1523]: 2024-12-13T02:07:05.910262Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 13 02:07:05.918251 waagent[1523]: 2024-12-13T02:07:05.911866Z INFO Daemon Daemon Primary interface is [eth0]
Dec 13 02:07:05.926921 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Dec 13 02:07:05.927203 systemd[1]: Stopped systemd-networkd-wait-online.service.
Dec 13 02:07:05.927281 systemd[1]: Stopping systemd-networkd-wait-online.service...
Dec 13 02:07:05.927630 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:07:05.933050 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Dec 13 02:07:05.934404 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:07:05.934567 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:07:05.936848 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:07:05.968782 systemd-networkd[1585]: enP53885s1: Link UP
Dec 13 02:07:05.968792 systemd-networkd[1585]: enP53885s1: Gained carrier
Dec 13 02:07:05.970361 systemd-networkd[1585]: eth0: Link UP
Dec 13 02:07:05.970370 systemd-networkd[1585]: eth0: Gained carrier
Dec 13 02:07:05.970795 systemd-networkd[1585]: lo: Link UP
Dec 13 02:07:05.970805 systemd-networkd[1585]: lo: Gained carrier
Dec 13 02:07:05.971134 systemd-networkd[1585]: eth0: Gained IPv6LL
Dec 13 02:07:05.971413 systemd-networkd[1585]: Enumeration completed
Dec 13 02:07:05.976808 waagent[1523]: 2024-12-13T02:07:05.972865Z INFO Daemon Daemon Create user account if not exists
Dec 13 02:07:05.976808 waagent[1523]: 2024-12-13T02:07:05.973580Z INFO Daemon Daemon User core already exists, skip useradd
Dec 13 02:07:05.976808 waagent[1523]: 2024-12-13T02:07:05.974579Z INFO Daemon Daemon Configure sudoer
Dec 13 02:07:05.971546 systemd[1]: Started systemd-networkd.service.
Dec 13 02:07:05.977246 waagent[1523]: 2024-12-13T02:07:05.977187Z INFO Daemon Daemon Configure sshd
Dec 13 02:07:05.978120 waagent[1523]: 2024-12-13T02:07:05.978069Z INFO Daemon Daemon Deploy ssh public key.
Dec 13 02:07:05.987506 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 02:07:05.994695 systemd-networkd[1585]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:07:06.025109 systemd-networkd[1585]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 02:07:06.028961 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 02:07:06.044258 waagent[1523]: 2024-12-13T02:07:06.044136Z INFO Daemon Daemon Decode custom data
Dec 13 02:07:06.047623 waagent[1523]: 2024-12-13T02:07:06.047538Z INFO Daemon Daemon Save custom data
Dec 13 02:07:07.109328 waagent[1523]: 2024-12-13T02:07:07.109218Z INFO Daemon Daemon Provisioning complete
Dec 13 02:07:07.125909 waagent[1523]: 2024-12-13T02:07:07.125704Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 13 02:07:07.140205 waagent[1523]: 2024-12-13T02:07:07.127235Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 13 02:07:07.140205 waagent[1523]: 2024-12-13T02:07:07.135843Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Dec 13 02:07:07.436292 waagent[1594]: 2024-12-13T02:07:07.436118Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Dec 13 02:07:07.437098 waagent[1594]: 2024-12-13T02:07:07.437031Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 02:07:07.437257 waagent[1594]: 2024-12-13T02:07:07.437204Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 02:07:07.449318 waagent[1594]: 2024-12-13T02:07:07.449228Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Dec 13 02:07:07.449508 waagent[1594]: 2024-12-13T02:07:07.449451Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Dec 13 02:07:07.513703 waagent[1594]: 2024-12-13T02:07:07.513573Z INFO ExtHandler ExtHandler Found private key matching thumbprint 05D1AD4B202A11D7CB7D9ACB27015625EF54E233
Dec 13 02:07:07.513942 waagent[1594]: 2024-12-13T02:07:07.513881Z INFO ExtHandler ExtHandler Certificate with thumbprint E22B76AD027E0E3BE44E039400B654BF5326A34B has no matching private key.
Dec 13 02:07:07.514205 waagent[1594]: 2024-12-13T02:07:07.514153Z INFO ExtHandler ExtHandler Fetch goal state completed
Dec 13 02:07:07.528472 waagent[1594]: 2024-12-13T02:07:07.528406Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: fe5bd2ad-490d-4af4-8f46-61a5acb5d7f8 New eTag: 8393678883140511659]
Dec 13 02:07:07.529053 waagent[1594]: 2024-12-13T02:07:07.528982Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 02:07:07.688900 waagent[1594]: 2024-12-13T02:07:07.688695Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 02:07:07.712630 waagent[1594]: 2024-12-13T02:07:07.712533Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1594
Dec 13 02:07:07.716116 waagent[1594]: 2024-12-13T02:07:07.716047Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 02:07:07.717383 waagent[1594]: 2024-12-13T02:07:07.717319Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 02:07:07.774410 waagent[1594]: 2024-12-13T02:07:07.774337Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 02:07:07.774821 waagent[1594]: 2024-12-13T02:07:07.774760Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 02:07:07.783277 waagent[1594]: 2024-12-13T02:07:07.783214Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 02:07:07.783801 waagent[1594]: 2024-12-13T02:07:07.783732Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 02:07:07.784912 waagent[1594]: 2024-12-13T02:07:07.784843Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Dec 13 02:07:07.786227 waagent[1594]: 2024-12-13T02:07:07.786165Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 02:07:07.786648 waagent[1594]: 2024-12-13T02:07:07.786590Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 02:07:07.786802 waagent[1594]: 2024-12-13T02:07:07.786754Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 02:07:07.787358 waagent[1594]: 2024-12-13T02:07:07.787298Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 02:07:07.787661 waagent[1594]: 2024-12-13T02:07:07.787603Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 02:07:07.787661 waagent[1594]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 02:07:07.787661 waagent[1594]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 02:07:07.787661 waagent[1594]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 02:07:07.787661 waagent[1594]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 02:07:07.787661 waagent[1594]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 02:07:07.787661 waagent[1594]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 02:07:07.790884 waagent[1594]: 2024-12-13T02:07:07.790676Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 02:07:07.791722 waagent[1594]: 2024-12-13T02:07:07.791663Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 02:07:07.791897 waagent[1594]: 2024-12-13T02:07:07.791832Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 02:07:07.792312 waagent[1594]: 2024-12-13T02:07:07.792249Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 02:07:07.793197 waagent[1594]: 2024-12-13T02:07:07.793133Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 02:07:07.793798 waagent[1594]: 2024-12-13T02:07:07.793733Z INFO EnvHandler ExtHandler Configure routes
Dec 13 02:07:07.793945 waagent[1594]: 2024-12-13T02:07:07.793897Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 02:07:07.794097 waagent[1594]: 2024-12-13T02:07:07.794051Z INFO EnvHandler ExtHandler Routes:None
Dec 13 02:07:07.795121 waagent[1594]: 2024-12-13T02:07:07.795060Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 02:07:07.795291 waagent[1594]: 2024-12-13T02:07:07.795238Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 02:07:07.795564 waagent[1594]: 2024-12-13T02:07:07.795505Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 02:07:07.806617 waagent[1594]: 2024-12-13T02:07:07.806558Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Dec 13 02:07:07.807327 waagent[1594]: 2024-12-13T02:07:07.807284Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 02:07:07.808224 waagent[1594]: 2024-12-13T02:07:07.808171Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Dec 13 02:07:07.848590 waagent[1594]: 2024-12-13T02:07:07.848443Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1585'
Dec 13 02:07:07.860427 waagent[1594]: 2024-12-13T02:07:07.860336Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Dec 13 02:07:07.926606 waagent[1594]: 2024-12-13T02:07:07.926451Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 02:07:07.926606 waagent[1594]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 02:07:07.926606 waagent[1594]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 02:07:07.926606 waagent[1594]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:07:20 brd ff:ff:ff:ff:ff:ff
Dec 13 02:07:07.926606 waagent[1594]: 3: enP53885s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:07:20 brd ff:ff:ff:ff:ff:ff\ altname enP53885p0s2
Dec 13 02:07:07.926606 waagent[1594]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 02:07:07.926606 waagent[1594]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 02:07:07.926606 waagent[1594]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 02:07:07.926606 waagent[1594]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 02:07:07.926606 waagent[1594]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 02:07:07.926606 waagent[1594]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:720/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 02:07:08.143761 waagent[1594]: 2024-12-13T02:07:08.143688Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Dec 13 02:07:09.140556 waagent[1523]: 2024-12-13T02:07:09.140380Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Dec 13 02:07:09.146372 waagent[1523]: 2024-12-13T02:07:09.146308Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent
Dec 13 02:07:09.732797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:07:09.733083 systemd[1]: Stopped kubelet.service.
Dec 13 02:07:09.733140 systemd[1]: kubelet.service: Consumed 1.006s CPU time.
Dec 13 02:07:09.735022 systemd[1]: Starting kubelet.service...
Dec 13 02:07:09.853531 systemd[1]: Started kubelet.service.
Dec 13 02:07:10.211128 waagent[1632]: 2024-12-13T02:07:10.210950Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Dec 13 02:07:10.211779 waagent[1632]: 2024-12-13T02:07:10.211712Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6
Dec 13 02:07:10.211923 waagent[1632]: 2024-12-13T02:07:10.211870Z INFO ExtHandler ExtHandler Python: 3.9.16
Dec 13 02:07:10.212083 waagent[1632]: 2024-12-13T02:07:10.212036Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Dec 13 02:07:10.221805 waagent[1632]: 2024-12-13T02:07:10.221692Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 02:07:10.222226 waagent[1632]: 2024-12-13T02:07:10.222167Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 02:07:10.222388 waagent[1632]: 2024-12-13T02:07:10.222341Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 02:07:10.234632 waagent[1632]: 2024-12-13T02:07:10.234551Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 02:07:10.247789 waagent[1632]: 2024-12-13T02:07:10.247712Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Dec 13 02:07:10.248823 waagent[1632]: 2024-12-13T02:07:10.248757Z INFO ExtHandler
Dec 13 02:07:10.248982 waagent[1632]: 2024-12-13T02:07:10.248925Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 816d299b-c753-4ff5-946b-7e26be567b94 eTag: 8393678883140511659 source: Fabric]
Dec 13 02:07:10.249704 waagent[1632]: 2024-12-13T02:07:10.249647Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 13 02:07:10.456558 waagent[1632]: 2024-12-13T02:07:10.456403Z INFO ExtHandler
Dec 13 02:07:10.456810 waagent[1632]: 2024-12-13T02:07:10.456740Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 13 02:07:10.465943 waagent[1632]: 2024-12-13T02:07:10.465809Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 13 02:07:10.466410 waagent[1632]: 2024-12-13T02:07:10.466355Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 02:07:10.484771 kubelet[1639]: E1213 02:07:10.484724 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:07:10.488067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:07:10.488226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:07:10.491312 waagent[1632]: 2024-12-13T02:07:10.491252Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Dec 13 02:07:10.563185 waagent[1632]: 2024-12-13T02:07:10.563057Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E22B76AD027E0E3BE44E039400B654BF5326A34B', 'hasPrivateKey': False}
Dec 13 02:07:10.564189 waagent[1632]: 2024-12-13T02:07:10.564112Z INFO ExtHandler Downloaded certificate {'thumbprint': '05D1AD4B202A11D7CB7D9ACB27015625EF54E233', 'hasPrivateKey': True}
Dec 13 02:07:10.565185 waagent[1632]: 2024-12-13T02:07:10.565122Z INFO ExtHandler Fetch goal state completed
Dec 13 02:07:10.585678 waagent[1632]: 2024-12-13T02:07:10.585568Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Dec 13 02:07:10.597551 waagent[1632]: 2024-12-13T02:07:10.597437Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1632
Dec 13 02:07:10.600665 waagent[1632]: 2024-12-13T02:07:10.600588Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 02:07:10.601707 waagent[1632]: 2024-12-13T02:07:10.601640Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Dec 13 02:07:10.602012 waagent[1632]: 2024-12-13T02:07:10.601947Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Dec 13 02:07:10.604037 waagent[1632]: 2024-12-13T02:07:10.603965Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 02:07:10.609113 waagent[1632]: 2024-12-13T02:07:10.609056Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 02:07:10.609491 waagent[1632]: 2024-12-13T02:07:10.609432Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 02:07:10.617881 waagent[1632]: 2024-12-13T02:07:10.617821Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 02:07:10.618403 waagent[1632]: 2024-12-13T02:07:10.618339Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 02:07:10.624746 waagent[1632]: 2024-12-13T02:07:10.624644Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Dec 13 02:07:10.625843 waagent[1632]: 2024-12-13T02:07:10.625774Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Dec 13 02:07:10.627427 waagent[1632]: 2024-12-13T02:07:10.627366Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 02:07:10.627869 waagent[1632]: 2024-12-13T02:07:10.627814Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 02:07:10.628093 waagent[1632]: 2024-12-13T02:07:10.627982Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 02:07:10.628658 waagent[1632]: 2024-12-13T02:07:10.628604Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 02:07:10.629113 waagent[1632]: 2024-12-13T02:07:10.629046Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 02:07:10.629758 waagent[1632]: 2024-12-13T02:07:10.629704Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 02:07:10.629955 waagent[1632]: 2024-12-13T02:07:10.629901Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 02:07:10.630057 waagent[1632]: 2024-12-13T02:07:10.629976Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 02:07:10.630057 waagent[1632]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 02:07:10.630057 waagent[1632]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 02:07:10.630057 waagent[1632]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 02:07:10.630057 waagent[1632]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 02:07:10.630057 waagent[1632]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 02:07:10.630057 waagent[1632]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 02:07:10.630602 waagent[1632]: 2024-12-13T02:07:10.630493Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 02:07:10.632980 waagent[1632]: 2024-12-13T02:07:10.632757Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 02:07:10.634408 waagent[1632]: 2024-12-13T02:07:10.634334Z INFO EnvHandler ExtHandler Configure routes
Dec 13 02:07:10.634596 waagent[1632]: 2024-12-13T02:07:10.634523Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 02:07:10.634758 waagent[1632]: 2024-12-13T02:07:10.634695Z INFO EnvHandler ExtHandler Routes:None
Dec 13 02:07:10.636706 waagent[1632]: 2024-12-13T02:07:10.636648Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 02:07:10.639233 waagent[1632]: 2024-12-13T02:07:10.639049Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 02:07:10.639432 waagent[1632]: 2024-12-13T02:07:10.639368Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 02:07:10.657750 waagent[1632]: 2024-12-13T02:07:10.657672Z INFO ExtHandler ExtHandler Downloading agent manifest
Dec 13 02:07:10.663730 waagent[1632]: 2024-12-13T02:07:10.663658Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 02:07:10.663730 waagent[1632]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 02:07:10.663730 waagent[1632]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 02:07:10.663730 waagent[1632]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:07:20 brd ff:ff:ff:ff:ff:ff
Dec 13 02:07:10.663730 waagent[1632]: 3: enP53885s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:07:20 brd ff:ff:ff:ff:ff:ff\ altname enP53885p0s2
Dec 13 02:07:10.663730 waagent[1632]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 02:07:10.663730 waagent[1632]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 02:07:10.663730 waagent[1632]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 02:07:10.663730 waagent[1632]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 02:07:10.663730 waagent[1632]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 02:07:10.663730 waagent[1632]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:720/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 02:07:10.702979 waagent[1632]: 2024-12-13T02:07:10.702907Z INFO ExtHandler ExtHandler
Dec 13 02:07:10.704064 waagent[1632]: 2024-12-13T02:07:10.703981Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 44c03141-359e-4232-86bb-5d9011ba6632 correlation 67f11fda-1130-4277-9f7c-3a97d101c3bd created: 2024-12-13T02:05:41.119911Z]
Dec 13 02:07:10.705102 waagent[1632]: 2024-12-13T02:07:10.705043Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 13 02:07:10.712694 waagent[1632]: 2024-12-13T02:07:10.712566Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms]
Dec 13 02:07:10.751679 waagent[1632]: 2024-12-13T02:07:10.751595Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Dec 13 02:07:10.776110 waagent[1632]: 2024-12-13T02:07:10.776029Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Dec 13 02:07:10.776110 waagent[1632]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 02:07:10.776110 waagent[1632]: pkts bytes target prot opt in out source destination
Dec 13 02:07:10.776110 waagent[1632]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 02:07:10.776110 waagent[1632]: pkts bytes target prot opt in out source destination
Dec 13 02:07:10.776110 waagent[1632]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 02:07:10.776110 waagent[1632]: pkts bytes target prot opt in out source destination
Dec 13 02:07:10.776110 waagent[1632]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 02:07:10.776110 waagent[1632]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 02:07:10.776110 waagent[1632]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 02:07:10.776596 waagent[1632]: 2024-12-13T02:07:10.776333Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B7AFF952-481D-477B-9577-88608602E9B1;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Dec 13 02:07:10.783601 waagent[1632]: 2024-12-13T02:07:10.783495Z INFO EnvHandler ExtHandler Current Firewall rules:
Dec 13 02:07:10.783601 waagent[1632]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 02:07:10.783601 waagent[1632]: pkts bytes target prot opt in out source destination
Dec 13 02:07:10.783601 waagent[1632]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 02:07:10.783601 waagent[1632]: pkts bytes target prot opt in out source destination
Dec 13 02:07:10.783601 waagent[1632]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 02:07:10.783601 waagent[1632]: pkts bytes target prot opt in out source destination
Dec 13 02:07:10.783601 waagent[1632]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 02:07:10.783601 waagent[1632]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 02:07:10.783601 waagent[1632]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 02:07:10.784204 waagent[1632]: 2024-12-13T02:07:10.784143Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Dec 13 02:07:20.732571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 02:07:20.732837 systemd[1]: Stopped kubelet.service.
Dec 13 02:07:20.734655 systemd[1]: Starting kubelet.service...
Dec 13 02:07:20.919706 systemd[1]: Started kubelet.service.
Dec 13 02:07:21.361994 kubelet[1694]: E1213 02:07:21.361942 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:07:21.363733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:07:21.363891 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:07:31.482744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 02:07:31.483082 systemd[1]: Stopped kubelet.service.
Dec 13 02:07:31.485773 systemd[1]: Starting kubelet.service...
Dec 13 02:07:31.827782 systemd[1]: Started kubelet.service.
Dec 13 02:07:32.166304 kubelet[1703]: E1213 02:07:32.166186 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:07:32.168060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:07:32.168215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:07:37.628973 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Dec 13 02:07:42.232748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 02:07:42.233111 systemd[1]: Stopped kubelet.service.
Dec 13 02:07:42.235081 systemd[1]: Starting kubelet.service...
Dec 13 02:07:42.397460 systemd[1]: Started kubelet.service.
Dec 13 02:07:42.870243 kubelet[1712]: E1213 02:07:42.870194 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:07:42.871912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:07:42.872090 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:07:44.832940 update_engine[1429]: I1213 02:07:44.832860 1429 update_attempter.cc:509] Updating boot flags...
Dec 13 02:07:48.331161 systemd[1]: Created slice system-sshd.slice.
Dec 13 02:07:48.333376 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:40206.service.
Dec 13 02:07:49.344719 sshd[1784]: Accepted publickey for core from 10.200.16.10 port 40206 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I
Dec 13 02:07:49.346268 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:07:49.350834 systemd-logind[1428]: New session 3 of user core.
Dec 13 02:07:49.352307 systemd[1]: Started session-3.scope.
Dec 13 02:07:49.891083 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:45930.service.
Dec 13 02:07:50.521476 sshd[1792]: Accepted publickey for core from 10.200.16.10 port 45930 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I
Dec 13 02:07:50.523323 sshd[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:07:50.529086 systemd[1]: Started session-4.scope.
Dec 13 02:07:50.529522 systemd-logind[1428]: New session 4 of user core.
Dec 13 02:07:50.967176 sshd[1792]: pam_unix(sshd:session): session closed for user core
Dec 13 02:07:50.970321 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:45930.service: Deactivated successfully.
Dec 13 02:07:50.971178 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:07:50.971821 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:07:50.972590 systemd-logind[1428]: Removed session 4.
Dec 13 02:07:51.074851 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:45946.service.
Dec 13 02:07:51.703821 sshd[1798]: Accepted publickey for core from 10.200.16.10 port 45946 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I
Dec 13 02:07:51.705290 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:07:51.709935 systemd[1]: Started session-5.scope.
Dec 13 02:07:51.710667 systemd-logind[1428]: New session 5 of user core.
Dec 13 02:07:52.142376 sshd[1798]: pam_unix(sshd:session): session closed for user core
Dec 13 02:07:52.145166 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:45946.service: Deactivated successfully.
Dec 13 02:07:52.146018 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:07:52.146675 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:07:52.147450 systemd-logind[1428]: Removed session 5.
Dec 13 02:07:52.247387 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:45948.service.
Dec 13 02:07:52.875776 sshd[1804]: Accepted publickey for core from 10.200.16.10 port 45948 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I
Dec 13 02:07:52.877255 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:07:52.878165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 02:07:52.878512 systemd[1]: Stopped kubelet.service.
Dec 13 02:07:52.880439 systemd[1]: Starting kubelet.service...
Dec 13 02:07:52.884156 systemd-logind[1428]: New session 6 of user core.
Dec 13 02:07:52.886089 systemd[1]: Started session-6.scope.
Dec 13 02:07:53.206975 systemd[1]: Started kubelet.service.
Dec 13 02:07:53.320596 sshd[1804]: pam_unix(sshd:session): session closed for user core
Dec 13 02:07:53.323830 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:45948.service: Deactivated successfully.
Dec 13 02:07:53.324683 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:07:53.325359 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:07:53.326176 systemd-logind[1428]: Removed session 6.
Dec 13 02:07:53.424105 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:45952.service.
Dec 13 02:07:53.513244 kubelet[1811]: E1213 02:07:53.513190 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:07:53.514902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:07:53.515078 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:07:54.052110 sshd[1819]: Accepted publickey for core from 10.200.16.10 port 45952 ssh2: RSA SHA256:gXnTcda5xTHu03Chb+JqgZafruXVzN/4W1lBkFcVm+I
Dec 13 02:07:54.053780 sshd[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:07:54.058747 systemd[1]: Started session-7.scope.
Dec 13 02:07:54.059466 systemd-logind[1428]: New session 7 of user core.
Dec 13 02:07:54.620713 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:07:54.621072 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 02:07:54.646367 systemd[1]: Starting coreos-metadata.service...
Dec 13 02:07:54.741495 coreos-metadata[1826]: Dec 13 02:07:54.741 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 02:07:54.744142 coreos-metadata[1826]: Dec 13 02:07:54.744 INFO Fetch successful
Dec 13 02:07:54.744516 coreos-metadata[1826]: Dec 13 02:07:54.744 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 13 02:07:54.746037 coreos-metadata[1826]: Dec 13 02:07:54.745 INFO Fetch successful
Dec 13 02:07:54.746479 coreos-metadata[1826]: Dec 13 02:07:54.746 INFO Fetching http://168.63.129.16/machine/165a043a-4187-44a3-8046-d7a40b88c76b/a9615c75%2D225f%2D456b%2D87ef%2Da226a6671844.%5Fci%2D3510.3.6%2Da%2D6288c93be1?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 13 02:07:54.747923 coreos-metadata[1826]: Dec 13 02:07:54.747 INFO Fetch successful
Dec 13 02:07:54.780916 coreos-metadata[1826]: Dec 13 02:07:54.780 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 13 02:07:54.793413 coreos-metadata[1826]: Dec 13 02:07:54.793 INFO Fetch successful
Dec 13 02:07:54.802584 systemd[1]: Finished coreos-metadata.service.
Dec 13 02:07:55.280852 systemd[1]: Stopped kubelet.service.
Dec 13 02:07:55.283952 systemd[1]: Starting kubelet.service...
Dec 13 02:07:55.315774 systemd[1]: Reloading.
Dec 13 02:07:55.419616 /usr/lib/systemd/system-generators/torcx-generator[1878]: time="2024-12-13T02:07:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:07:55.427336 /usr/lib/systemd/system-generators/torcx-generator[1878]: time="2024-12-13T02:07:55Z" level=info msg="torcx already run"
Dec 13 02:07:55.531684 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:07:55.531705 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:07:55.548545 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:07:55.656837 systemd[1]: Started kubelet.service.
Dec 13 02:07:55.662310 systemd[1]: Stopping kubelet.service...
Dec 13 02:07:55.663260 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 02:07:55.663471 systemd[1]: Stopped kubelet.service.
Dec 13 02:07:55.665364 systemd[1]: Starting kubelet.service...
Dec 13 02:07:55.953101 systemd[1]: Started kubelet.service.
Dec 13 02:07:55.998852 kubelet[1951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:07:55.998852 kubelet[1951]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:07:55.998852 kubelet[1951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:07:55.999383 kubelet[1951]: I1213 02:07:55.998943 1951 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:07:57.171802 kubelet[1951]: I1213 02:07:57.171749 1951 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:07:57.173277 kubelet[1951]: I1213 02:07:57.173236 1951 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:07:57.173735 kubelet[1951]: I1213 02:07:57.173715 1951 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:07:57.207354 kubelet[1951]: I1213 02:07:57.207315 1951 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:07:57.214230 kubelet[1951]: E1213 02:07:57.214193 1951 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:07:57.214230 kubelet[1951]: I1213 02:07:57.214224 1951 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:07:57.219263 kubelet[1951]: I1213 02:07:57.218739 1951 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:07:57.219263 kubelet[1951]: I1213 02:07:57.218859 1951 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:07:57.219263 kubelet[1951]: I1213 02:07:57.218985 1951 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:07:57.219263 kubelet[1951]: I1213 02:07:57.219025 1951 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.8.12","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Dec 13 02:07:57.219584 kubelet[1951]: I1213 02:07:57.219286 1951 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:07:57.219584 kubelet[1951]: I1213 02:07:57.219299 1951 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:07:57.219584 kubelet[1951]: I1213 02:07:57.219420 1951 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:07:57.228457 kubelet[1951]: I1213 02:07:57.228425 1951 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:07:57.228457 kubelet[1951]: I1213 02:07:57.228463 1951 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:07:57.228639 kubelet[1951]: I1213 02:07:57.228503 1951 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:07:57.228639 kubelet[1951]: I1213 02:07:57.228521 1951 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:07:57.245366 kubelet[1951]: E1213 02:07:57.245287 1951 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:57.245366 kubelet[1951]: E1213 02:07:57.245348 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:57.248069 kubelet[1951]: I1213 02:07:57.248044 1951 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:07:57.250078 kubelet[1951]: I1213 02:07:57.250055 1951 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:07:57.250185 kubelet[1951]: W1213 02:07:57.250133 1951 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 02:07:57.250856 kubelet[1951]: I1213 02:07:57.250825 1951 server.go:1269] "Started kubelet" Dec 13 02:07:57.252298 kubelet[1951]: W1213 02:07:57.252280 1951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 02:07:57.252433 kubelet[1951]: E1213 02:07:57.252416 1951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 02:07:57.252542 kubelet[1951]: I1213 02:07:57.252519 1951 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:07:57.260134 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:07:57.261075 kubelet[1951]: I1213 02:07:57.260279 1951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:07:57.262746 kubelet[1951]: I1213 02:07:57.262687 1951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:07:57.263096 kubelet[1951]: I1213 02:07:57.263075 1951 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:07:57.266108 kubelet[1951]: I1213 02:07:57.266089 1951 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:07:57.269653 kubelet[1951]: I1213 02:07:57.269631 1951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:07:57.273304 kubelet[1951]: I1213 02:07:57.273286 1951 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:07:57.273736 kubelet[1951]: E1213 02:07:57.273711 1951 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"10.200.8.12\" not found" Dec 13 02:07:57.276448 kubelet[1951]: I1213 02:07:57.276427 1951 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:07:57.276651 kubelet[1951]: I1213 02:07:57.276628 1951 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:07:57.283257 kubelet[1951]: I1213 02:07:57.283233 1951 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:07:57.283398 kubelet[1951]: I1213 02:07:57.283375 1951 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:07:57.284218 kubelet[1951]: E1213 02:07:57.284198 1951 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:07:57.284304 kubelet[1951]: E1213 02:07:57.284270 1951 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.12\" not found" node="10.200.8.12" Dec 13 02:07:57.285382 kubelet[1951]: I1213 02:07:57.285361 1951 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:07:57.300529 kubelet[1951]: I1213 02:07:57.300491 1951 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:07:57.301555 kubelet[1951]: I1213 02:07:57.301525 1951 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:07:57.301555 kubelet[1951]: I1213 02:07:57.301557 1951 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:07:57.301709 kubelet[1951]: I1213 02:07:57.301577 1951 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:07:57.301709 kubelet[1951]: E1213 02:07:57.301619 1951 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:07:57.308258 kubelet[1951]: I1213 02:07:57.308229 1951 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:07:57.308258 kubelet[1951]: I1213 02:07:57.308246 1951 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:07:57.308424 kubelet[1951]: I1213 02:07:57.308268 1951 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:07:57.321477 kubelet[1951]: I1213 02:07:57.321444 1951 policy_none.go:49] "None policy: Start" Dec 13 02:07:57.322180 kubelet[1951]: I1213 02:07:57.322162 1951 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:07:57.322285 kubelet[1951]: I1213 02:07:57.322190 1951 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:07:57.329725 systemd[1]: Created slice kubepods.slice. Dec 13 02:07:57.334044 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 02:07:57.341129 systemd[1]: Created slice kubepods-burstable.slice. 
Dec 13 02:07:57.342662 kubelet[1951]: I1213 02:07:57.342639 1951 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:07:57.342889 kubelet[1951]: I1213 02:07:57.342878 1951 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:07:57.342980 kubelet[1951]: I1213 02:07:57.342951 1951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:07:57.344160 kubelet[1951]: I1213 02:07:57.343676 1951 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:07:57.345577 kubelet[1951]: E1213 02:07:57.345557 1951 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.12\" not found" Dec 13 02:07:57.446707 kubelet[1951]: I1213 02:07:57.444419 1951 kubelet_node_status.go:72] "Attempting to register node" node="10.200.8.12" Dec 13 02:07:57.453058 kubelet[1951]: I1213 02:07:57.453027 1951 kubelet_node_status.go:75] "Successfully registered node" node="10.200.8.12" Dec 13 02:07:57.496688 sudo[1822]: pam_unix(sudo:session): session closed for user root Dec 13 02:07:57.568616 kubelet[1951]: I1213 02:07:57.568584 1951 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 02:07:57.569116 env[1440]: time="2024-12-13T02:07:57.569064547Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:07:57.569695 kubelet[1951]: I1213 02:07:57.569672 1951 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 02:07:57.632768 sshd[1819]: pam_unix(sshd:session): session closed for user core Dec 13 02:07:57.636286 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:45952.service: Deactivated successfully. Dec 13 02:07:57.637356 systemd[1]: session-7.scope: Deactivated successfully. 
Dec 13 02:07:57.638086 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:07:57.639228 systemd-logind[1428]: Removed session 7. Dec 13 02:07:58.175554 kubelet[1951]: I1213 02:07:58.175502 1951 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 02:07:58.176193 kubelet[1951]: W1213 02:07:58.175870 1951 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:07:58.176290 kubelet[1951]: W1213 02:07:58.176274 1951 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:07:58.246127 kubelet[1951]: I1213 02:07:58.246078 1951 apiserver.go:52] "Watching apiserver" Dec 13 02:07:58.246392 kubelet[1951]: E1213 02:07:58.246080 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:07:58.256118 systemd[1]: Created slice kubepods-burstable-podf1448780_5b7b_43d0_899b_d74b73b55c37.slice. Dec 13 02:07:58.265790 systemd[1]: Created slice kubepods-besteffort-pod80aba31a_6707_4baa_9d62_6e11c372198d.slice. 
Dec 13 02:07:58.277722 kubelet[1951]: I1213 02:07:58.277692 1951 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:07:58.280314 kubelet[1951]: I1213 02:07:58.280274 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-bpf-maps\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280537 kubelet[1951]: I1213 02:07:58.280324 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-config-path\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280537 kubelet[1951]: I1213 02:07:58.280348 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-host-proc-sys-net\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280537 kubelet[1951]: I1213 02:07:58.280367 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq97d\" (UniqueName: \"kubernetes.io/projected/f1448780-5b7b-43d0-899b-d74b73b55c37-kube-api-access-mq97d\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280537 kubelet[1951]: I1213 02:07:58.280391 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kwf\" (UniqueName: \"kubernetes.io/projected/80aba31a-6707-4baa-9d62-6e11c372198d-kube-api-access-j6kwf\") pod \"kube-proxy-7h25b\" (UID: 
\"80aba31a-6707-4baa-9d62-6e11c372198d\") " pod="kube-system/kube-proxy-7h25b" Dec 13 02:07:58.280537 kubelet[1951]: I1213 02:07:58.280410 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-cgroup\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280752 kubelet[1951]: I1213 02:07:58.280429 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80aba31a-6707-4baa-9d62-6e11c372198d-xtables-lock\") pod \"kube-proxy-7h25b\" (UID: \"80aba31a-6707-4baa-9d62-6e11c372198d\") " pod="kube-system/kube-proxy-7h25b" Dec 13 02:07:58.280752 kubelet[1951]: I1213 02:07:58.280448 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cni-path\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280752 kubelet[1951]: I1213 02:07:58.280468 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-lib-modules\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280752 kubelet[1951]: I1213 02:07:58.280488 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1448780-5b7b-43d0-899b-d74b73b55c37-clustermesh-secrets\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280752 kubelet[1951]: I1213 
02:07:58.280512 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1448780-5b7b-43d0-899b-d74b73b55c37-hubble-tls\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280752 kubelet[1951]: I1213 02:07:58.280531 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/80aba31a-6707-4baa-9d62-6e11c372198d-kube-proxy\") pod \"kube-proxy-7h25b\" (UID: \"80aba31a-6707-4baa-9d62-6e11c372198d\") " pod="kube-system/kube-proxy-7h25b" Dec 13 02:07:58.280976 kubelet[1951]: I1213 02:07:58.280552 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-run\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280976 kubelet[1951]: I1213 02:07:58.280571 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-hostproc\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280976 kubelet[1951]: I1213 02:07:58.280593 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-etc-cni-netd\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280976 kubelet[1951]: I1213 02:07:58.280614 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-xtables-lock\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280976 kubelet[1951]: I1213 02:07:58.280638 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-host-proc-sys-kernel\") pod \"cilium-kdj6v\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " pod="kube-system/cilium-kdj6v" Dec 13 02:07:58.280976 kubelet[1951]: I1213 02:07:58.280665 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80aba31a-6707-4baa-9d62-6e11c372198d-lib-modules\") pod \"kube-proxy-7h25b\" (UID: \"80aba31a-6707-4baa-9d62-6e11c372198d\") " pod="kube-system/kube-proxy-7h25b" Dec 13 02:07:58.382866 kubelet[1951]: I1213 02:07:58.382778 1951 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 02:07:58.564631 env[1440]: time="2024-12-13T02:07:58.564574028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdj6v,Uid:f1448780-5b7b-43d0-899b-d74b73b55c37,Namespace:kube-system,Attempt:0,}" Dec 13 02:07:58.574309 env[1440]: time="2024-12-13T02:07:58.574267504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7h25b,Uid:80aba31a-6707-4baa-9d62-6e11c372198d,Namespace:kube-system,Attempt:0,}" Dec 13 02:07:59.246804 kubelet[1951]: E1213 02:07:59.246752 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:00.247416 kubelet[1951]: E1213 02:08:00.247372 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:00.378670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093260580.mount: Deactivated successfully. 
Dec 13 02:08:00.405326 env[1440]: time="2024-12-13T02:08:00.405268792Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:00.408913 env[1440]: time="2024-12-13T02:08:00.408816887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:00.422812 env[1440]: time="2024-12-13T02:08:00.422768062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:00.425792 env[1440]: time="2024-12-13T02:08:00.425752543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:00.430425 env[1440]: time="2024-12-13T02:08:00.430381967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:00.433647 env[1440]: time="2024-12-13T02:08:00.433610954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:00.437318 env[1440]: time="2024-12-13T02:08:00.437279153Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:00.442185 env[1440]: time="2024-12-13T02:08:00.442149084Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:00.516142 env[1440]: time="2024-12-13T02:08:00.513085992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:00.516142 env[1440]: time="2024-12-13T02:08:00.513137594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:00.516142 env[1440]: time="2024-12-13T02:08:00.513155394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:00.516142 env[1440]: time="2024-12-13T02:08:00.513315699Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828 pid=1998 runtime=io.containerd.runc.v2 Dec 13 02:08:00.516446 env[1440]: time="2024-12-13T02:08:00.516306179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:00.516446 env[1440]: time="2024-12-13T02:08:00.516369981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:00.516446 env[1440]: time="2024-12-13T02:08:00.516399182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:00.516607 env[1440]: time="2024-12-13T02:08:00.516537085Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a93ba0135f53589a069489075a403f53fb52da2e43b86ff4bd5fe9e99cd1e611 pid=2010 runtime=io.containerd.runc.v2 Dec 13 02:08:00.538477 systemd[1]: Started cri-containerd-d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828.scope. Dec 13 02:08:00.551561 systemd[1]: Started cri-containerd-a93ba0135f53589a069489075a403f53fb52da2e43b86ff4bd5fe9e99cd1e611.scope. Dec 13 02:08:00.579988 env[1440]: time="2024-12-13T02:08:00.579926591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kdj6v,Uid:f1448780-5b7b-43d0-899b-d74b73b55c37,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\"" Dec 13 02:08:00.583068 env[1440]: time="2024-12-13T02:08:00.583024874Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:08:00.587455 env[1440]: time="2024-12-13T02:08:00.587416792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7h25b,Uid:80aba31a-6707-4baa-9d62-6e11c372198d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a93ba0135f53589a069489075a403f53fb52da2e43b86ff4bd5fe9e99cd1e611\"" Dec 13 02:08:01.248272 kubelet[1951]: E1213 02:08:01.248232 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:02.249344 kubelet[1951]: E1213 02:08:02.249287 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:03.250246 kubelet[1951]: E1213 02:08:03.250195 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:04.251394 
kubelet[1951]: E1213 02:08:04.251340 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:05.252485 kubelet[1951]: E1213 02:08:05.252401 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:06.253445 kubelet[1951]: E1213 02:08:06.253397 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:07.253979 kubelet[1951]: E1213 02:08:07.253928 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:08.254198 kubelet[1951]: E1213 02:08:08.254149 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:09.254328 kubelet[1951]: E1213 02:08:09.254268 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:09.892299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015579045.mount: Deactivated successfully. 
Dec 13 02:08:10.255330 kubelet[1951]: E1213 02:08:10.255264 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:11.256211 kubelet[1951]: E1213 02:08:11.256181 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:12.257151 kubelet[1951]: E1213 02:08:12.257109 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:12.555808 env[1440]: time="2024-12-13T02:08:12.555652304Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:12.562268 env[1440]: time="2024-12-13T02:08:12.562223431Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:12.566562 env[1440]: time="2024-12-13T02:08:12.566514815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:12.567144 env[1440]: time="2024-12-13T02:08:12.567106726Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:08:12.569563 env[1440]: time="2024-12-13T02:08:12.569534073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 02:08:12.570475 env[1440]: time="2024-12-13T02:08:12.570441691Z" level=info msg="CreateContainer within sandbox 
\"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:08:12.600437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831287760.mount: Deactivated successfully. Dec 13 02:08:12.618653 env[1440]: time="2024-12-13T02:08:12.618610226Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\"" Dec 13 02:08:12.619613 env[1440]: time="2024-12-13T02:08:12.619577745Z" level=info msg="StartContainer for \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\"" Dec 13 02:08:12.639463 systemd[1]: Started cri-containerd-a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0.scope. Dec 13 02:08:12.674269 env[1440]: time="2024-12-13T02:08:12.672342969Z" level=info msg="StartContainer for \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\" returns successfully" Dec 13 02:08:12.679600 systemd[1]: cri-containerd-a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0.scope: Deactivated successfully. Dec 13 02:08:13.405278 kubelet[1951]: E1213 02:08:13.258106 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:13.593142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0-rootfs.mount: Deactivated successfully. 
Dec 13 02:08:14.258630 kubelet[1951]: E1213 02:08:14.258581 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:15.259502 kubelet[1951]: E1213 02:08:15.259464 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:16.260152 kubelet[1951]: E1213 02:08:16.260089 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:16.430412 env[1440]: time="2024-12-13T02:08:16.430355725Z" level=info msg="shim disconnected" id=a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0 Dec 13 02:08:16.430412 env[1440]: time="2024-12-13T02:08:16.430410526Z" level=warning msg="cleaning up after shim disconnected" id=a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0 namespace=k8s.io Dec 13 02:08:16.430903 env[1440]: time="2024-12-13T02:08:16.430422327Z" level=info msg="cleaning up dead shim" Dec 13 02:08:16.438393 env[1440]: time="2024-12-13T02:08:16.438276164Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2121 runtime=io.containerd.runc.v2\n" Dec 13 02:08:17.163542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632509045.mount: Deactivated successfully. 
Dec 13 02:08:17.229336 kubelet[1951]: E1213 02:08:17.229287 1951 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:17.260992 kubelet[1951]: E1213 02:08:17.260939 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:17.345853 env[1440]: time="2024-12-13T02:08:17.344509556Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:08:17.370795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295899210.mount: Deactivated successfully. Dec 13 02:08:17.392560 env[1440]: time="2024-12-13T02:08:17.392508774Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\"" Dec 13 02:08:17.393213 env[1440]: time="2024-12-13T02:08:17.393178785Z" level=info msg="StartContainer for \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\"" Dec 13 02:08:17.421645 systemd[1]: Started cri-containerd-994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a.scope. Dec 13 02:08:17.464185 env[1440]: time="2024-12-13T02:08:17.464130494Z" level=info msg="StartContainer for \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\" returns successfully" Dec 13 02:08:17.479290 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:08:17.479600 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:08:17.481056 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:08:17.483044 systemd[1]: Starting systemd-sysctl.service... 
Dec 13 02:08:17.487401 systemd[1]: cri-containerd-994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a.scope: Deactivated successfully. Dec 13 02:08:17.498498 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:08:17.775794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628277583.mount: Deactivated successfully. Dec 13 02:08:17.969844 env[1440]: time="2024-12-13T02:08:17.969722810Z" level=info msg="shim disconnected" id=994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a Dec 13 02:08:17.969844 env[1440]: time="2024-12-13T02:08:17.969788111Z" level=warning msg="cleaning up after shim disconnected" id=994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a namespace=k8s.io Dec 13 02:08:17.969844 env[1440]: time="2024-12-13T02:08:17.969799711Z" level=info msg="cleaning up dead shim" Dec 13 02:08:17.980030 env[1440]: time="2024-12-13T02:08:17.979968284Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2186 runtime=io.containerd.runc.v2\n" Dec 13 02:08:18.057509 env[1440]: time="2024-12-13T02:08:18.057143875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:18.063120 env[1440]: time="2024-12-13T02:08:18.063075374Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:18.067157 env[1440]: time="2024-12-13T02:08:18.067122041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:18.070948 env[1440]: time="2024-12-13T02:08:18.070912804Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:18.071311 env[1440]: time="2024-12-13T02:08:18.071261010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 02:08:18.073581 env[1440]: time="2024-12-13T02:08:18.073552048Z" level=info msg="CreateContainer within sandbox \"a93ba0135f53589a069489075a403f53fb52da2e43b86ff4bd5fe9e99cd1e611\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:08:18.106168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220354092.mount: Deactivated successfully. Dec 13 02:08:18.114901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034459155.mount: Deactivated successfully. Dec 13 02:08:18.130572 env[1440]: time="2024-12-13T02:08:18.130527594Z" level=info msg="CreateContainer within sandbox \"a93ba0135f53589a069489075a403f53fb52da2e43b86ff4bd5fe9e99cd1e611\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23ee99680c529a84ec9ecb083a647700f6c631071b9db021bac37f0d5654afb9\"" Dec 13 02:08:18.131141 env[1440]: time="2024-12-13T02:08:18.131109204Z" level=info msg="StartContainer for \"23ee99680c529a84ec9ecb083a647700f6c631071b9db021bac37f0d5654afb9\"" Dec 13 02:08:18.147827 systemd[1]: Started cri-containerd-23ee99680c529a84ec9ecb083a647700f6c631071b9db021bac37f0d5654afb9.scope. 
Dec 13 02:08:18.192983 env[1440]: time="2024-12-13T02:08:18.192929130Z" level=info msg="StartContainer for \"23ee99680c529a84ec9ecb083a647700f6c631071b9db021bac37f0d5654afb9\" returns successfully" Dec 13 02:08:18.262061 kubelet[1951]: E1213 02:08:18.261988 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:18.352557 env[1440]: time="2024-12-13T02:08:18.352438980Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:08:18.379375 kubelet[1951]: I1213 02:08:18.379027 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7h25b" podStartSLOduration=3.895176214 podStartE2EDuration="21.378986721s" podCreationTimestamp="2024-12-13 02:07:57 +0000 UTC" firstStartedPulling="2024-12-13 02:08:00.588374218 +0000 UTC m=+4.630215766" lastFinishedPulling="2024-12-13 02:08:18.072184625 +0000 UTC m=+22.114026273" observedRunningTime="2024-12-13 02:08:18.362200242 +0000 UTC m=+22.404041790" watchObservedRunningTime="2024-12-13 02:08:18.378986721 +0000 UTC m=+22.420828269" Dec 13 02:08:18.398442 env[1440]: time="2024-12-13T02:08:18.398386543Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\"" Dec 13 02:08:18.399096 env[1440]: time="2024-12-13T02:08:18.399062854Z" level=info msg="StartContainer for \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\"" Dec 13 02:08:18.416268 systemd[1]: Started cri-containerd-5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb.scope. 
Dec 13 02:08:18.449968 systemd[1]: cri-containerd-5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb.scope: Deactivated successfully. Dec 13 02:08:18.451736 env[1440]: time="2024-12-13T02:08:18.451670428Z" level=info msg="StartContainer for \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\" returns successfully" Dec 13 02:08:18.717185 env[1440]: time="2024-12-13T02:08:18.717067436Z" level=info msg="shim disconnected" id=5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb Dec 13 02:08:18.717692 env[1440]: time="2024-12-13T02:08:18.717658046Z" level=warning msg="cleaning up after shim disconnected" id=5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb namespace=k8s.io Dec 13 02:08:18.717692 env[1440]: time="2024-12-13T02:08:18.717685146Z" level=info msg="cleaning up dead shim" Dec 13 02:08:18.732806 env[1440]: time="2024-12-13T02:08:18.732767596Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:08:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2349 runtime=io.containerd.runc.v2\n" Dec 13 02:08:19.262173 kubelet[1951]: E1213 02:08:19.262117 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:19.355214 env[1440]: time="2024-12-13T02:08:19.355165587Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:08:19.387442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826864555.mount: Deactivated successfully. 
Dec 13 02:08:19.409135 env[1440]: time="2024-12-13T02:08:19.409081060Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\"" Dec 13 02:08:19.409700 env[1440]: time="2024-12-13T02:08:19.409669569Z" level=info msg="StartContainer for \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\"" Dec 13 02:08:19.425481 systemd[1]: Started cri-containerd-d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1.scope. Dec 13 02:08:19.453173 systemd[1]: cri-containerd-d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1.scope: Deactivated successfully. Dec 13 02:08:19.454895 env[1440]: time="2024-12-13T02:08:19.454829600Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1448780_5b7b_43d0_899b_d74b73b55c37.slice/cri-containerd-d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1.scope/memory.events\": no such file or directory" Dec 13 02:08:19.460160 env[1440]: time="2024-12-13T02:08:19.460120886Z" level=info msg="StartContainer for \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\" returns successfully" Dec 13 02:08:19.488811 env[1440]: time="2024-12-13T02:08:19.488757550Z" level=info msg="shim disconnected" id=d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1 Dec 13 02:08:19.488811 env[1440]: time="2024-12-13T02:08:19.488808851Z" level=warning msg="cleaning up after shim disconnected" id=d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1 namespace=k8s.io Dec 13 02:08:19.489149 env[1440]: time="2024-12-13T02:08:19.488822851Z" level=info msg="cleaning up dead shim" Dec 13 02:08:19.497268 env[1440]: time="2024-12-13T02:08:19.497225187Z" level=warning 
msg="cleanup warnings time=\"2024-12-13T02:08:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2470 runtime=io.containerd.runc.v2\n" Dec 13 02:08:20.262340 kubelet[1951]: E1213 02:08:20.262299 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:20.358939 env[1440]: time="2024-12-13T02:08:20.358804293Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:08:20.396343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364223509.mount: Deactivated successfully. Dec 13 02:08:20.410438 env[1440]: time="2024-12-13T02:08:20.410383607Z" level=info msg="CreateContainer within sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\"" Dec 13 02:08:20.411080 env[1440]: time="2024-12-13T02:08:20.410986917Z" level=info msg="StartContainer for \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\"" Dec 13 02:08:20.428830 systemd[1]: Started cri-containerd-1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148.scope. 
Dec 13 02:08:20.466426 env[1440]: time="2024-12-13T02:08:20.466140188Z" level=info msg="StartContainer for \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\" returns successfully" Dec 13 02:08:20.568283 kubelet[1951]: I1213 02:08:20.568179 1951 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 02:08:21.182031 kernel: Initializing XFRM netlink socket Dec 13 02:08:21.262612 kubelet[1951]: E1213 02:08:21.262527 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:21.380406 kubelet[1951]: I1213 02:08:21.380351 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kdj6v" podStartSLOduration=12.393820166 podStartE2EDuration="24.380330672s" podCreationTimestamp="2024-12-13 02:07:57 +0000 UTC" firstStartedPulling="2024-12-13 02:08:00.58214005 +0000 UTC m=+4.623981598" lastFinishedPulling="2024-12-13 02:08:12.568650556 +0000 UTC m=+16.610492104" observedRunningTime="2024-12-13 02:08:21.380316571 +0000 UTC m=+25.422158119" watchObservedRunningTime="2024-12-13 02:08:21.380330672 +0000 UTC m=+25.422172220" Dec 13 02:08:22.262953 kubelet[1951]: E1213 02:08:22.262880 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:22.878078 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:08:22.878481 systemd-networkd[1585]: cilium_host: Link UP Dec 13 02:08:22.878660 systemd-networkd[1585]: cilium_net: Link UP Dec 13 02:08:22.878665 systemd-networkd[1585]: cilium_net: Gained carrier Dec 13 02:08:22.878857 systemd-networkd[1585]: cilium_host: Gained carrier Dec 13 02:08:22.881227 systemd-networkd[1585]: cilium_net: Gained IPv6LL Dec 13 02:08:22.881864 systemd-networkd[1585]: cilium_host: Gained IPv6LL Dec 13 02:08:23.034401 systemd-networkd[1585]: cilium_vxlan: Link UP Dec 13 02:08:23.034411 
systemd-networkd[1585]: cilium_vxlan: Gained carrier Dec 13 02:08:23.263598 kubelet[1951]: E1213 02:08:23.263561 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:23.315025 kernel: NET: Registered PF_ALG protocol family Dec 13 02:08:24.167214 systemd-networkd[1585]: lxc_health: Link UP Dec 13 02:08:24.180132 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:08:24.181144 systemd-networkd[1585]: lxc_health: Gained carrier Dec 13 02:08:24.264210 kubelet[1951]: E1213 02:08:24.264161 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:24.741253 systemd-networkd[1585]: cilium_vxlan: Gained IPv6LL Dec 13 02:08:25.066471 kubelet[1951]: I1213 02:08:25.066157 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jms5s\" (UniqueName: \"kubernetes.io/projected/af0dbb63-6315-4f3d-870b-182b19cfb01e-kube-api-access-jms5s\") pod \"nginx-deployment-8587fbcb89-hxfp7\" (UID: \"af0dbb63-6315-4f3d-870b-182b19cfb01e\") " pod="default/nginx-deployment-8587fbcb89-hxfp7" Dec 13 02:08:25.066434 systemd[1]: Created slice kubepods-besteffort-podaf0dbb63_6315_4f3d_870b_182b19cfb01e.slice. 
Dec 13 02:08:25.265242 kubelet[1951]: E1213 02:08:25.265183 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:25.370933 env[1440]: time="2024-12-13T02:08:25.370755141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hxfp7,Uid:af0dbb63-6315-4f3d-870b-182b19cfb01e,Namespace:default,Attempt:0,}" Dec 13 02:08:25.469899 systemd-networkd[1585]: lxc7d8f9e6ca341: Link UP Dec 13 02:08:25.483362 kernel: eth0: renamed from tmp1842a Dec 13 02:08:25.493032 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:08:25.499230 systemd-networkd[1585]: lxc7d8f9e6ca341: Gained carrier Dec 13 02:08:25.500054 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7d8f9e6ca341: link becomes ready Dec 13 02:08:25.992845 kubelet[1951]: I1213 02:08:25.992803 1951 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:08:26.021145 systemd-networkd[1585]: lxc_health: Gained IPv6LL Dec 13 02:08:26.265838 kubelet[1951]: E1213 02:08:26.265709 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:27.266829 kubelet[1951]: E1213 02:08:27.266783 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:27.429256 systemd-networkd[1585]: lxc7d8f9e6ca341: Gained IPv6LL Dec 13 02:08:28.268325 kubelet[1951]: E1213 02:08:28.268273 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:28.492297 env[1440]: time="2024-12-13T02:08:28.492213108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:28.492297 env[1440]: time="2024-12-13T02:08:28.492261008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:28.492297 env[1440]: time="2024-12-13T02:08:28.492275109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:28.492940 env[1440]: time="2024-12-13T02:08:28.492885516Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1842a603e2d1dcd91c72f7aae40c8f3b8f5cdff202ea1d961ef301bf15bc054d pid=2991 runtime=io.containerd.runc.v2 Dec 13 02:08:28.516748 systemd[1]: Started cri-containerd-1842a603e2d1dcd91c72f7aae40c8f3b8f5cdff202ea1d961ef301bf15bc054d.scope. Dec 13 02:08:28.554074 env[1440]: time="2024-12-13T02:08:28.554039510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hxfp7,Uid:af0dbb63-6315-4f3d-870b-182b19cfb01e,Namespace:default,Attempt:0,} returns sandbox id \"1842a603e2d1dcd91c72f7aae40c8f3b8f5cdff202ea1d961ef301bf15bc054d\"" Dec 13 02:08:28.555787 env[1440]: time="2024-12-13T02:08:28.555730232Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:08:29.269494 kubelet[1951]: E1213 02:08:29.269402 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:30.270132 kubelet[1951]: E1213 02:08:30.270078 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:31.270270 kubelet[1951]: E1213 02:08:31.270202 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:31.463837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589332914.mount: Deactivated successfully. 
Dec 13 02:08:32.271088 kubelet[1951]: E1213 02:08:32.271038 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:32.953510 env[1440]: time="2024-12-13T02:08:32.953450605Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:32.963192 env[1440]: time="2024-12-13T02:08:32.963139120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:32.970120 env[1440]: time="2024-12-13T02:08:32.970072202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:32.976646 env[1440]: time="2024-12-13T02:08:32.976606079Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:32.977239 env[1440]: time="2024-12-13T02:08:32.977203086Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:08:32.980354 env[1440]: time="2024-12-13T02:08:32.980319123Z" level=info msg="CreateContainer within sandbox \"1842a603e2d1dcd91c72f7aae40c8f3b8f5cdff202ea1d961ef301bf15bc054d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 02:08:33.015178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493730478.mount: Deactivated successfully. 
Dec 13 02:08:33.026233 env[1440]: time="2024-12-13T02:08:33.026183758Z" level=info msg="CreateContainer within sandbox \"1842a603e2d1dcd91c72f7aae40c8f3b8f5cdff202ea1d961ef301bf15bc054d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1f09a2913c11c10120b59208d820b6042251d346bd038beb674e7d2f812217c9\"" Dec 13 02:08:33.027078 env[1440]: time="2024-12-13T02:08:33.027027568Z" level=info msg="StartContainer for \"1f09a2913c11c10120b59208d820b6042251d346bd038beb674e7d2f812217c9\"" Dec 13 02:08:33.050934 systemd[1]: Started cri-containerd-1f09a2913c11c10120b59208d820b6042251d346bd038beb674e7d2f812217c9.scope. Dec 13 02:08:33.091166 env[1440]: time="2024-12-13T02:08:33.091124008Z" level=info msg="StartContainer for \"1f09a2913c11c10120b59208d820b6042251d346bd038beb674e7d2f812217c9\" returns successfully" Dec 13 02:08:33.271741 kubelet[1951]: E1213 02:08:33.271675 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:34.007383 systemd[1]: run-containerd-runc-k8s.io-1f09a2913c11c10120b59208d820b6042251d346bd038beb674e7d2f812217c9-runc.L8Ghhi.mount: Deactivated successfully. 
Dec 13 02:08:34.272214 kubelet[1951]: E1213 02:08:34.272075 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:35.272590 kubelet[1951]: E1213 02:08:35.272528 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:36.273639 kubelet[1951]: E1213 02:08:36.273584 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:37.228791 kubelet[1951]: E1213 02:08:37.228727 1951 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:37.274742 kubelet[1951]: E1213 02:08:37.274682 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:38.274972 kubelet[1951]: E1213 02:08:38.274875 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:39.275599 kubelet[1951]: E1213 02:08:39.275546 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:40.276484 kubelet[1951]: E1213 02:08:40.276429 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:41.277497 kubelet[1951]: E1213 02:08:41.277444 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:42.277988 kubelet[1951]: E1213 02:08:42.277929 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:42.814840 kubelet[1951]: I1213 02:08:42.814779 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-hxfp7" 
podStartSLOduration=13.391687467 podStartE2EDuration="17.81475854s" podCreationTimestamp="2024-12-13 02:08:25 +0000 UTC" firstStartedPulling="2024-12-13 02:08:28.555325927 +0000 UTC m=+32.597167475" lastFinishedPulling="2024-12-13 02:08:32.978397 +0000 UTC m=+37.020238548" observedRunningTime="2024-12-13 02:08:33.404610428 +0000 UTC m=+37.446451976" watchObservedRunningTime="2024-12-13 02:08:42.81475854 +0000 UTC m=+46.856600188" Dec 13 02:08:42.820093 systemd[1]: Created slice kubepods-besteffort-podf886939d_f255_4c23_a0ed_b4533fa73a4b.slice. Dec 13 02:08:42.881229 kubelet[1951]: I1213 02:08:42.881170 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6g96\" (UniqueName: \"kubernetes.io/projected/f886939d-f255-4c23-a0ed-b4533fa73a4b-kube-api-access-s6g96\") pod \"nfs-server-provisioner-0\" (UID: \"f886939d-f255-4c23-a0ed-b4533fa73a4b\") " pod="default/nfs-server-provisioner-0" Dec 13 02:08:42.881516 kubelet[1951]: I1213 02:08:42.881485 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f886939d-f255-4c23-a0ed-b4533fa73a4b-data\") pod \"nfs-server-provisioner-0\" (UID: \"f886939d-f255-4c23-a0ed-b4533fa73a4b\") " pod="default/nfs-server-provisioner-0" Dec 13 02:08:43.124077 env[1440]: time="2024-12-13T02:08:43.123605046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f886939d-f255-4c23-a0ed-b4533fa73a4b,Namespace:default,Attempt:0,}" Dec 13 02:08:43.209040 systemd-networkd[1585]: lxcc0b30486c9bb: Link UP Dec 13 02:08:43.214027 kernel: eth0: renamed from tmp4a67e Dec 13 02:08:43.225645 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:08:43.225769 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc0b30486c9bb: link becomes ready Dec 13 02:08:43.226320 systemd-networkd[1585]: lxcc0b30486c9bb: Gained carrier Dec 13 02:08:43.278886 kubelet[1951]: 
E1213 02:08:43.278807 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:43.394164 env[1440]: time="2024-12-13T02:08:43.393989559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:43.394333 env[1440]: time="2024-12-13T02:08:43.394049359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:43.394333 env[1440]: time="2024-12-13T02:08:43.394062859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:43.394794 env[1440]: time="2024-12-13T02:08:43.394745566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a67e5623ea7dacedcdb83f72a9fca5bdcf32081630187f93560bc9f3973cc69 pid=3114 runtime=io.containerd.runc.v2 Dec 13 02:08:43.411664 systemd[1]: Started cri-containerd-4a67e5623ea7dacedcdb83f72a9fca5bdcf32081630187f93560bc9f3973cc69.scope. Dec 13 02:08:43.458446 env[1440]: time="2024-12-13T02:08:43.458385457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f886939d-f255-4c23-a0ed-b4533fa73a4b,Namespace:default,Attempt:0,} returns sandbox id \"4a67e5623ea7dacedcdb83f72a9fca5bdcf32081630187f93560bc9f3973cc69\"" Dec 13 02:08:43.460646 env[1440]: time="2024-12-13T02:08:43.460594878Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 02:08:43.993713 systemd[1]: run-containerd-runc-k8s.io-4a67e5623ea7dacedcdb83f72a9fca5bdcf32081630187f93560bc9f3973cc69-runc.StUcNf.mount: Deactivated successfully. 
Dec 13 02:08:44.279844 kubelet[1951]: E1213 02:08:44.279458 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:45.094123 systemd-networkd[1585]: lxcc0b30486c9bb: Gained IPv6LL Dec 13 02:08:45.280013 kubelet[1951]: E1213 02:08:45.279953 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:46.235302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544192615.mount: Deactivated successfully. Dec 13 02:08:46.280720 kubelet[1951]: E1213 02:08:46.280675 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:47.280887 kubelet[1951]: E1213 02:08:47.280839 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:48.281162 kubelet[1951]: E1213 02:08:48.281092 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:48.362834 env[1440]: time="2024-12-13T02:08:48.362784377Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:48.370792 env[1440]: time="2024-12-13T02:08:48.370746444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:48.374689 env[1440]: time="2024-12-13T02:08:48.374649877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:48.379582 env[1440]: time="2024-12-13T02:08:48.379545818Z" level=info 
msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:48.380157 env[1440]: time="2024-12-13T02:08:48.380124023Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 02:08:48.382853 env[1440]: time="2024-12-13T02:08:48.382822145Z" level=info msg="CreateContainer within sandbox \"4a67e5623ea7dacedcdb83f72a9fca5bdcf32081630187f93560bc9f3973cc69\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 02:08:48.411611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209607094.mount: Deactivated successfully. Dec 13 02:08:48.427792 env[1440]: time="2024-12-13T02:08:48.427743323Z" level=info msg="CreateContainer within sandbox \"4a67e5623ea7dacedcdb83f72a9fca5bdcf32081630187f93560bc9f3973cc69\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"306cde8bc95a34bccb7659af4433dc4b307f663c3fc7d49e8070439b8c07088e\"" Dec 13 02:08:48.428395 env[1440]: time="2024-12-13T02:08:48.428348728Z" level=info msg="StartContainer for \"306cde8bc95a34bccb7659af4433dc4b307f663c3fc7d49e8070439b8c07088e\"" Dec 13 02:08:48.450378 systemd[1]: Started cri-containerd-306cde8bc95a34bccb7659af4433dc4b307f663c3fc7d49e8070439b8c07088e.scope. 
Dec 13 02:08:48.480882 env[1440]: time="2024-12-13T02:08:48.480842569Z" level=info msg="StartContainer for \"306cde8bc95a34bccb7659af4433dc4b307f663c3fc7d49e8070439b8c07088e\" returns successfully" Dec 13 02:08:49.281711 kubelet[1951]: E1213 02:08:49.281652 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:49.443445 kubelet[1951]: I1213 02:08:49.443364 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.521906726 podStartE2EDuration="7.443346888s" podCreationTimestamp="2024-12-13 02:08:42 +0000 UTC" firstStartedPulling="2024-12-13 02:08:43.459991872 +0000 UTC m=+47.501833520" lastFinishedPulling="2024-12-13 02:08:48.381432134 +0000 UTC m=+52.423273682" observedRunningTime="2024-12-13 02:08:49.442572582 +0000 UTC m=+53.484414130" watchObservedRunningTime="2024-12-13 02:08:49.443346888 +0000 UTC m=+53.485188536" Dec 13 02:08:50.281924 kubelet[1951]: E1213 02:08:50.281856 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:51.282158 kubelet[1951]: E1213 02:08:51.282096 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:52.282801 kubelet[1951]: E1213 02:08:52.282746 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:53.283178 kubelet[1951]: E1213 02:08:53.283117 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:54.283691 kubelet[1951]: E1213 02:08:54.283631 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:55.284416 kubelet[1951]: E1213 02:08:55.284356 1951 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:56.284569 kubelet[1951]: E1213 02:08:56.284501 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:57.228843 kubelet[1951]: E1213 02:08:57.228779 1951 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:57.284697 kubelet[1951]: E1213 02:08:57.284664 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:58.286215 kubelet[1951]: E1213 02:08:58.286165 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:58.522749 systemd[1]: Created slice kubepods-besteffort-pod91fdcf86_1764_4080_a1f1_43ae32acba33.slice. Dec 13 02:08:58.669801 kubelet[1951]: I1213 02:08:58.669341 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6806ebb6-72f0-4d31-9177-99e385a2f154\" (UniqueName: \"kubernetes.io/nfs/91fdcf86-1764-4080-a1f1-43ae32acba33-pvc-6806ebb6-72f0-4d31-9177-99e385a2f154\") pod \"test-pod-1\" (UID: \"91fdcf86-1764-4080-a1f1-43ae32acba33\") " pod="default/test-pod-1" Dec 13 02:08:58.669801 kubelet[1951]: I1213 02:08:58.669417 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pgrx\" (UniqueName: \"kubernetes.io/projected/91fdcf86-1764-4080-a1f1-43ae32acba33-kube-api-access-6pgrx\") pod \"test-pod-1\" (UID: \"91fdcf86-1764-4080-a1f1-43ae32acba33\") " pod="default/test-pod-1" Dec 13 02:08:58.939039 kernel: FS-Cache: Loaded Dec 13 02:08:59.044668 kernel: RPC: Registered named UNIX socket transport module. Dec 13 02:08:59.045963 kernel: RPC: Registered udp transport module. Dec 13 02:08:59.046071 kernel: RPC: Registered tcp transport module. 
Dec 13 02:08:59.050985 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 02:08:59.214032 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 02:08:59.286832 kubelet[1951]: E1213 02:08:59.286776 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:08:59.440498 kernel: NFS: Registering the id_resolver key type Dec 13 02:08:59.440653 kernel: Key type id_resolver registered Dec 13 02:08:59.440683 kernel: Key type id_legacy registered Dec 13 02:08:59.607993 nfsidmap[3233]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-6288c93be1' Dec 13 02:08:59.629317 nfsidmap[3234]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-6288c93be1' Dec 13 02:08:59.726889 env[1440]: time="2024-12-13T02:08:59.726821716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:91fdcf86-1764-4080-a1f1-43ae32acba33,Namespace:default,Attempt:0,}" Dec 13 02:08:59.805624 systemd-networkd[1585]: lxcc352d15229dc: Link UP Dec 13 02:08:59.812128 kernel: eth0: renamed from tmpb8183 Dec 13 02:08:59.827920 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:08:59.828079 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc352d15229dc: link becomes ready Dec 13 02:08:59.829268 systemd-networkd[1585]: lxcc352d15229dc: Gained carrier Dec 13 02:09:00.024822 env[1440]: time="2024-12-13T02:09:00.024724165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:00.024822 env[1440]: time="2024-12-13T02:09:00.024774065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
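The `nfsidmap[3233]`/`nfsidmap[3234]` lines record a common NFSv4 identity-mapping situation: NFSv4 sends file owners on the wire as `user@domain`, and because the server-side domain (`nfs-server-provisioner.default.svc.cluster.local`) differs from this client's idmapd domain (`3.6-a-6288c93be1`), the name cannot be translated to a local account and falls back to the anonymous user. A hedged sketch of that decision (function and fallback name are illustrative; the real mapping is done by libnfsidmap against `/etc/idmapd.conf`):

```python
def map_nfsv4_owner(owner, local_domain, fallback='nobody'):
    # NFSv4 wire identities look like "user@domain". Only names whose
    # domain matches the client's idmapd domain can be translated to a
    # local account; anything else maps to the anonymous fallback user,
    # which is what the "does not map into domain" log lines signal.
    name, sep, domain = owner.partition('@')
    if not sep or domain != local_domain:
        return fallback
    return name

mapped = map_nfsv4_owner(
    'root@nfs-server-provisioner.default.svc.cluster.local',
    '3.6-a-6288c93be1')
```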
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:00.024822 env[1440]: time="2024-12-13T02:09:00.024788065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:00.025437 env[1440]: time="2024-12-13T02:09:00.025382569Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b818366100a91433e356129caefaf234dd4a590e5d841b31ae98811d58cf1d98 pid=3260 runtime=io.containerd.runc.v2 Dec 13 02:09:00.047699 systemd[1]: Started cri-containerd-b818366100a91433e356129caefaf234dd4a590e5d841b31ae98811d58cf1d98.scope. Dec 13 02:09:00.087369 env[1440]: time="2024-12-13T02:09:00.086701084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:91fdcf86-1764-4080-a1f1-43ae32acba33,Namespace:default,Attempt:0,} returns sandbox id \"b818366100a91433e356129caefaf234dd4a590e5d841b31ae98811d58cf1d98\"" Dec 13 02:09:00.088379 env[1440]: time="2024-12-13T02:09:00.088342995Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:09:00.287787 kubelet[1951]: E1213 02:09:00.287648 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:00.421416 env[1440]: time="2024-12-13T02:09:00.421361851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:00.429243 env[1440]: time="2024-12-13T02:09:00.429175504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:00.433742 env[1440]: time="2024-12-13T02:09:00.433684734Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:00.440857 env[1440]: time="2024-12-13T02:09:00.440802583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:00.441496 env[1440]: time="2024-12-13T02:09:00.441459587Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:09:00.444519 env[1440]: time="2024-12-13T02:09:00.444488007Z" level=info msg="CreateContainer within sandbox \"b818366100a91433e356129caefaf234dd4a590e5d841b31ae98811d58cf1d98\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 02:09:00.515597 env[1440]: time="2024-12-13T02:09:00.515536689Z" level=info msg="CreateContainer within sandbox \"b818366100a91433e356129caefaf234dd4a590e5d841b31ae98811d58cf1d98\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"38c23e52b5fe72ee578c9c34f91a74f068805282ca7a573a4fc7e4afeb493f89\"" Dec 13 02:09:00.516473 env[1440]: time="2024-12-13T02:09:00.516435495Z" level=info msg="StartContainer for \"38c23e52b5fe72ee578c9c34f91a74f068805282ca7a573a4fc7e4afeb493f89\"" Dec 13 02:09:00.536772 systemd[1]: Started cri-containerd-38c23e52b5fe72ee578c9c34f91a74f068805282ca7a573a4fc7e4afeb493f89.scope. 
Dec 13 02:09:00.584364 env[1440]: time="2024-12-13T02:09:00.584307354Z" level=info msg="StartContainer for \"38c23e52b5fe72ee578c9c34f91a74f068805282ca7a573a4fc7e4afeb493f89\" returns successfully" Dec 13 02:09:01.221232 systemd-networkd[1585]: lxcc352d15229dc: Gained IPv6LL Dec 13 02:09:01.288329 kubelet[1951]: E1213 02:09:01.288273 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:01.467399 kubelet[1951]: I1213 02:09:01.467341 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.112467679 podStartE2EDuration="17.467320483s" podCreationTimestamp="2024-12-13 02:08:44 +0000 UTC" firstStartedPulling="2024-12-13 02:09:00.088032893 +0000 UTC m=+64.129874541" lastFinishedPulling="2024-12-13 02:09:00.442885797 +0000 UTC m=+64.484727345" observedRunningTime="2024-12-13 02:09:01.467224983 +0000 UTC m=+65.509066631" watchObservedRunningTime="2024-12-13 02:09:01.467320483 +0000 UTC m=+65.509162031" Dec 13 02:09:02.288902 kubelet[1951]: E1213 02:09:02.288838 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:03.289335 kubelet[1951]: E1213 02:09:03.289271 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:04.289766 kubelet[1951]: E1213 02:09:04.289698 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:05.290539 kubelet[1951]: E1213 02:09:05.290475 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:06.290765 kubelet[1951]: E1213 02:09:06.290696 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:06.554677 env[1440]: 
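The `pod_startup_latency_tracker` entries carry enough timestamps to reproduce the reported durations for `test-pod-1`: `podStartE2EDuration` is `watchObservedRunningTime` minus `podCreationTimestamp`, and the SLO duration additionally excludes image-pull time (`lastFinishedPulling` minus `firstStartedPulling`). A small check against the numbers above, with timestamps truncated to microseconds since Python's `datetime` does not carry nanoseconds:

```python
from datetime import datetime, timezone

def parse(ts):
    # Log timestamps look like "2024-12-13 02:09:01.467320 +0000 UTC";
    # we parse date, time and microseconds and pin the zone to UTC.
    return datetime.strptime(ts, '%Y-%m-%d %H:%M:%S.%f').replace(tzinfo=timezone.utc)

created    = datetime(2024, 12, 13, 2, 8, 44, tzinfo=timezone.utc)
observed   = parse('2024-12-13 02:09:01.467320')
pull_start = parse('2024-12-13 02:09:00.088032')
pull_end   = parse('2024-12-13 02:09:00.442885')

e2e = (observed - created).total_seconds()           # ~17.467s in the log
slo = e2e - (pull_end - pull_start).total_seconds()  # ~17.112s in the log
```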
time="2024-12-13T02:09:06.554547668Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:09:06.560541 env[1440]: time="2024-12-13T02:09:06.560505105Z" level=info msg="StopContainer for \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\" with timeout 2 (s)" Dec 13 02:09:06.560811 env[1440]: time="2024-12-13T02:09:06.560775006Z" level=info msg="Stop container \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\" with signal terminated" Dec 13 02:09:06.567933 systemd-networkd[1585]: lxc_health: Link DOWN Dec 13 02:09:06.567941 systemd-networkd[1585]: lxc_health: Lost carrier Dec 13 02:09:06.588486 systemd[1]: cri-containerd-1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148.scope: Deactivated successfully. Dec 13 02:09:06.588799 systemd[1]: cri-containerd-1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148.scope: Consumed 6.270s CPU time. Dec 13 02:09:06.608699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148-rootfs.mount: Deactivated successfully. 
Dec 13 02:09:07.291809 kubelet[1951]: E1213 02:09:07.291742 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:07.373079 kubelet[1951]: E1213 02:09:07.373037 1951 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:09:08.292530 kubelet[1951]: E1213 02:09:08.292477 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:08.500930 kubelet[1951]: I1213 02:09:08.500551 1951 setters.go:600] "Node became not ready" node="10.200.8.12" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:09:08Z","lastTransitionTime":"2024-12-13T02:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:09:08.567155 env[1440]: time="2024-12-13T02:09:08.566991477Z" level=info msg="Kill container \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\"" Dec 13 02:09:09.202237 env[1440]: time="2024-12-13T02:09:09.202158260Z" level=info msg="shim disconnected" id=1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148 Dec 13 02:09:09.202237 env[1440]: time="2024-12-13T02:09:09.202224960Z" level=warning msg="cleaning up after shim disconnected" id=1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148 namespace=k8s.io Dec 13 02:09:09.202237 env[1440]: time="2024-12-13T02:09:09.202238360Z" level=info msg="cleaning up dead shim" Dec 13 02:09:09.215884 env[1440]: time="2024-12-13T02:09:09.215826541Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3390 runtime=io.containerd.runc.v2\n" Dec 13 02:09:09.224386 env[1440]: 
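The chain from the fs change event at 02:09:06 to the NotReady condition at 02:09:08 is mechanical: removing `/etc/cni/net.d/05-cilium.conf` leaves the CRI plugin with no network config, it reports "cni plugin not initialized", and the kubelet then flips the node's Ready condition to False. A sketch of the reload step, assuming a simple extension scan of the conf directory (not the actual containerd code):

```python
import json
import os

def reload_cni_config(conf_dir='/etc/cni/net.d'):
    # On every fs change event under conf_dir the runtime re-scans for
    # network configs; when the last one is removed the reload fails and
    # the node stays NetworkReady=false until a config file reappears.
    try:
        names = sorted(n for n in os.listdir(conf_dir)
                       if n.endswith(('.conf', '.conflist')))
    except FileNotFoundError:
        names = []
    if not names:
        raise RuntimeError(f'no network config found in {conf_dir}: '
                           'cni plugin not initialized')
    with open(os.path.join(conf_dir, names[0])) as fh:
        return json.load(fh)
```

The node recovers on its own once a new CNI config is written, which is what happens during a normal Cilium upgrade or restart.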
time="2024-12-13T02:09:09.224334191Z" level=info msg="StopContainer for \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\" returns successfully" Dec 13 02:09:09.225171 env[1440]: time="2024-12-13T02:09:09.225032095Z" level=info msg="StopPodSandbox for \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\"" Dec 13 02:09:09.225348 env[1440]: time="2024-12-13T02:09:09.225225596Z" level=info msg="Container to stop \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:09.225348 env[1440]: time="2024-12-13T02:09:09.225260596Z" level=info msg="Container to stop \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:09.225348 env[1440]: time="2024-12-13T02:09:09.225277996Z" level=info msg="Container to stop \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:09.225348 env[1440]: time="2024-12-13T02:09:09.225294096Z" level=info msg="Container to stop \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:09.225348 env[1440]: time="2024-12-13T02:09:09.225310596Z" level=info msg="Container to stop \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:09.229161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828-shm.mount: Deactivated successfully. Dec 13 02:09:09.236386 systemd[1]: cri-containerd-d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828.scope: Deactivated successfully. 
Dec 13 02:09:09.255499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828-rootfs.mount: Deactivated successfully. Dec 13 02:09:09.275989 env[1440]: time="2024-12-13T02:09:09.275936295Z" level=info msg="shim disconnected" id=d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828 Dec 13 02:09:09.276335 env[1440]: time="2024-12-13T02:09:09.276310597Z" level=warning msg="cleaning up after shim disconnected" id=d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828 namespace=k8s.io Dec 13 02:09:09.276426 env[1440]: time="2024-12-13T02:09:09.276411598Z" level=info msg="cleaning up dead shim" Dec 13 02:09:09.283922 env[1440]: time="2024-12-13T02:09:09.283882442Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3420 runtime=io.containerd.runc.v2\n" Dec 13 02:09:09.284262 env[1440]: time="2024-12-13T02:09:09.284231244Z" level=info msg="TearDown network for sandbox \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" successfully" Dec 13 02:09:09.284345 env[1440]: time="2024-12-13T02:09:09.284259644Z" level=info msg="StopPodSandbox for \"d2a1680bc5db79e9d9a5a2114270c4d28e8c2176c9d314d0a9058eb09f178828\" returns successfully" Dec 13 02:09:09.292716 kubelet[1951]: E1213 02:09:09.292652 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:09.449631 kubelet[1951]: I1213 02:09:09.449582 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-cgroup\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.449631 kubelet[1951]: I1213 02:09:09.449643 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-hostproc\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.449926 kubelet[1951]: I1213 02:09:09.449684 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq97d\" (UniqueName: \"kubernetes.io/projected/f1448780-5b7b-43d0-899b-d74b73b55c37-kube-api-access-mq97d\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.449926 kubelet[1951]: I1213 02:09:09.449710 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1448780-5b7b-43d0-899b-d74b73b55c37-clustermesh-secrets\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.449926 kubelet[1951]: I1213 02:09:09.449734 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-run\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.449926 kubelet[1951]: I1213 02:09:09.449755 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-etc-cni-netd\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.449926 kubelet[1951]: I1213 02:09:09.449780 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-config-path\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.449926 
kubelet[1951]: I1213 02:09:09.449806 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-host-proc-sys-net\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.450292 kubelet[1951]: I1213 02:09:09.449830 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cni-path\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.450292 kubelet[1951]: I1213 02:09:09.449857 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1448780-5b7b-43d0-899b-d74b73b55c37-hubble-tls\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.450292 kubelet[1951]: I1213 02:09:09.449882 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-xtables-lock\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.450292 kubelet[1951]: I1213 02:09:09.449909 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-host-proc-sys-kernel\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.450292 kubelet[1951]: I1213 02:09:09.449934 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-bpf-maps\") pod 
\"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.450292 kubelet[1951]: I1213 02:09:09.449962 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-lib-modules\") pod \"f1448780-5b7b-43d0-899b-d74b73b55c37\" (UID: \"f1448780-5b7b-43d0-899b-d74b73b55c37\") " Dec 13 02:09:09.450596 kubelet[1951]: I1213 02:09:09.450086 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.450596 kubelet[1951]: I1213 02:09:09.450139 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.450596 kubelet[1951]: I1213 02:09:09.450165 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-hostproc" (OuterVolumeSpecName: "hostproc") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.450790 kubelet[1951]: I1213 02:09:09.450765 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.451070 kubelet[1951]: I1213 02:09:09.451047 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cni-path" (OuterVolumeSpecName: "cni-path") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.451227 kubelet[1951]: I1213 02:09:09.451204 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.451302 kubelet[1951]: I1213 02:09:09.451238 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.457597 kubelet[1951]: I1213 02:09:09.453063 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.456790 systemd[1]: var-lib-kubelet-pods-f1448780\x2d5b7b\x2d43d0\x2d899b\x2dd74b73b55c37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmq97d.mount: Deactivated successfully. Dec 13 02:09:09.457885 kubelet[1951]: I1213 02:09:09.456391 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.457885 kubelet[1951]: I1213 02:09:09.456415 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:09.457885 kubelet[1951]: I1213 02:09:09.456505 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1448780-5b7b-43d0-899b-d74b73b55c37-kube-api-access-mq97d" (OuterVolumeSpecName: "kube-api-access-mq97d") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "kube-api-access-mq97d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:09:09.458643 kubelet[1951]: I1213 02:09:09.458618 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:09:09.463895 systemd[1]: var-lib-kubelet-pods-f1448780\x2d5b7b\x2d43d0\x2d899b\x2dd74b73b55c37-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:09:09.467230 systemd[1]: var-lib-kubelet-pods-f1448780\x2d5b7b\x2d43d0\x2d899b\x2dd74b73b55c37-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:09:09.468356 kubelet[1951]: I1213 02:09:09.468326 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1448780-5b7b-43d0-899b-d74b73b55c37-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:09:09.468556 kubelet[1951]: I1213 02:09:09.468526 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1448780-5b7b-43d0-899b-d74b73b55c37-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f1448780-5b7b-43d0-899b-d74b73b55c37" (UID: "f1448780-5b7b-43d0-899b-d74b73b55c37"). InnerVolumeSpecName "clustermesh-secrets". 
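The mount unit names in the cleanup lines (`var-lib-kubelet-pods-f1448780\x2d5b7b-…`) follow systemd's path-escaping convention: `/` separators become `-`, and characters that would be ambiguous in a unit name, such as a literal `-` in the pod UID or the `~` in `kubernetes.io~projected`, become `\xNN` hex escapes. An approximate sketch of that escaping (simplified; the authoritative tool is `systemd-escape --path`):

```python
def systemd_escape_path(path):
    # Approximation of `systemd-escape --path`: strip surrounding '/',
    # turn '/' into '-', keep [A-Za-z0-9_.] and hex-escape everything
    # else, which is why '-' shows up as \x2d and '~' as \x7e in the
    # mount unit names logged above.
    out = []
    for ch in path.strip('/'):
        if ch == '/':
            out.append('-')
        elif ch.isalnum() or ch in '_.':
            out.append(ch)
        else:
            out.append('\\x%02x' % ord(ch))
    return ''.join(out)

unit = systemd_escape_path('/var/lib/kubelet/pods/f1448780-5b7b')
```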
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:09:09.474478 kubelet[1951]: I1213 02:09:09.474456 1951 scope.go:117] "RemoveContainer" containerID="1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148" Dec 13 02:09:09.476447 env[1440]: time="2024-12-13T02:09:09.476405977Z" level=info msg="RemoveContainer for \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\"" Dec 13 02:09:09.479200 systemd[1]: Removed slice kubepods-burstable-podf1448780_5b7b_43d0_899b_d74b73b55c37.slice. Dec 13 02:09:09.479337 systemd[1]: kubepods-burstable-podf1448780_5b7b_43d0_899b_d74b73b55c37.slice: Consumed 6.379s CPU time. Dec 13 02:09:09.484949 env[1440]: time="2024-12-13T02:09:09.484913127Z" level=info msg="RemoveContainer for \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\" returns successfully" Dec 13 02:09:09.485211 kubelet[1951]: I1213 02:09:09.485189 1951 scope.go:117] "RemoveContainer" containerID="d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1" Dec 13 02:09:09.486248 env[1440]: time="2024-12-13T02:09:09.486221035Z" level=info msg="RemoveContainer for \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\"" Dec 13 02:09:09.492943 env[1440]: time="2024-12-13T02:09:09.492911175Z" level=info msg="RemoveContainer for \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\" returns successfully" Dec 13 02:09:09.493144 kubelet[1951]: I1213 02:09:09.493122 1951 scope.go:117] "RemoveContainer" containerID="5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb" Dec 13 02:09:09.494630 env[1440]: time="2024-12-13T02:09:09.494369183Z" level=info msg="RemoveContainer for \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\"" Dec 13 02:09:09.503531 env[1440]: time="2024-12-13T02:09:09.503489737Z" level=info msg="RemoveContainer for \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\" returns successfully" Dec 13 02:09:09.503734 kubelet[1951]: I1213 
02:09:09.503708 1951 scope.go:117] "RemoveContainer" containerID="994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a" Dec 13 02:09:09.504972 env[1440]: time="2024-12-13T02:09:09.504727744Z" level=info msg="RemoveContainer for \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\"" Dec 13 02:09:09.513747 env[1440]: time="2024-12-13T02:09:09.513704797Z" level=info msg="RemoveContainer for \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\" returns successfully" Dec 13 02:09:09.514016 kubelet[1951]: I1213 02:09:09.513977 1951 scope.go:117] "RemoveContainer" containerID="a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0" Dec 13 02:09:09.515042 env[1440]: time="2024-12-13T02:09:09.515010805Z" level=info msg="RemoveContainer for \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\"" Dec 13 02:09:09.524285 env[1440]: time="2024-12-13T02:09:09.524241959Z" level=info msg="RemoveContainer for \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\" returns successfully" Dec 13 02:09:09.524485 kubelet[1951]: I1213 02:09:09.524462 1951 scope.go:117] "RemoveContainer" containerID="1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148" Dec 13 02:09:09.524819 env[1440]: time="2024-12-13T02:09:09.524734262Z" level=error msg="ContainerStatus for \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\": not found" Dec 13 02:09:09.524975 kubelet[1951]: E1213 02:09:09.524951 1951 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\": not found" containerID="1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148" Dec 13 02:09:09.525094 kubelet[1951]: 
I1213 02:09:09.524984 1951 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148"} err="failed to get container status \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d782e655fca9578f58abb743e77952fd56bb23d7bc3a069d3c7b75300fc0148\": not found" Dec 13 02:09:09.525160 kubelet[1951]: I1213 02:09:09.525099 1951 scope.go:117] "RemoveContainer" containerID="d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1" Dec 13 02:09:09.525390 env[1440]: time="2024-12-13T02:09:09.525335666Z" level=error msg="ContainerStatus for \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\": not found" Dec 13 02:09:09.525526 kubelet[1951]: E1213 02:09:09.525502 1951 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\": not found" containerID="d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1" Dec 13 02:09:09.525594 kubelet[1951]: I1213 02:09:09.525544 1951 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1"} err="failed to get container status \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9641854f259e1cd9d0c26419ff551ac674a1e74b6c0d857e2e3572ffc1794c1\": not found" Dec 13 02:09:09.525594 kubelet[1951]: I1213 02:09:09.525567 1951 scope.go:117] "RemoveContainer" 
containerID="5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb" Dec 13 02:09:09.525843 env[1440]: time="2024-12-13T02:09:09.525797069Z" level=error msg="ContainerStatus for \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\": not found" Dec 13 02:09:09.526015 kubelet[1951]: E1213 02:09:09.525978 1951 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\": not found" containerID="5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb" Dec 13 02:09:09.526094 kubelet[1951]: I1213 02:09:09.526024 1951 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb"} err="failed to get container status \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a4e8d92445f89c067f23e875bd15e004b80f06b1fec0869864738b83a6c9bdb\": not found" Dec 13 02:09:09.526094 kubelet[1951]: I1213 02:09:09.526046 1951 scope.go:117] "RemoveContainer" containerID="994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a" Dec 13 02:09:09.526267 env[1440]: time="2024-12-13T02:09:09.526217671Z" level=error msg="ContainerStatus for \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\": not found" Dec 13 02:09:09.526367 kubelet[1951]: E1213 02:09:09.526345 1951 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\": not found" containerID="994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a" Dec 13 02:09:09.526428 kubelet[1951]: I1213 02:09:09.526378 1951 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a"} err="failed to get container status \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\": rpc error: code = NotFound desc = an error occurred when try to find container \"994747f13f85797974eacb251d93a7e5635e987fc3b70bc79fcd4a85b588e15a\": not found" Dec 13 02:09:09.526428 kubelet[1951]: I1213 02:09:09.526397 1951 scope.go:117] "RemoveContainer" containerID="a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0" Dec 13 02:09:09.526668 env[1440]: time="2024-12-13T02:09:09.526624973Z" level=error msg="ContainerStatus for \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\": not found" Dec 13 02:09:09.526783 kubelet[1951]: E1213 02:09:09.526760 1951 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\": not found" containerID="a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0" Dec 13 02:09:09.526843 kubelet[1951]: I1213 02:09:09.526792 1951 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0"} err="failed to get container status \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"a4e0f0321c9533ae4ac4c3a3d459018ac8bfd15bb767c7eb8523b94c7ece24c0\": not found" Dec 13 02:09:09.550209 kubelet[1951]: I1213 02:09:09.550159 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-run\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550209 kubelet[1951]: I1213 02:09:09.550194 1951 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-etc-cni-netd\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550209 kubelet[1951]: I1213 02:09:09.550208 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-config-path\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550209 kubelet[1951]: I1213 02:09:09.550220 1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-host-proc-sys-net\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550477 kubelet[1951]: I1213 02:09:09.550230 1951 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1448780-5b7b-43d0-899b-d74b73b55c37-clustermesh-secrets\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550477 kubelet[1951]: I1213 02:09:09.550240 1951 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cni-path\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550477 kubelet[1951]: I1213 02:09:09.550249 1951 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1448780-5b7b-43d0-899b-d74b73b55c37-hubble-tls\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 
02:09:09.550477 kubelet[1951]: I1213 02:09:09.550261 1951 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-xtables-lock\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550477 kubelet[1951]: I1213 02:09:09.550270 1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-host-proc-sys-kernel\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550477 kubelet[1951]: I1213 02:09:09.550281 1951 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-bpf-maps\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550477 kubelet[1951]: I1213 02:09:09.550290 1951 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-lib-modules\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550477 kubelet[1951]: I1213 02:09:09.550300 1951 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-hostproc\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550673 kubelet[1951]: I1213 02:09:09.550309 1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mq97d\" (UniqueName: \"kubernetes.io/projected/f1448780-5b7b-43d0-899b-d74b73b55c37-kube-api-access-mq97d\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:09.550673 kubelet[1951]: I1213 02:09:09.550319 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1448780-5b7b-43d0-899b-d74b73b55c37-cilium-cgroup\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:10.165598 kubelet[1951]: E1213 02:09:10.165546 1951 cpu_manager.go:395] "RemoveStaleState: 
removing container" podUID="f1448780-5b7b-43d0-899b-d74b73b55c37" containerName="cilium-agent" Dec 13 02:09:10.165598 kubelet[1951]: E1213 02:09:10.165581 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1448780-5b7b-43d0-899b-d74b73b55c37" containerName="clean-cilium-state" Dec 13 02:09:10.165598 kubelet[1951]: E1213 02:09:10.165592 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1448780-5b7b-43d0-899b-d74b73b55c37" containerName="mount-bpf-fs" Dec 13 02:09:10.165598 kubelet[1951]: E1213 02:09:10.165601 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1448780-5b7b-43d0-899b-d74b73b55c37" containerName="mount-cgroup" Dec 13 02:09:10.165984 kubelet[1951]: E1213 02:09:10.165615 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1448780-5b7b-43d0-899b-d74b73b55c37" containerName="apply-sysctl-overwrites" Dec 13 02:09:10.165984 kubelet[1951]: I1213 02:09:10.165647 1951 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1448780-5b7b-43d0-899b-d74b73b55c37" containerName="cilium-agent" Dec 13 02:09:10.171763 systemd[1]: Created slice kubepods-besteffort-pod7e5fdd44_777c_4975_9f4f_8ef098e718d8.slice. Dec 13 02:09:10.209728 systemd[1]: Created slice kubepods-burstable-podde4d9fe5_9aff_4faf_a2b7_c0ecc1803be9.slice. 
Dec 13 02:09:10.293065 kubelet[1951]: E1213 02:09:10.292969 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:10.354332 kubelet[1951]: I1213 02:09:10.354277 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-etc-cni-netd\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354549 kubelet[1951]: I1213 02:09:10.354343 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-ipsec-secrets\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354549 kubelet[1951]: I1213 02:09:10.354370 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-host-proc-sys-net\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354549 kubelet[1951]: I1213 02:09:10.354398 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962lv\" (UniqueName: \"kubernetes.io/projected/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-kube-api-access-962lv\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354549 kubelet[1951]: I1213 02:09:10.354420 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-config-path\") pod 
\"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354549 kubelet[1951]: I1213 02:09:10.354441 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-run\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354818 kubelet[1951]: I1213 02:09:10.354462 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-bpf-maps\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354818 kubelet[1951]: I1213 02:09:10.354482 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cni-path\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354818 kubelet[1951]: I1213 02:09:10.354504 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-xtables-lock\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354818 kubelet[1951]: I1213 02:09:10.354530 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-host-proc-sys-kernel\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.354818 kubelet[1951]: I1213 
02:09:10.354557 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e5fdd44-777c-4975-9f4f-8ef098e718d8-cilium-config-path\") pod \"cilium-operator-5d85765b45-gwh24\" (UID: \"7e5fdd44-777c-4975-9f4f-8ef098e718d8\") " pod="kube-system/cilium-operator-5d85765b45-gwh24" Dec 13 02:09:10.355048 kubelet[1951]: I1213 02:09:10.354582 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-hostproc\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.355048 kubelet[1951]: I1213 02:09:10.354607 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-cgroup\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.355048 kubelet[1951]: I1213 02:09:10.354633 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-clustermesh-secrets\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.355048 kubelet[1951]: I1213 02:09:10.354658 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-hubble-tls\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.355048 kubelet[1951]: I1213 02:09:10.354685 1951 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz4hb\" (UniqueName: \"kubernetes.io/projected/7e5fdd44-777c-4975-9f4f-8ef098e718d8-kube-api-access-gz4hb\") pod \"cilium-operator-5d85765b45-gwh24\" (UID: \"7e5fdd44-777c-4975-9f4f-8ef098e718d8\") " pod="kube-system/cilium-operator-5d85765b45-gwh24" Dec 13 02:09:10.355209 kubelet[1951]: I1213 02:09:10.354712 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-lib-modules\") pod \"cilium-x52df\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " pod="kube-system/cilium-x52df" Dec 13 02:09:10.518474 env[1440]: time="2024-12-13T02:09:10.518151679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x52df,Uid:de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:10.550937 env[1440]: time="2024-12-13T02:09:10.550856969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:10.551217 env[1440]: time="2024-12-13T02:09:10.550903469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:10.551217 env[1440]: time="2024-12-13T02:09:10.550917969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:10.551630 env[1440]: time="2024-12-13T02:09:10.551574973Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5 pid=3451 runtime=io.containerd.runc.v2 Dec 13 02:09:10.564539 systemd[1]: Started cri-containerd-742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5.scope. 
Dec 13 02:09:10.593952 env[1440]: time="2024-12-13T02:09:10.593905319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x52df,Uid:de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9,Namespace:kube-system,Attempt:0,} returns sandbox id \"742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5\"" Dec 13 02:09:10.596745 env[1440]: time="2024-12-13T02:09:10.596707535Z" level=info msg="CreateContainer within sandbox \"742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:09:10.644078 env[1440]: time="2024-12-13T02:09:10.644012510Z" level=info msg="CreateContainer within sandbox \"742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246\"" Dec 13 02:09:10.644960 env[1440]: time="2024-12-13T02:09:10.644913716Z" level=info msg="StartContainer for \"f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246\"" Dec 13 02:09:10.662454 systemd[1]: Started cri-containerd-f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246.scope. Dec 13 02:09:10.674908 systemd[1]: cri-containerd-f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246.scope: Deactivated successfully. 
Dec 13 02:09:10.710395 env[1440]: time="2024-12-13T02:09:10.710330196Z" level=info msg="shim disconnected" id=f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246 Dec 13 02:09:10.710395 env[1440]: time="2024-12-13T02:09:10.710396396Z" level=warning msg="cleaning up after shim disconnected" id=f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246 namespace=k8s.io Dec 13 02:09:10.710395 env[1440]: time="2024-12-13T02:09:10.710409497Z" level=info msg="cleaning up dead shim" Dec 13 02:09:10.719461 env[1440]: time="2024-12-13T02:09:10.719398749Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3511 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:09:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 02:09:10.719849 env[1440]: time="2024-12-13T02:09:10.719723751Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Dec 13 02:09:10.722143 env[1440]: time="2024-12-13T02:09:10.722084764Z" level=error msg="Failed to pipe stderr of container \"f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246\"" error="reading from a closed fifo" Dec 13 02:09:10.724250 env[1440]: time="2024-12-13T02:09:10.724204777Z" level=error msg="Failed to pipe stdout of container \"f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246\"" error="reading from a closed fifo" Dec 13 02:09:10.729172 env[1440]: time="2024-12-13T02:09:10.729115305Z" level=error msg="StartContainer for \"f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 02:09:10.729445 kubelet[1951]: E1213 02:09:10.729399 1951 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246" Dec 13 02:09:10.730957 kubelet[1951]: E1213 02:09:10.730925 1951 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 02:09:10.730957 kubelet[1951]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 02:09:10.730957 kubelet[1951]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 02:09:10.730957 kubelet[1951]: rm /hostbin/cilium-mount Dec 13 02:09:10.731165 kubelet[1951]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-962lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-x52df_kube-system(de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 02:09:10.731165 kubelet[1951]: > logger="UnhandledError" Dec 13 02:09:10.732115 kubelet[1951]: E1213 02:09:10.732081 1951 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x52df" podUID="de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" Dec 13 02:09:10.776232 env[1440]: time="2024-12-13T02:09:10.775521675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gwh24,Uid:7e5fdd44-777c-4975-9f4f-8ef098e718d8,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:10.813317 env[1440]: time="2024-12-13T02:09:10.813246695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:10.813528 env[1440]: time="2024-12-13T02:09:10.813287495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:10.813528 env[1440]: time="2024-12-13T02:09:10.813300795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:10.813528 env[1440]: time="2024-12-13T02:09:10.813428496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c58a33a8c3f541f3cc1471e10ee3c699ca0f5a747a7e734c371547968c5ec54e pid=3532 runtime=io.containerd.runc.v2 Dec 13 02:09:10.829055 systemd[1]: Started cri-containerd-c58a33a8c3f541f3cc1471e10ee3c699ca0f5a747a7e734c371547968c5ec54e.scope. 
Dec 13 02:09:10.869423 env[1440]: time="2024-12-13T02:09:10.869381921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gwh24,Uid:7e5fdd44-777c-4975-9f4f-8ef098e718d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c58a33a8c3f541f3cc1471e10ee3c699ca0f5a747a7e734c371547968c5ec54e\"" Dec 13 02:09:10.871203 env[1440]: time="2024-12-13T02:09:10.871165831Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:09:11.294036 kubelet[1951]: E1213 02:09:11.293937 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:11.305323 kubelet[1951]: I1213 02:09:11.305292 1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1448780-5b7b-43d0-899b-d74b73b55c37" path="/var/lib/kubelet/pods/f1448780-5b7b-43d0-899b-d74b73b55c37/volumes" Dec 13 02:09:11.485032 env[1440]: time="2024-12-13T02:09:11.484978862Z" level=info msg="StopPodSandbox for \"742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5\"" Dec 13 02:09:11.485294 env[1440]: time="2024-12-13T02:09:11.485264463Z" level=info msg="Container to stop \"f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:09:11.487762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5-shm.mount: Deactivated successfully. Dec 13 02:09:11.495176 systemd[1]: cri-containerd-742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5.scope: Deactivated successfully. Dec 13 02:09:11.512559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5-rootfs.mount: Deactivated successfully. 
Dec 13 02:09:11.531839 env[1440]: time="2024-12-13T02:09:11.531775030Z" level=info msg="shim disconnected" id=742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5 Dec 13 02:09:11.531839 env[1440]: time="2024-12-13T02:09:11.531830931Z" level=warning msg="cleaning up after shim disconnected" id=742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5 namespace=k8s.io Dec 13 02:09:11.532355 env[1440]: time="2024-12-13T02:09:11.531844831Z" level=info msg="cleaning up dead shim" Dec 13 02:09:11.540435 env[1440]: time="2024-12-13T02:09:11.540399180Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3587 runtime=io.containerd.runc.v2\n" Dec 13 02:09:11.540739 env[1440]: time="2024-12-13T02:09:11.540705981Z" level=info msg="TearDown network for sandbox \"742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5\" successfully" Dec 13 02:09:11.540831 env[1440]: time="2024-12-13T02:09:11.540737382Z" level=info msg="StopPodSandbox for \"742bc9f8b066ed74643f5f4862d6c85347181b2f2da67825eea28b70086c5ac5\" returns successfully" Dec 13 02:09:11.663258 kubelet[1951]: I1213 02:09:11.663107 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-etc-cni-netd\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.663788 kubelet[1951]: I1213 02:09:11.663191 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.663788 kubelet[1951]: I1213 02:09:11.663747 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-host-proc-sys-net\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.663951 kubelet[1951]: I1213 02:09:11.663815 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-bpf-maps\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.663951 kubelet[1951]: I1213 02:09:11.663913 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cni-path\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.663951 kubelet[1951]: I1213 02:09:11.663936 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-xtables-lock\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.664116 kubelet[1951]: I1213 02:09:11.663860 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.664116 kubelet[1951]: I1213 02:09:11.663879 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.664116 kubelet[1951]: I1213 02:09:11.663990 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cni-path" (OuterVolumeSpecName: "cni-path") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.664116 kubelet[1951]: I1213 02:09:11.664055 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-config-path\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.664116 kubelet[1951]: I1213 02:09:11.664102 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664353 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-run\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664394 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-hostproc\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664432 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-cgroup\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664461 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-clustermesh-secrets\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664501 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-ipsec-secrets\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664527 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-962lv\" (UniqueName: 
\"kubernetes.io/projected/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-kube-api-access-962lv\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664548 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-host-proc-sys-kernel\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664580 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-lib-modules\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664603 1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-hubble-tls\") pod \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\" (UID: \"de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9\") " Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664660 1951 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-bpf-maps\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664676 1951 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cni-path\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664687 1951 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-xtables-lock\") on node 
\"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664698 1951 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-etc-cni-netd\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.664709 1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-host-proc-sys-net\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.665449 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.665496 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-hostproc" (OuterVolumeSpecName: "hostproc") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.666904 kubelet[1951]: I1213 02:09:11.665518 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.667704 kubelet[1951]: I1213 02:09:11.666767 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:09:11.669690 kubelet[1951]: I1213 02:09:11.669661 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.669833 kubelet[1951]: I1213 02:09:11.669815 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:09:11.673118 systemd[1]: var-lib-kubelet-pods-de4d9fe5\x2d9aff\x2d4faf\x2da2b7\x2dc0ecc1803be9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:09:11.673256 systemd[1]: var-lib-kubelet-pods-de4d9fe5\x2d9aff\x2d4faf\x2da2b7\x2dc0ecc1803be9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d962lv.mount: Deactivated successfully. Dec 13 02:09:11.676865 systemd[1]: var-lib-kubelet-pods-de4d9fe5\x2d9aff\x2d4faf\x2da2b7\x2dc0ecc1803be9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:09:11.678917 kubelet[1951]: I1213 02:09:11.678889 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:09:11.679368 kubelet[1951]: I1213 02:09:11.679344 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-kube-api-access-962lv" (OuterVolumeSpecName: "kube-api-access-962lv") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "kube-api-access-962lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:09:11.679601 kubelet[1951]: I1213 02:09:11.679580 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:09:11.681705 kubelet[1951]: I1213 02:09:11.681677 1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" (UID: "de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:09:11.765524 kubelet[1951]: I1213 02:09:11.765483 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-cgroup\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.765524 kubelet[1951]: I1213 02:09:11.765520 1951 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-clustermesh-secrets\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.765808 kubelet[1951]: I1213 02:09:11.765534 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-config-path\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.765808 kubelet[1951]: I1213 02:09:11.765550 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-run\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.765808 kubelet[1951]: I1213 02:09:11.765561 1951 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-hostproc\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.765808 kubelet[1951]: I1213 02:09:11.765573 1951 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-cilium-ipsec-secrets\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.765808 kubelet[1951]: I1213 02:09:11.765610 1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-962lv\" (UniqueName: \"kubernetes.io/projected/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-kube-api-access-962lv\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 
02:09:11.766027 kubelet[1951]: I1213 02:09:11.765913 1951 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-host-proc-sys-kernel\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.766027 kubelet[1951]: I1213 02:09:11.765931 1951 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-lib-modules\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:11.766027 kubelet[1951]: I1213 02:09:11.765943 1951 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9-hubble-tls\") on node \"10.200.8.12\" DevicePath \"\"" Dec 13 02:09:12.295121 kubelet[1951]: E1213 02:09:12.295082 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:12.374114 kubelet[1951]: E1213 02:09:12.374061 1951 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:09:12.475703 systemd[1]: var-lib-kubelet-pods-de4d9fe5\x2d9aff\x2d4faf\x2da2b7\x2dc0ecc1803be9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:09:12.488027 kubelet[1951]: I1213 02:09:12.487395 1951 scope.go:117] "RemoveContainer" containerID="f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246" Dec 13 02:09:12.493811 systemd[1]: Removed slice kubepods-burstable-podde4d9fe5_9aff_4faf_a2b7_c0ecc1803be9.slice. 
Dec 13 02:09:12.497072 env[1440]: time="2024-12-13T02:09:12.497033527Z" level=info msg="RemoveContainer for \"f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246\"" Dec 13 02:09:12.504837 env[1440]: time="2024-12-13T02:09:12.504790971Z" level=info msg="RemoveContainer for \"f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246\" returns successfully" Dec 13 02:09:12.532365 kubelet[1951]: E1213 02:09:12.532335 1951 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" containerName="mount-cgroup" Dec 13 02:09:12.533022 kubelet[1951]: I1213 02:09:12.532546 1951 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" containerName="mount-cgroup" Dec 13 02:09:12.540373 systemd[1]: Created slice kubepods-burstable-pod33ccb3d6_6e37_4dfe_a600_31c121bd17a7.slice. Dec 13 02:09:12.670579 kubelet[1951]: I1213 02:09:12.670454 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-cilium-cgroup\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.670810 kubelet[1951]: I1213 02:09:12.670789 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-cilium-ipsec-secrets\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.670932 kubelet[1951]: I1213 02:09:12.670918 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-cilium-run\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " 
pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671073 kubelet[1951]: I1213 02:09:12.671056 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-hostproc\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671193 kubelet[1951]: I1213 02:09:12.671179 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-cni-path\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671348 kubelet[1951]: I1213 02:09:12.671330 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-cilium-config-path\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671467 kubelet[1951]: I1213 02:09:12.671453 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75q2m\" (UniqueName: \"kubernetes.io/projected/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-kube-api-access-75q2m\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671570 kubelet[1951]: I1213 02:09:12.671558 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-bpf-maps\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671670 kubelet[1951]: I1213 02:09:12.671652 1951 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-host-proc-sys-net\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671770 kubelet[1951]: I1213 02:09:12.671758 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-etc-cni-netd\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671868 kubelet[1951]: I1213 02:09:12.671849 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-clustermesh-secrets\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.671960 kubelet[1951]: I1213 02:09:12.671948 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-xtables-lock\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.672065 kubelet[1951]: I1213 02:09:12.672051 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-host-proc-sys-kernel\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.672170 kubelet[1951]: I1213 02:09:12.672156 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-hubble-tls\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.672263 kubelet[1951]: I1213 02:09:12.672252 1951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33ccb3d6-6e37-4dfe-a600-31c121bd17a7-lib-modules\") pod \"cilium-zdkdv\" (UID: \"33ccb3d6-6e37-4dfe-a600-31c121bd17a7\") " pod="kube-system/cilium-zdkdv" Dec 13 02:09:12.853709 env[1440]: time="2024-12-13T02:09:12.853546344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdkdv,Uid:33ccb3d6-6e37-4dfe-a600-31c121bd17a7,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:12.911737 env[1440]: time="2024-12-13T02:09:12.911667673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:12.911951 env[1440]: time="2024-12-13T02:09:12.911925875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:12.912080 env[1440]: time="2024-12-13T02:09:12.912057675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:12.912299 env[1440]: time="2024-12-13T02:09:12.912272477Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6 pid=3614 runtime=io.containerd.runc.v2 Dec 13 02:09:12.937554 systemd[1]: Started cri-containerd-d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6.scope. 
Dec 13 02:09:12.976356 env[1440]: time="2024-12-13T02:09:12.976306639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zdkdv,Uid:33ccb3d6-6e37-4dfe-a600-31c121bd17a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\"" Dec 13 02:09:12.980483 env[1440]: time="2024-12-13T02:09:12.980454062Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:09:13.046521 env[1440]: time="2024-12-13T02:09:13.046440732Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522\"" Dec 13 02:09:13.047245 env[1440]: time="2024-12-13T02:09:13.047208636Z" level=info msg="StartContainer for \"5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522\"" Dec 13 02:09:13.060142 env[1440]: time="2024-12-13T02:09:13.059946408Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:13.063949 systemd[1]: Started cri-containerd-5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522.scope. 
Dec 13 02:09:13.070069 env[1440]: time="2024-12-13T02:09:13.070030964Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:13.079469 env[1440]: time="2024-12-13T02:09:13.079438616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:13.080173 env[1440]: time="2024-12-13T02:09:13.079777418Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:09:13.083446 env[1440]: time="2024-12-13T02:09:13.083418839Z" level=info msg="CreateContainer within sandbox \"c58a33a8c3f541f3cc1471e10ee3c699ca0f5a747a7e734c371547968c5ec54e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:09:13.108032 env[1440]: time="2024-12-13T02:09:13.107526173Z" level=info msg="StartContainer for \"5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522\" returns successfully" Dec 13 02:09:13.114887 systemd[1]: cri-containerd-5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522.scope: Deactivated successfully. 
Dec 13 02:09:13.119439 env[1440]: time="2024-12-13T02:09:13.119389039Z" level=info msg="CreateContainer within sandbox \"c58a33a8c3f541f3cc1471e10ee3c699ca0f5a747a7e734c371547968c5ec54e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0818341d612dcc5169e3e9c02657fa1540cffab88d9c8ccf70535d04f90b2e39\"" Dec 13 02:09:13.120291 env[1440]: time="2024-12-13T02:09:13.120252744Z" level=info msg="StartContainer for \"0818341d612dcc5169e3e9c02657fa1540cffab88d9c8ccf70535d04f90b2e39\"" Dec 13 02:09:13.144149 systemd[1]: Started cri-containerd-0818341d612dcc5169e3e9c02657fa1540cffab88d9c8ccf70535d04f90b2e39.scope. Dec 13 02:09:13.606619 kubelet[1951]: E1213 02:09:13.295438 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:13.614032 kubelet[1951]: I1213 02:09:13.613459 1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9" path="/var/lib/kubelet/pods/de4d9fe5-9aff-4faf-a2b7-c0ecc1803be9/volumes" Dec 13 02:09:13.658846 env[1440]: time="2024-12-13T02:09:13.658764850Z" level=info msg="StartContainer for \"0818341d612dcc5169e3e9c02657fa1540cffab88d9c8ccf70535d04f90b2e39\" returns successfully" Dec 13 02:09:13.660524 env[1440]: time="2024-12-13T02:09:13.660477160Z" level=info msg="shim disconnected" id=5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522 Dec 13 02:09:13.660699 env[1440]: time="2024-12-13T02:09:13.660676661Z" level=warning msg="cleaning up after shim disconnected" id=5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522 namespace=k8s.io Dec 13 02:09:13.660795 env[1440]: time="2024-12-13T02:09:13.660781762Z" level=info msg="cleaning up dead shim" Dec 13 02:09:13.672064 env[1440]: time="2024-12-13T02:09:13.672010424Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3735 
runtime=io.containerd.runc.v2\n" Dec 13 02:09:13.816242 kubelet[1951]: W1213 02:09:13.816192 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde4d9fe5_9aff_4faf_a2b7_c0ecc1803be9.slice/cri-containerd-f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246.scope WatchSource:0}: container "f6c4e1466ddee10a0a6307dd98df9c2da58c1cae2b322ad9c2b2a929f6621246" in namespace "k8s.io": not found Dec 13 02:09:14.295964 kubelet[1951]: E1213 02:09:14.295913 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:09:14.500077 env[1440]: time="2024-12-13T02:09:14.500032010Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:09:14.509444 kubelet[1951]: I1213 02:09:14.509388 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gwh24" podStartSLOduration=2.298542061 podStartE2EDuration="4.509366961s" podCreationTimestamp="2024-12-13 02:09:10 +0000 UTC" firstStartedPulling="2024-12-13 02:09:10.870822429 +0000 UTC m=+74.912663977" lastFinishedPulling="2024-12-13 02:09:13.081647329 +0000 UTC m=+77.123488877" observedRunningTime="2024-12-13 02:09:14.509248661 +0000 UTC m=+78.551090309" watchObservedRunningTime="2024-12-13 02:09:14.509366961 +0000 UTC m=+78.551208509" Dec 13 02:09:14.543679 env[1440]: time="2024-12-13T02:09:14.543632050Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d\"" Dec 13 02:09:14.544269 env[1440]: time="2024-12-13T02:09:14.544235454Z" level=info msg="StartContainer for 
\"293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d\""
Dec 13 02:09:14.567875 systemd[1]: Started cri-containerd-293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d.scope.
Dec 13 02:09:14.602540 env[1440]: time="2024-12-13T02:09:14.602493375Z" level=info msg="StartContainer for \"293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d\" returns successfully"
Dec 13 02:09:14.604632 systemd[1]: cri-containerd-293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d.scope: Deactivated successfully.
Dec 13 02:09:14.622932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d-rootfs.mount: Deactivated successfully.
Dec 13 02:09:14.647750 env[1440]: time="2024-12-13T02:09:14.647687624Z" level=info msg="shim disconnected" id=293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d
Dec 13 02:09:14.647750 env[1440]: time="2024-12-13T02:09:14.647746124Z" level=warning msg="cleaning up after shim disconnected" id=293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d namespace=k8s.io
Dec 13 02:09:14.648068 env[1440]: time="2024-12-13T02:09:14.647758924Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:14.655854 env[1440]: time="2024-12-13T02:09:14.655812168Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3801 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:15.296230 kubelet[1951]: E1213 02:09:15.296161 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:15.504236 env[1440]: time="2024-12-13T02:09:15.504190307Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:09:15.567924 env[1440]: time="2024-12-13T02:09:15.567774153Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede\""
Dec 13 02:09:15.568665 env[1440]: time="2024-12-13T02:09:15.568633157Z" level=info msg="StartContainer for \"b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede\""
Dec 13 02:09:15.601858 systemd[1]: Started cri-containerd-b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede.scope.
Dec 13 02:09:15.630468 systemd[1]: cri-containerd-b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede.scope: Deactivated successfully.
Dec 13 02:09:15.636949 env[1440]: time="2024-12-13T02:09:15.636899629Z" level=info msg="StartContainer for \"b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede\" returns successfully"
Dec 13 02:09:15.656424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede-rootfs.mount: Deactivated successfully.
Dec 13 02:09:15.678684 env[1440]: time="2024-12-13T02:09:15.678618855Z" level=info msg="shim disconnected" id=b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede
Dec 13 02:09:15.678951 env[1440]: time="2024-12-13T02:09:15.678686756Z" level=warning msg="cleaning up after shim disconnected" id=b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede namespace=k8s.io
Dec 13 02:09:15.678951 env[1440]: time="2024-12-13T02:09:15.678702056Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:15.687394 env[1440]: time="2024-12-13T02:09:15.687354903Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3863 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:16.296805 kubelet[1951]: E1213 02:09:16.296739 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:16.513122 env[1440]: time="2024-12-13T02:09:16.512993159Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:09:16.555591 env[1440]: time="2024-12-13T02:09:16.555232585Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9\""
Dec 13 02:09:16.555945 env[1440]: time="2024-12-13T02:09:16.555900589Z" level=info msg="StartContainer for \"45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9\""
Dec 13 02:09:16.583299 systemd[1]: Started cri-containerd-45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9.scope.
Dec 13 02:09:16.603094 systemd[1]: cri-containerd-45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9.scope: Deactivated successfully.
Dec 13 02:09:16.606710 env[1440]: time="2024-12-13T02:09:16.606490161Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ccb3d6_6e37_4dfe_a600_31c121bd17a7.slice/cri-containerd-45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9.scope/memory.events\": no such file or directory"
Dec 13 02:09:16.612700 env[1440]: time="2024-12-13T02:09:16.612550093Z" level=info msg="StartContainer for \"45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9\" returns successfully"
Dec 13 02:09:16.628985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9-rootfs.mount: Deactivated successfully.
Dec 13 02:09:16.645444 env[1440]: time="2024-12-13T02:09:16.645394070Z" level=info msg="shim disconnected" id=45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9
Dec 13 02:09:16.645657 env[1440]: time="2024-12-13T02:09:16.645449270Z" level=warning msg="cleaning up after shim disconnected" id=45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9 namespace=k8s.io
Dec 13 02:09:16.645657 env[1440]: time="2024-12-13T02:09:16.645462670Z" level=info msg="cleaning up dead shim"
Dec 13 02:09:16.653268 env[1440]: time="2024-12-13T02:09:16.653230312Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3916 runtime=io.containerd.runc.v2\n"
Dec 13 02:09:16.933981 kubelet[1951]: W1213 02:09:16.933799 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ccb3d6_6e37_4dfe_a600_31c121bd17a7.slice/cri-containerd-5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522.scope WatchSource:0}: task 5fbe2de54c8d889c3bee9ee8c1de05f39dca780a84f5432c6e311ad2f192d522 not found: not found
Dec 13 02:09:17.229291 kubelet[1951]: E1213 02:09:17.229153 1951 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:17.297625 kubelet[1951]: E1213 02:09:17.297574 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:17.375863 kubelet[1951]: E1213 02:09:17.375814 1951 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:09:17.525576 env[1440]: time="2024-12-13T02:09:17.525533962Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:09:17.556181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267126803.mount: Deactivated successfully.
Dec 13 02:09:17.564489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889837949.mount: Deactivated successfully.
Dec 13 02:09:17.577122 env[1440]: time="2024-12-13T02:09:17.577036035Z" level=info msg="CreateContainer within sandbox \"d678c2a887947f5e942635375a74afa6296ab124eedcb0c80f0494f7a2e7fed6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f622a455fbc74ed28bbf48e787bdef8ebd40e04ececaa568b030f51c24d40796\""
Dec 13 02:09:17.577680 env[1440]: time="2024-12-13T02:09:17.577619738Z" level=info msg="StartContainer for \"f622a455fbc74ed28bbf48e787bdef8ebd40e04ececaa568b030f51c24d40796\""
Dec 13 02:09:17.598329 systemd[1]: Started cri-containerd-f622a455fbc74ed28bbf48e787bdef8ebd40e04ececaa568b030f51c24d40796.scope.
Dec 13 02:09:17.634046 env[1440]: time="2024-12-13T02:09:17.633977337Z" level=info msg="StartContainer for \"f622a455fbc74ed28bbf48e787bdef8ebd40e04ececaa568b030f51c24d40796\" returns successfully"
Dec 13 02:09:17.996030 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:09:18.298889 kubelet[1951]: E1213 02:09:18.298740 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:18.537707 kubelet[1951]: I1213 02:09:18.537653 1951 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zdkdv" podStartSLOduration=6.537636396 podStartE2EDuration="6.537636396s" podCreationTimestamp="2024-12-13 02:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:18.537602296 +0000 UTC m=+82.579443944" watchObservedRunningTime="2024-12-13 02:09:18.537636396 +0000 UTC m=+82.579478044"
Dec 13 02:09:18.743100 systemd[1]: run-containerd-runc-k8s.io-f622a455fbc74ed28bbf48e787bdef8ebd40e04ececaa568b030f51c24d40796-runc.V5QgHm.mount: Deactivated successfully.
Dec 13 02:09:19.299101 kubelet[1951]: E1213 02:09:19.299048 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:20.048739 kubelet[1951]: W1213 02:09:20.048688 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ccb3d6_6e37_4dfe_a600_31c121bd17a7.slice/cri-containerd-293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d.scope WatchSource:0}: task 293e6efe73e9e698d97be968f8db43a947eb5e3302b274437a1ed9326ca8238d not found: not found
Dec 13 02:09:20.299298 kubelet[1951]: E1213 02:09:20.299186 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:20.615121 systemd-networkd[1585]: lxc_health: Link UP
Dec 13 02:09:20.626060 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:09:20.626338 systemd-networkd[1585]: lxc_health: Gained carrier
Dec 13 02:09:21.299425 kubelet[1951]: E1213 02:09:21.299376 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:21.701203 systemd-networkd[1585]: lxc_health: Gained IPv6LL
Dec 13 02:09:22.300640 kubelet[1951]: E1213 02:09:22.300578 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:23.161986 kubelet[1951]: W1213 02:09:23.160171 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ccb3d6_6e37_4dfe_a600_31c121bd17a7.slice/cri-containerd-b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede.scope WatchSource:0}: task b14dd671fa902528a0882a48021966b4380679387ec03ea0bf1305ef4cce7ede not found: not found
Dec 13 02:09:23.301261 kubelet[1951]: E1213 02:09:23.301206 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:24.301630 kubelet[1951]: E1213 02:09:24.301584 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:25.304579 kubelet[1951]: E1213 02:09:25.304278 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:26.275161 kubelet[1951]: W1213 02:09:26.275091 1951 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33ccb3d6_6e37_4dfe_a600_31c121bd17a7.slice/cri-containerd-45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9.scope WatchSource:0}: task 45328b399fcc53c3cfe7b8c47e60fb6a6bae838524d1d1ae592a93dae5a09fe9 not found: not found
Dec 13 02:09:26.305403 kubelet[1951]: E1213 02:09:26.305341 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:27.305453 kubelet[1951]: E1213 02:09:27.305417 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:28.306176 kubelet[1951]: E1213 02:09:28.306123 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:29.306868 kubelet[1951]: E1213 02:09:29.306821 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:30.307658 kubelet[1951]: E1213 02:09:30.307601 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:31.308734 kubelet[1951]: E1213 02:09:31.308698 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:32.309732 kubelet[1951]: E1213 02:09:32.309679 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:09:33.310679 kubelet[1951]: E1213 02:09:33.310644 1951 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"