Jan 14 13:04:58.133859 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025 Jan 14 13:04:58.133895 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 14 13:04:58.133910 kernel: BIOS-provided physical RAM map: Jan 14 13:04:58.133921 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 13:04:58.133931 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 14 13:04:58.134007 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 14 13:04:58.134020 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 14 13:04:58.134032 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 14 13:04:58.134047 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 14 13:04:58.134058 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 14 13:04:58.134069 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 14 13:04:58.134080 kernel: printk: bootconsole [earlyser0] enabled Jan 14 13:04:58.134091 kernel: NX (Execute Disable) protection: active Jan 14 13:04:58.134101 kernel: APIC: Static calls initialized Jan 14 13:04:58.134116 kernel: efi: EFI v2.7 by Microsoft Jan 14 13:04:58.134129 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Jan 14 13:04:58.134141 kernel: random: crng init done Jan 14 13:04:58.134152 kernel: 
secureboot: Secure boot disabled Jan 14 13:04:58.134164 kernel: SMBIOS 3.1.0 present. Jan 14 13:04:58.134177 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 14 13:04:58.134189 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 13:04:58.134201 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 14 13:04:58.134212 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 14 13:04:58.134224 kernel: Hyper-V: Nested features: 0x1e0101 Jan 14 13:04:58.134239 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 13:04:58.134250 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 13:04:58.134263 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:04:58.134275 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:04:58.134287 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 14 13:04:58.134299 kernel: tsc: Detected 2593.905 MHz processor Jan 14 13:04:58.134311 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 13:04:58.134323 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 13:04:58.134335 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 14 13:04:58.134350 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 13:04:58.134361 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 13:04:58.134373 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 14 13:04:58.134384 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 14 13:04:58.134396 kernel: Using GB pages for direct mapping Jan 14 13:04:58.134408 kernel: ACPI: Early table checksum verification disabled Jan 14 13:04:58.134420 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 
13:04:58.134437 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134452 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134465 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 14 13:04:58.134478 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 13:04:58.134490 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134503 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134516 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134531 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134544 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134557 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134569 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134582 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 13:04:58.134595 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 14 13:04:58.134607 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 13:04:58.134620 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 13:04:58.134633 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 13:04:58.134648 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 13:04:58.134660 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 13:04:58.134673 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 14 13:04:58.134685 kernel: ACPI: 
Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 13:04:58.134698 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 13:04:58.134710 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 13:04:58.134723 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 13:04:58.134735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 13:04:58.134748 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 13:04:58.134763 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 13:04:58.134776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 13:04:58.134788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 13:04:58.134801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 13:04:58.134813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 13:04:58.134826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 13:04:58.134838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 13:04:58.134851 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 13:04:58.134866 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 13:04:58.134878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 13:04:58.134891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 13:04:58.134904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 13:04:58.134916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 13:04:58.134928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 13:04:58.134950 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] 
Jan 14 13:04:58.134963 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 13:04:58.134975 kernel: Zone ranges: Jan 14 13:04:58.134991 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 13:04:58.135003 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 13:04:58.135015 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:04:58.135027 kernel: Movable zone start for each node Jan 14 13:04:58.135039 kernel: Early memory node ranges Jan 14 13:04:58.135050 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 13:04:58.135063 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 13:04:58.135075 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 13:04:58.135088 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:04:58.135103 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 13:04:58.135130 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 13:04:58.135143 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 13:04:58.135154 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 14 13:04:58.135165 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 13:04:58.135178 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 13:04:58.135191 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 13:04:58.135202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 13:04:58.135217 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 13:04:58.135235 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 13:04:58.135247 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 13:04:58.135259 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 13:04:58.135272 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 13:04:58.135285 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, 
max_idle_ns: 1910969940391419 ns Jan 14 13:04:58.135298 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 13:04:58.135312 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 13:04:58.135325 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 13:04:58.135338 kernel: pcpu-alloc: [0] 0 1 Jan 14 13:04:58.135354 kernel: Hyper-V: PV spinlocks enabled Jan 14 13:04:58.135367 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 13:04:58.135383 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 14 13:04:58.135396 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 13:04:58.135409 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 13:04:58.135422 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 13:04:58.135436 kernel: Fallback order for Node 0: 0 Jan 14 13:04:58.135449 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 14 13:04:58.135465 kernel: Policy zone: Normal Jan 14 13:04:58.135489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:04:58.135502 kernel: software IO TLB: area num 2. 
Jan 14 13:04:58.135519 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312164K reserved, 0K cma-reserved) Jan 14 13:04:58.135533 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 13:04:58.135546 kernel: ftrace: allocating 37890 entries in 149 pages Jan 14 13:04:58.135559 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 13:04:58.135572 kernel: Dynamic Preempt: voluntary Jan 14 13:04:58.135585 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:04:58.135600 kernel: rcu: RCU event tracing is enabled. Jan 14 13:04:58.135614 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 13:04:58.135630 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:04:58.135644 kernel: Rude variant of Tasks RCU enabled. Jan 14 13:04:58.135657 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:04:58.135670 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 13:04:58.135685 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 13:04:58.135698 kernel: Using NULL legacy PIC Jan 14 13:04:58.135714 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 13:04:58.135728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 13:04:58.135741 kernel: Console: colour dummy device 80x25 Jan 14 13:04:58.135754 kernel: printk: console [tty1] enabled Jan 14 13:04:58.135767 kernel: printk: console [ttyS0] enabled Jan 14 13:04:58.135781 kernel: printk: bootconsole [earlyser0] disabled Jan 14 13:04:58.135794 kernel: ACPI: Core revision 20230628 Jan 14 13:04:58.135807 kernel: Failed to register legacy timer interrupt Jan 14 13:04:58.135821 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 13:04:58.135837 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 13:04:58.135850 kernel: Hyper-V: Using IPI hypercalls Jan 14 13:04:58.135863 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 13:04:58.135876 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 13:04:58.135890 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 13:04:58.135904 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 13:04:58.135917 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 13:04:58.135930 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 13:04:58.135959 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Jan 14 13:04:58.135978 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 13:04:58.135998 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 13:04:58.136025 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 13:04:58.136037 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 13:04:58.136050 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 13:04:58.136063 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 13:04:58.136076 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 13:04:58.136091 kernel: RETBleed: Vulnerable Jan 14 13:04:58.136104 kernel: Speculative Store Bypass: Vulnerable Jan 14 13:04:58.136116 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:04:58.136132 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:04:58.136144 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 13:04:58.136157 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 13:04:58.136169 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 13:04:58.136181 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 13:04:58.136193 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 13:04:58.136205 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 13:04:58.136217 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 13:04:58.136230 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 13:04:58.136243 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 13:04:58.136255 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 13:04:58.136270 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 13:04:58.136283 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 13:04:58.136296 kernel: Freeing SMP alternatives memory: 32K Jan 14 13:04:58.136308 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:04:58.136320 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 13:04:58.136333 kernel: landlock: Up and running. Jan 14 13:04:58.136345 kernel: SELinux: Initializing. 
Jan 14 13:04:58.136358 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:04:58.136371 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:04:58.136383 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 13:04:58.136395 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:58.136411 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:58.136424 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:58.136438 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 13:04:58.136452 kernel: signal: max sigframe size: 3632 Jan 14 13:04:58.136465 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:04:58.136480 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:04:58.136493 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 13:04:58.136506 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:04:58.136520 kernel: smpboot: x86: Booting SMP configuration: Jan 14 13:04:58.136537 kernel: .... node #0, CPUs: #1 Jan 14 13:04:58.136551 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 13:04:58.136566 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 13:04:58.136580 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 13:04:58.136593 kernel: smpboot: Max logical packages: 1 Jan 14 13:04:58.136606 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 14 13:04:58.136619 kernel: devtmpfs: initialized Jan 14 13:04:58.136632 kernel: x86/mm: Memory block size: 128MB Jan 14 13:04:58.136645 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 13:04:58.136662 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 13:04:58.136674 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 13:04:58.136686 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 13:04:58.136702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 13:04:58.136714 kernel: audit: initializing netlink subsys (disabled) Jan 14 13:04:58.136728 kernel: audit: type=2000 audit(1736859897.028:1): state=initialized audit_enabled=0 res=1 Jan 14 13:04:58.136740 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 13:04:58.136753 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 13:04:58.136771 kernel: cpuidle: using governor menu Jan 14 13:04:58.136783 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 13:04:58.136796 kernel: dca service started, version 1.12.1 Jan 14 13:04:58.136810 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 14 13:04:58.136825 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 13:04:58.136841 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 13:04:58.136853 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 13:04:58.136865 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 13:04:58.136878 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 13:04:58.136895 kernel: ACPI: Added _OSI(Module Device) Jan 14 13:04:58.136909 kernel: ACPI: Added _OSI(Processor Device) Jan 14 13:04:58.136923 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 14 13:04:58.136949 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 13:04:58.136963 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 13:04:58.136975 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 14 13:04:58.136988 kernel: ACPI: Interpreter enabled Jan 14 13:04:58.137000 kernel: ACPI: PM: (supports S0 S5) Jan 14 13:04:58.137012 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 13:04:58.137028 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 13:04:58.137042 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 13:04:58.137056 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 13:04:58.137069 kernel: iommu: Default domain type: Translated Jan 14 13:04:58.137083 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 13:04:58.137095 kernel: efivars: Registered efivars operations Jan 14 13:04:58.137107 kernel: PCI: Using ACPI for IRQ routing Jan 14 13:04:58.137121 kernel: PCI: System does not support PCI Jan 14 13:04:58.137134 kernel: vgaarb: loaded Jan 14 13:04:58.137148 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 14 13:04:58.137167 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 13:04:58.137182 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 13:04:58.137198 kernel: pnp: PnP ACPI init Jan 14 13:04:58.137211 
kernel: pnp: PnP ACPI: found 3 devices Jan 14 13:04:58.137224 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 13:04:58.137238 kernel: NET: Registered PF_INET protocol family Jan 14 13:04:58.137251 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 13:04:58.137273 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 13:04:58.137289 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 13:04:58.137302 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 13:04:58.137314 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 13:04:58.137328 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 13:04:58.137342 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:04:58.137355 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:04:58.137369 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 13:04:58.137382 kernel: NET: Registered PF_XDP protocol family Jan 14 13:04:58.137396 kernel: PCI: CLS 0 bytes, default 64 Jan 14 13:04:58.137412 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 13:04:58.137426 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jan 14 13:04:58.137439 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 13:04:58.137453 kernel: Initialise system trusted keyrings Jan 14 13:04:58.137466 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 13:04:58.137479 kernel: Key type asymmetric registered Jan 14 13:04:58.137492 kernel: Asymmetric key parser 'x509' registered Jan 14 13:04:58.137505 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 14 13:04:58.137519 kernel: io scheduler mq-deadline 
registered Jan 14 13:04:58.137535 kernel: io scheduler kyber registered Jan 14 13:04:58.137548 kernel: io scheduler bfq registered Jan 14 13:04:58.137561 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 13:04:58.137574 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 13:04:58.137587 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 13:04:58.137600 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 13:04:58.137613 kernel: i8042: PNP: No PS/2 controller found. Jan 14 13:04:58.140916 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 13:04:58.141045 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:04:57 UTC (1736859897) Jan 14 13:04:58.141129 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 13:04:58.141141 kernel: intel_pstate: CPU model not supported Jan 14 13:04:58.141152 kernel: efifb: probing for efifb Jan 14 13:04:58.141160 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 13:04:58.141171 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 13:04:58.141180 kernel: efifb: scrolling: redraw Jan 14 13:04:58.141188 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 13:04:58.141199 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:04:58.141210 kernel: fb0: EFI VGA frame buffer device Jan 14 13:04:58.141220 kernel: pstore: Using crash dump compression: deflate Jan 14 13:04:58.141230 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 13:04:58.141240 kernel: NET: Registered PF_INET6 protocol family Jan 14 13:04:58.141250 kernel: Segment Routing with IPv6 Jan 14 13:04:58.141260 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 13:04:58.141272 kernel: NET: Registered PF_PACKET protocol family Jan 14 13:04:58.141281 kernel: Key type dns_resolver registered Jan 14 13:04:58.141292 kernel: IPI shorthand broadcast: enabled Jan 14 13:04:58.141307 kernel: 
sched_clock: Marking stable (885002700, 52539300)->(1182574800, -245032800) Jan 14 13:04:58.141316 kernel: registered taskstats version 1 Jan 14 13:04:58.141328 kernel: Loading compiled-in X.509 certificates Jan 14 13:04:58.141337 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 14 13:04:58.141347 kernel: Key type .fscrypt registered Jan 14 13:04:58.141356 kernel: Key type fscrypt-provisioning registered Jan 14 13:04:58.141366 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:04:58.141375 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:04:58.141385 kernel: ima: No architecture policies found Jan 14 13:04:58.141397 kernel: clk: Disabling unused clocks Jan 14 13:04:58.141408 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 14 13:04:58.141417 kernel: Write protecting the kernel read-only data: 38912k Jan 14 13:04:58.141425 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 14 13:04:58.141436 kernel: Run /init as init process Jan 14 13:04:58.141444 kernel: with arguments: Jan 14 13:04:58.141455 kernel: /init Jan 14 13:04:58.141464 kernel: with environment: Jan 14 13:04:58.141474 kernel: HOME=/ Jan 14 13:04:58.141487 kernel: TERM=linux Jan 14 13:04:58.141497 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 13:04:58.141510 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:04:58.141525 systemd[1]: Detected virtualization microsoft. Jan 14 13:04:58.141537 systemd[1]: Detected architecture x86-64. Jan 14 13:04:58.141549 systemd[1]: Running in initrd. Jan 14 13:04:58.141561 systemd[1]: No hostname configured, using default hostname. 
Jan 14 13:04:58.141574 systemd[1]: Hostname set to . Jan 14 13:04:58.141592 systemd[1]: Initializing machine ID from random generator. Jan 14 13:04:58.141604 systemd[1]: Queued start job for default target initrd.target. Jan 14 13:04:58.141617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:04:58.141629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:04:58.141642 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:04:58.141656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:04:58.141671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:04:58.141690 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:04:58.141705 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 14 13:04:58.141719 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 14 13:04:58.141732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:04:58.141748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:04:58.141762 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:04:58.141776 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:04:58.141793 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:04:58.141807 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:04:58.141819 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:04:58.141832 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 14 13:04:58.141842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:04:58.141854 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 14 13:04:58.141864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:04:58.141875 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:04:58.141886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:04:58.141899 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:04:58.141910 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:04:58.141920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:04:58.141928 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:04:58.141952 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:04:58.141964 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:04:58.141974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:04:58.141982 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:04:58.142017 systemd-journald[177]: Collecting audit messages is disabled. Jan 14 13:04:58.142045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:04:58.142053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:04:58.142063 systemd-journald[177]: Journal started Jan 14 13:04:58.142089 systemd-journald[177]: Runtime Journal (/run/log/journal/16d10ae7ebe2472f8904a7e389bef6fe) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:04:58.133592 systemd-modules-load[178]: Inserted module 'overlay' Jan 14 13:04:58.160486 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:04:58.160581 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 14 13:04:58.165673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:04:58.181141 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:04:58.193797 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:04:58.197955 kernel: Bridge firewalling registered Jan 14 13:04:58.198072 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 14 13:04:58.201574 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:04:58.216108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:04:58.223146 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:04:58.229806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:04:58.237757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:04:58.245531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:04:58.263091 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 13:04:58.272108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:04:58.276173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 14 13:04:58.285799 dracut-cmdline[204]: dracut-dracut-053
Jan 14 13:04:58.288840 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:04:58.314324 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:04:58.321929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:04:58.332168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:04:58.375360 systemd-resolved[255]: Positive Trust Anchors:
Jan 14 13:04:58.375727 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:04:58.375764 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:04:58.379089 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jan 14 13:04:58.380067 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:04:58.385368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:04:58.425961 kernel: SCSI subsystem initialized
Jan 14 13:04:58.436957 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:04:58.448960 kernel: iscsi: registered transport (tcp)
Jan 14 13:04:58.476355 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:04:58.476422 kernel: QLogic iSCSI HBA Driver
Jan 14 13:04:58.513094 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:04:58.522122 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:04:58.551792 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:04:58.551888 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:04:58.555843 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:04:58.595966 kernel: raid6: avx512x4 gen() 18276 MB/s
Jan 14 13:04:58.615958 kernel: raid6: avx512x2 gen() 18092 MB/s
Jan 14 13:04:58.634948 kernel: raid6: avx512x1 gen() 18168 MB/s
Jan 14 13:04:58.653950 kernel: raid6: avx2x4 gen() 18106 MB/s
Jan 14 13:04:58.672951 kernel: raid6: avx2x2 gen() 18320 MB/s
Jan 14 13:04:58.696092 kernel: raid6: avx2x1 gen() 14028 MB/s
Jan 14 13:04:58.696136 kernel: raid6: using algorithm avx2x2 gen() 18320 MB/s
Jan 14 13:04:58.717237 kernel: raid6: .... xor() 21962 MB/s, rmw enabled
Jan 14 13:04:58.717269 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 13:04:58.739962 kernel: xor: automatically using best checksumming function avx
Jan 14 13:04:58.880966 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:04:58.890383 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:04:58.901087 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:04:58.921774 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 14 13:04:58.926120 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:04:58.940443 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:04:58.953019 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 14 13:04:58.980375 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:04:58.992148 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:04:59.034077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:04:59.048248 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:04:59.081558 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:04:59.089228 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:04:59.096839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:04:59.103870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:04:59.114146 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:04:59.132073 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 13:04:59.140130 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:04:59.161756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:04:59.165024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:59.190665 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 13:04:59.190699 kernel: AES CTR mode by8 optimization enabled
Jan 14 13:04:59.190719 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 13:04:59.171461 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:59.174732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:04:59.175017 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:59.178147 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:59.206475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:59.217678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:04:59.218002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:59.237118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:59.252959 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 13:04:59.258865 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 13:04:59.258930 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 13:04:59.269152 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 13:04:59.272964 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 13:04:59.277953 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 13:04:59.292838 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 13:04:59.292910 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 13:04:59.300103 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 13:04:59.310372 kernel: PTP clock support registered
Jan 14 13:04:59.306404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:59.319170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:59.325308 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 13:04:59.333007 kernel: scsi host1: storvsc_host_t
Jan 14 13:04:59.333083 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 13:04:59.335423 kernel: scsi host0: storvsc_host_t
Jan 14 13:04:59.335454 kernel: hv_vmbus: registering driver hv_utils
Jan 14 13:04:59.341716 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 13:04:59.341778 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 13:04:59.347531 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 13:04:59.347599 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 13:04:59.353772 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 13:04:59.865328 systemd-resolved[255]: Clock change detected. Flushing caches.
Jan 14 13:04:59.893544 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 13:04:59.897678 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:04:59.897702 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 13:04:59.901591 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:59.916304 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 13:04:59.935204 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 13:04:59.935405 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 13:04:59.935575 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 13:04:59.935768 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 13:04:59.935923 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:04:59.935943 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 13:05:00.019517 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: VF slot 1 added
Jan 14 13:05:00.028625 kernel: hv_vmbus: registering driver hv_pci
Jan 14 13:05:00.032638 kernel: hv_pci 1deb1ee3-95c2-4496-a923-08061a24c3ad: PCI VMBus probing: Using version 0x10004
Jan 14 13:05:00.078979 kernel: hv_pci 1deb1ee3-95c2-4496-a923-08061a24c3ad: PCI host bridge to bus 95c2:00
Jan 14 13:05:00.079183 kernel: pci_bus 95c2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 13:05:00.079357 kernel: pci_bus 95c2:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 13:05:00.079511 kernel: pci 95c2:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 13:05:00.079730 kernel: pci 95c2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:05:00.079907 kernel: pci 95c2:00:02.0: enabling Extended Tags
Jan 14 13:05:00.080071 kernel: pci 95c2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 95c2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 13:05:00.080232 kernel: pci_bus 95c2:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 13:05:00.080400 kernel: pci 95c2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:05:00.241245 kernel: mlx5_core 95c2:00:02.0: enabling device (0000 -> 0002)
Jan 14 13:05:00.469586 kernel: mlx5_core 95c2:00:02.0: firmware version: 14.30.5000
Jan 14 13:05:00.469838 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: VF registering: eth1
Jan 14 13:05:00.470004 kernel: mlx5_core 95c2:00:02.0 eth1: joined to eth0
Jan 14 13:05:00.470200 kernel: mlx5_core 95c2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 13:05:00.476621 kernel: mlx5_core 95c2:00:02.0 enP38338s1: renamed from eth1
Jan 14 13:05:00.611422 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 13:05:00.695853 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 13:05:00.715652 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (459)
Jan 14 13:05:00.732374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:05:00.801632 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (451)
Jan 14 13:05:00.815960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 13:05:00.819660 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 13:05:00.837817 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:05:00.854177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:05:00.861614 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:05:01.871635 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:05:01.872337 disk-uuid[608]: The operation has completed successfully.
Jan 14 13:05:01.979021 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:05:01.979132 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:05:01.996764 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 13:05:02.005284 sh[694]: Success
Jan 14 13:05:02.042726 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 13:05:02.336191 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:05:02.351693 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 13:05:02.353681 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 13:05:02.376340 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 14 13:05:02.376400 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:02.380178 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 13:05:02.383630 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:05:02.386374 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 13:05:02.734737 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 13:05:02.740684 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:05:02.753784 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:05:02.760058 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:05:02.774537 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:02.774628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:02.777437 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:05:02.798196 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:05:02.810621 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:02.811087 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 13:05:02.821683 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:05:02.832799 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:05:02.865379 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:05:02.874888 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:05:02.895586 systemd-networkd[878]: lo: Link UP
Jan 14 13:05:02.895596 systemd-networkd[878]: lo: Gained carrier
Jan 14 13:05:02.897738 systemd-networkd[878]: Enumeration completed
Jan 14 13:05:02.898037 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:05:02.907788 systemd[1]: Reached target network.target - Network.
Jan 14 13:05:02.910714 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:02.910719 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:05:02.973632 kernel: mlx5_core 95c2:00:02.0 enP38338s1: Link up
Jan 14 13:05:03.007173 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: Data path switched to VF: enP38338s1
Jan 14 13:05:03.006628 systemd-networkd[878]: enP38338s1: Link UP
Jan 14 13:05:03.006807 systemd-networkd[878]: eth0: Link UP
Jan 14 13:05:03.007060 systemd-networkd[878]: eth0: Gained carrier
Jan 14 13:05:03.007075 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:03.020293 systemd-networkd[878]: enP38338s1: Gained carrier
Jan 14 13:05:03.067683 systemd-networkd[878]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 13:05:03.922273 ignition[829]: Ignition 2.20.0
Jan 14 13:05:03.922289 ignition[829]: Stage: fetch-offline
Jan 14 13:05:03.922339 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:03.922349 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:03.922457 ignition[829]: parsed url from cmdline: ""
Jan 14 13:05:03.922462 ignition[829]: no config URL provided
Jan 14 13:05:03.922469 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:05:03.922479 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:05:03.922487 ignition[829]: failed to fetch config: resource requires networking
Jan 14 13:05:03.924547 ignition[829]: Ignition finished successfully
Jan 14 13:05:03.945175 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:05:03.957807 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 13:05:03.971648 ignition[887]: Ignition 2.20.0
Jan 14 13:05:03.971659 ignition[887]: Stage: fetch
Jan 14 13:05:03.971861 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:03.971874 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:03.971972 ignition[887]: parsed url from cmdline: ""
Jan 14 13:05:03.971975 ignition[887]: no config URL provided
Jan 14 13:05:03.971991 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:05:03.972002 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:05:03.972027 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 13:05:04.063459 ignition[887]: GET result: OK
Jan 14 13:05:04.063614 ignition[887]: config has been read from IMDS userdata
Jan 14 13:05:04.063656 ignition[887]: parsing config with SHA512: c73e5219617a8d83ba8b00cf5f6572a28dab015ec1a862e2fbeaa93d1b203b0d161970f5409c36ce266edc69dfc0206919aa8f94d7ccc27f7cfb29353e9fbdd5
Jan 14 13:05:04.069938 unknown[887]: fetched base config from "system"
Jan 14 13:05:04.069950 unknown[887]: fetched base config from "system"
Jan 14 13:05:04.070378 ignition[887]: fetch: fetch complete
Jan 14 13:05:04.069958 unknown[887]: fetched user config from "azure"
Jan 14 13:05:04.070385 ignition[887]: fetch: fetch passed
Jan 14 13:05:04.072246 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 13:05:04.070428 ignition[887]: Ignition finished successfully
Jan 14 13:05:04.083803 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:05:04.099054 ignition[893]: Ignition 2.20.0
Jan 14 13:05:04.099065 ignition[893]: Stage: kargs
Jan 14 13:05:04.099278 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:04.101723 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:05:04.099292 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:04.100179 ignition[893]: kargs: kargs passed
Jan 14 13:05:04.100224 ignition[893]: Ignition finished successfully
Jan 14 13:05:04.116876 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:05:04.135518 ignition[899]: Ignition 2.20.0
Jan 14 13:05:04.135530 ignition[899]: Stage: disks
Jan 14 13:05:04.137473 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:05:04.135778 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:04.141301 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:05:04.135792 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:04.145569 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:05:04.136626 ignition[899]: disks: disks passed
Jan 14 13:05:04.151668 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:05:04.136670 ignition[899]: Ignition finished successfully
Jan 14 13:05:04.154372 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:05:04.157284 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:05:04.191810 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:05:04.253191 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 13:05:04.260183 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:05:04.272223 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:05:04.363624 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 14 13:05:04.363733 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:05:04.364489 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:05:04.380707 systemd-networkd[878]: eth0: Gained IPv6LL
Jan 14 13:05:04.381115 systemd-networkd[878]: enP38338s1: Gained IPv6LL
Jan 14 13:05:04.408709 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:05:04.414043 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:05:04.423723 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (918)
Jan 14 13:05:04.427843 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:05:04.449777 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:04.449822 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:04.449840 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:05:04.449857 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:05:04.432092 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:05:04.432133 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:05:04.452052 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:05:04.462097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:05:04.475808 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:05:05.157072 coreos-metadata[920]: Jan 14 13:05:05.157 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:05:05.163993 coreos-metadata[920]: Jan 14 13:05:05.163 INFO Fetch successful
Jan 14 13:05:05.163993 coreos-metadata[920]: Jan 14 13:05:05.163 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:05:05.176782 coreos-metadata[920]: Jan 14 13:05:05.176 INFO Fetch successful
Jan 14 13:05:05.183705 coreos-metadata[920]: Jan 14 13:05:05.181 INFO wrote hostname ci-4186.1.0-a-847249f34f to /sysroot/etc/hostname
Jan 14 13:05:05.188057 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:05:05.191412 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:05:05.217491 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:05:05.222733 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:05:05.241733 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:05:06.083773 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:05:06.094704 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:05:06.105824 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:05:06.117023 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:06.109353 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:05:06.144941 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:05:06.153155 ignition[1037]: INFO : Ignition 2.20.0
Jan 14 13:05:06.153155 ignition[1037]: INFO : Stage: mount
Jan 14 13:05:06.155529 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:06.155529 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:06.155529 ignition[1037]: INFO : mount: mount passed
Jan 14 13:05:06.155529 ignition[1037]: INFO : Ignition finished successfully
Jan 14 13:05:06.155238 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:05:06.177735 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:05:06.187522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:05:06.205620 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049)
Jan 14 13:05:06.210615 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:06.210660 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:06.222415 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:05:06.228654 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:05:06.230188 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:05:06.252030 ignition[1066]: INFO : Ignition 2.20.0
Jan 14 13:05:06.254306 ignition[1066]: INFO : Stage: files
Jan 14 13:05:06.254306 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:06.254306 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:06.254306 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:05:06.267239 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:05:06.267239 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:05:06.337141 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:05:06.341403 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:05:06.341403 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:05:06.337661 unknown[1066]: wrote ssh authorized keys file for user: core
Jan 14 13:05:06.368816 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 13:05:06.374737 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 14 13:05:06.431487 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 13:05:06.783817 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 13:05:06.783817 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 13:05:06.794787 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 14 13:05:07.298197 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 14 13:05:07.462055 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:05:07.503381 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:05:07.503381 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 14 13:05:08.125051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 14 13:05:09.441615 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 14 13:05:09.441615 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 14 13:05:09.463264 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:05:09.474857 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:05:09.474857 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 14 13:05:09.474857 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 13:05:09.474857 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 13:05:09.474857 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:05:09.474857 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:05:09.474857 ignition[1066]: INFO : files: files passed
Jan 14 13:05:09.474857 ignition[1066]: INFO : Ignition finished successfully
Jan 14 13:05:09.465307 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:05:09.491792 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:05:09.502759 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:05:09.507935 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:05:09.508189 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:05:09.524784 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:05:09.524784 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:05:09.527019 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:05:09.543085 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:05:09.550994 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:05:09.556761 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:05:09.587869 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:05:09.587983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:05:09.595001 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:05:09.604613 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:05:09.607487 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:05:09.615808 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:05:09.630293 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:05:09.640755 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:05:09.651014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:05:09.654552 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:05:09.663950 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:05:09.669232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:05:09.669382 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:05:09.679050 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:05:09.682160 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:05:09.690062 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:05:09.698702 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:05:09.698915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:05:09.699338 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:05:09.699761 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:05:09.700198 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:05:09.700611 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:05:09.701107 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:05:09.701514 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 14 13:05:09.701678 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:05:09.702485 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:05:09.702985 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:05:09.703379 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:05:09.727467 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:05:09.734700 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:05:09.740576 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:05:09.776190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:05:09.776363 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:05:09.786461 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:05:09.786681 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:05:09.792091 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:05:09.792230 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:05:09.814950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:05:09.821656 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:05:09.824743 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:05:09.824978 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:05:09.829125 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 14 13:05:09.847261 ignition[1118]: INFO : Ignition 2.20.0 Jan 14 13:05:09.847261 ignition[1118]: INFO : Stage: umount Jan 14 13:05:09.847261 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:09.847261 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:09.847261 ignition[1118]: INFO : umount: umount passed Jan 14 13:05:09.847261 ignition[1118]: INFO : Ignition finished successfully Jan 14 13:05:09.829358 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:05:09.833515 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:05:09.833632 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:05:09.844082 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:05:09.844197 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:05:09.850424 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:05:09.850522 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:05:09.853076 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:05:09.853116 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:05:09.856141 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:05:09.856177 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:05:09.859092 systemd[1]: Stopped target network.target - Network. Jan 14 13:05:09.861688 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:05:09.861756 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:05:09.867072 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:05:09.874415 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 14 13:05:09.877842 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:05:09.883139 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:05:09.885742 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:05:09.888555 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:05:09.888621 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:05:09.899474 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:05:09.899525 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:05:09.905256 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:05:09.905324 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:05:09.908224 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:05:09.908275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:05:09.923446 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:05:09.926215 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:05:09.927489 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:05:09.934649 systemd-networkd[878]: eth0: DHCPv6 lease lost Jan 14 13:05:10.069645 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: Data path switched from VF: enP38338s1 Jan 14 13:05:09.942537 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:05:09.942893 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:05:09.948219 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:05:09.948378 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:05:09.955366 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:05:09.955432 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 14 13:05:09.974775 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:05:09.980440 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:05:09.980509 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:05:09.984492 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:05:09.984542 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:05:09.989930 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:05:09.989976 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:05:09.990799 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:05:09.990836 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:05:09.991336 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:05:10.019940 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:05:10.020089 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:05:10.026362 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:05:10.026452 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:05:10.031889 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:05:10.031933 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:05:10.032036 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:05:10.032080 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:05:10.033472 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:05:10.033508 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:05:10.035445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 14 13:05:10.035482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:05:10.053922 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:05:10.054768 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:05:10.054833 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:05:10.055292 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 13:05:10.055332 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:05:10.056181 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:05:10.056215 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:05:10.057123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:05:10.057160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:05:10.077933 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:05:10.078032 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:05:10.121251 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:05:10.121379 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:05:10.635628 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:05:10.635798 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:05:10.639326 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:05:10.644265 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:05:10.644333 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:05:10.659812 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 14 13:05:10.679127 systemd[1]: Switching root. Jan 14 13:05:10.726953 systemd-journald[177]: Journal stopped Jan 14 13:04:58.133859 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025 Jan 14 13:04:58.133895 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 14 13:04:58.133910 kernel: BIOS-provided physical RAM map: Jan 14 13:04:58.133921 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 13:04:58.133931 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 14 13:04:58.134007 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 14 13:04:58.134020 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 14 13:04:58.134032 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 14 13:04:58.134047 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 14 13:04:58.134058 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 14 13:04:58.134069 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 14 13:04:58.134080 kernel: printk: bootconsole [earlyser0] enabled Jan 14 13:04:58.134091 kernel: NX (Execute Disable) protection: active Jan 14 13:04:58.134101 kernel: APIC: Static calls initialized Jan 14 13:04:58.134116 kernel: efi: EFI v2.7 by Microsoft Jan 14 13:04:58.134129 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 
MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Jan 14 13:04:58.134141 kernel: random: crng init done Jan 14 13:04:58.134152 kernel: secureboot: Secure boot disabled Jan 14 13:04:58.134164 kernel: SMBIOS 3.1.0 present. Jan 14 13:04:58.134177 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 14 13:04:58.134189 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 13:04:58.134201 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 14 13:04:58.134212 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 14 13:04:58.134224 kernel: Hyper-V: Nested features: 0x1e0101 Jan 14 13:04:58.134239 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 13:04:58.134250 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 13:04:58.134263 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:04:58.134275 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 13:04:58.134287 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 14 13:04:58.134299 kernel: tsc: Detected 2593.905 MHz processor Jan 14 13:04:58.134311 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 13:04:58.134323 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 13:04:58.134335 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 14 13:04:58.134350 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 13:04:58.134361 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 13:04:58.134373 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 14 13:04:58.134384 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 14 13:04:58.134396 kernel: Using GB pages for direct mapping Jan 14 13:04:58.134408 kernel: ACPI: Early table checksum 
verification disabled Jan 14 13:04:58.134420 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 13:04:58.134437 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134452 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134465 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 14 13:04:58.134478 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 13:04:58.134490 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134503 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134516 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134531 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134544 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134557 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134569 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 13:04:58.134582 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 13:04:58.134595 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 14 13:04:58.134607 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 13:04:58.134620 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 13:04:58.134633 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 13:04:58.134648 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 13:04:58.134660 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 13:04:58.134673 
kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 14 13:04:58.134685 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 13:04:58.134698 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 13:04:58.134710 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 13:04:58.134723 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 13:04:58.134735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 13:04:58.134748 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 13:04:58.134763 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 13:04:58.134776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 13:04:58.134788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 13:04:58.134801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 13:04:58.134813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 13:04:58.134826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 13:04:58.134838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 13:04:58.134851 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 13:04:58.134866 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 13:04:58.134878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 13:04:58.134891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 13:04:58.134904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 13:04:58.134916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 13:04:58.134928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 13:04:58.134950 
kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 14 13:04:58.134963 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 13:04:58.134975 kernel: Zone ranges: Jan 14 13:04:58.134991 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 13:04:58.135003 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 13:04:58.135015 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:04:58.135027 kernel: Movable zone start for each node Jan 14 13:04:58.135039 kernel: Early memory node ranges Jan 14 13:04:58.135050 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 13:04:58.135063 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 13:04:58.135075 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 13:04:58.135088 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 13:04:58.135103 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 13:04:58.135130 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 13:04:58.135143 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 13:04:58.135154 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 14 13:04:58.135165 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 13:04:58.135178 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 13:04:58.135191 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 13:04:58.135202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 13:04:58.135217 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 13:04:58.135235 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 13:04:58.135247 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 13:04:58.135259 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 13:04:58.135272 kernel: Booting paravirtualized kernel on 
Hyper-V Jan 14 13:04:58.135285 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 13:04:58.135298 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 13:04:58.135312 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 13:04:58.135325 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 13:04:58.135338 kernel: pcpu-alloc: [0] 0 1 Jan 14 13:04:58.135354 kernel: Hyper-V: PV spinlocks enabled Jan 14 13:04:58.135367 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 13:04:58.135383 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 14 13:04:58.135396 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 13:04:58.135409 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 13:04:58.135422 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 13:04:58.135436 kernel: Fallback order for Node 0: 0 Jan 14 13:04:58.135449 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 14 13:04:58.135465 kernel: Policy zone: Normal Jan 14 13:04:58.135489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:04:58.135502 kernel: software IO TLB: area num 2. 
Jan 14 13:04:58.135519 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312164K reserved, 0K cma-reserved) Jan 14 13:04:58.135533 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 13:04:58.135546 kernel: ftrace: allocating 37890 entries in 149 pages Jan 14 13:04:58.135559 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 13:04:58.135572 kernel: Dynamic Preempt: voluntary Jan 14 13:04:58.135585 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:04:58.135600 kernel: rcu: RCU event tracing is enabled. Jan 14 13:04:58.135614 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 13:04:58.135630 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:04:58.135644 kernel: Rude variant of Tasks RCU enabled. Jan 14 13:04:58.135657 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:04:58.135670 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 13:04:58.135685 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 13:04:58.135698 kernel: Using NULL legacy PIC Jan 14 13:04:58.135714 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 13:04:58.135728 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 13:04:58.135741 kernel: Console: colour dummy device 80x25 Jan 14 13:04:58.135754 kernel: printk: console [tty1] enabled Jan 14 13:04:58.135767 kernel: printk: console [ttyS0] enabled Jan 14 13:04:58.135781 kernel: printk: bootconsole [earlyser0] disabled Jan 14 13:04:58.135794 kernel: ACPI: Core revision 20230628 Jan 14 13:04:58.135807 kernel: Failed to register legacy timer interrupt Jan 14 13:04:58.135821 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 13:04:58.135837 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 13:04:58.135850 kernel: Hyper-V: Using IPI hypercalls Jan 14 13:04:58.135863 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 13:04:58.135876 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 13:04:58.135890 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 13:04:58.135904 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 13:04:58.135917 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 13:04:58.135930 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 13:04:58.135959 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Jan 14 13:04:58.135978 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 13:04:58.135998 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 13:04:58.136025 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 13:04:58.136037 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 13:04:58.136050 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 13:04:58.136063 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 13:04:58.136076 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 13:04:58.136091 kernel: RETBleed: Vulnerable Jan 14 13:04:58.136104 kernel: Speculative Store Bypass: Vulnerable Jan 14 13:04:58.136116 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:04:58.136132 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 13:04:58.136144 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 13:04:58.136157 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 13:04:58.136169 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 13:04:58.136181 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 13:04:58.136193 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 13:04:58.136205 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 13:04:58.136217 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 13:04:58.136230 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 13:04:58.136243 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 13:04:58.136255 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 13:04:58.136270 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 13:04:58.136283 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 13:04:58.136296 kernel: Freeing SMP alternatives memory: 32K Jan 14 13:04:58.136308 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:04:58.136320 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 13:04:58.136333 kernel: landlock: Up and running. Jan 14 13:04:58.136345 kernel: SELinux: Initializing. 
Jan 14 13:04:58.136358 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:04:58.136371 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:04:58.136383 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 13:04:58.136395 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:58.136411 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:58.136424 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 13:04:58.136438 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 13:04:58.136452 kernel: signal: max sigframe size: 3632 Jan 14 13:04:58.136465 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:04:58.136480 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:04:58.136493 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 13:04:58.136506 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:04:58.136520 kernel: smpboot: x86: Booting SMP configuration: Jan 14 13:04:58.136537 kernel: .... node #0, CPUs: #1 Jan 14 13:04:58.136551 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 13:04:58.136566 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 13:04:58.136580 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:04:58.136593 kernel: smpboot: Max logical packages: 1
Jan 14 13:04:58.136606 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 14 13:04:58.136619 kernel: devtmpfs: initialized
Jan 14 13:04:58.136632 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:04:58.136645 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:04:58.136662 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:04:58.136674 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:04:58.136686 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:04:58.136702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:04:58.136714 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:04:58.136728 kernel: audit: type=2000 audit(1736859897.028:1): state=initialized audit_enabled=0 res=1
Jan 14 13:04:58.136740 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:04:58.136753 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:04:58.136771 kernel: cpuidle: using governor menu
Jan 14 13:04:58.136783 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:04:58.136796 kernel: dca service started, version 1.12.1
Jan 14 13:04:58.136810 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:04:58.136825 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:04:58.136841 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:04:58.136853 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:04:58.136865 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:04:58.136878 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:04:58.136895 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:04:58.136909 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:04:58.136923 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:04:58.136949 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:04:58.136963 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:04:58.136975 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:04:58.136988 kernel: ACPI: Interpreter enabled
Jan 14 13:04:58.137000 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:04:58.137012 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:04:58.137028 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:04:58.137042 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:04:58.137056 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:04:58.137069 kernel: iommu: Default domain type: Translated
Jan 14 13:04:58.137083 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:04:58.137095 kernel: efivars: Registered efivars operations
Jan 14 13:04:58.137107 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:04:58.137121 kernel: PCI: System does not support PCI
Jan 14 13:04:58.137134 kernel: vgaarb: loaded
Jan 14 13:04:58.137148 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:04:58.137167 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:04:58.137182 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:04:58.137198 kernel: pnp: PnP ACPI init
Jan 14 13:04:58.137211 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:04:58.137224 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:04:58.137238 kernel: NET: Registered PF_INET protocol family
Jan 14 13:04:58.137251 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:04:58.137273 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:04:58.137289 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:04:58.137302 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:04:58.137314 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:04:58.137328 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:04:58.137342 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:04:58.137355 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:04:58.137369 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:04:58.137382 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:04:58.137396 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:04:58.137412 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:04:58.137426 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 13:04:58.137439 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:04:58.137453 kernel: Initialise system trusted keyrings
Jan 14 13:04:58.137466 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:04:58.137479 kernel: Key type asymmetric registered
Jan 14 13:04:58.137492 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:04:58.137505 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:04:58.137519 kernel: io scheduler mq-deadline registered
Jan 14 13:04:58.137535 kernel: io scheduler kyber registered
Jan 14 13:04:58.137548 kernel: io scheduler bfq registered
Jan 14 13:04:58.137561 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:04:58.137574 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:04:58.137587 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:04:58.137600 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:04:58.137613 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:04:58.140916 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:04:58.141045 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:04:57 UTC (1736859897)
Jan 14 13:04:58.141129 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:04:58.141141 kernel: intel_pstate: CPU model not supported
Jan 14 13:04:58.141152 kernel: efifb: probing for efifb
Jan 14 13:04:58.141160 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:04:58.141171 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:04:58.141180 kernel: efifb: scrolling: redraw
Jan 14 13:04:58.141188 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:04:58.141199 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:04:58.141210 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:04:58.141220 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:04:58.141230 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:04:58.141240 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:04:58.141250 kernel: Segment Routing with IPv6
Jan 14 13:04:58.141260 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:04:58.141272 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:04:58.141281 kernel: Key type dns_resolver registered
Jan 14 13:04:58.141292 kernel: IPI shorthand broadcast: enabled
Jan 14 13:04:58.141307 kernel: sched_clock: Marking stable (885002700, 52539300)->(1182574800, -245032800)
Jan 14 13:04:58.141316 kernel: registered taskstats version 1
Jan 14 13:04:58.141328 kernel: Loading compiled-in X.509 certificates
Jan 14 13:04:58.141337 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 14 13:04:58.141347 kernel: Key type .fscrypt registered
Jan 14 13:04:58.141356 kernel: Key type fscrypt-provisioning registered
Jan 14 13:04:58.141366 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:04:58.141375 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:04:58.141385 kernel: ima: No architecture policies found
Jan 14 13:04:58.141397 kernel: clk: Disabling unused clocks
Jan 14 13:04:58.141408 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 14 13:04:58.141417 kernel: Write protecting the kernel read-only data: 38912k
Jan 14 13:04:58.141425 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 14 13:04:58.141436 kernel: Run /init as init process
Jan 14 13:04:58.141444 kernel: with arguments:
Jan 14 13:04:58.141455 kernel: /init
Jan 14 13:04:58.141464 kernel: with environment:
Jan 14 13:04:58.141474 kernel: HOME=/
Jan 14 13:04:58.141487 kernel: TERM=linux
Jan 14 13:04:58.141497 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:04:58.141510 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:04:58.141525 systemd[1]: Detected virtualization microsoft.
Jan 14 13:04:58.141537 systemd[1]: Detected architecture x86-64.
Jan 14 13:04:58.141549 systemd[1]: Running in initrd.
Jan 14 13:04:58.141561 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:04:58.141574 systemd[1]: Hostname set to .
Jan 14 13:04:58.141592 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:04:58.141604 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:04:58.141617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:04:58.141629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:04:58.141642 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:04:58.141656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:04:58.141671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:04:58.141690 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:04:58.141705 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:04:58.141719 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:04:58.141732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:04:58.141748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:04:58.141762 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:04:58.141776 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:04:58.141793 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:04:58.141807 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:04:58.141819 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:04:58.141832 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:04:58.141842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:04:58.141854 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:04:58.141864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:04:58.141875 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:04:58.141886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:04:58.141899 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:04:58.141910 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:04:58.141920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:04:58.141928 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:04:58.141952 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:04:58.141964 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:04:58.141974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:04:58.141982 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:58.142017 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:04:58.142045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:04:58.142053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:04:58.142063 systemd-journald[177]: Journal started
Jan 14 13:04:58.142089 systemd-journald[177]: Runtime Journal (/run/log/journal/16d10ae7ebe2472f8904a7e389bef6fe) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:04:58.133592 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:04:58.160486 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:04:58.160581 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:04:58.165673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:58.181141 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:58.193797 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:04:58.197955 kernel: Bridge firewalling registered
Jan 14 13:04:58.198072 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:04:58.201574 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:04:58.216108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:04:58.223146 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:04:58.229806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:58.237757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:04:58.245531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:04:58.263091 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:04:58.272108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:04:58.276173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:04:58.285799 dracut-cmdline[204]: dracut-dracut-053
Jan 14 13:04:58.288840 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 14 13:04:58.314324 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:04:58.321929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:04:58.332168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:04:58.375360 systemd-resolved[255]: Positive Trust Anchors:
Jan 14 13:04:58.375727 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:04:58.375764 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:04:58.379089 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jan 14 13:04:58.380067 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:04:58.385368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:04:58.425961 kernel: SCSI subsystem initialized
Jan 14 13:04:58.436957 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:04:58.448960 kernel: iscsi: registered transport (tcp)
Jan 14 13:04:58.476355 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:04:58.476422 kernel: QLogic iSCSI HBA Driver
Jan 14 13:04:58.513094 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:04:58.522122 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:04:58.551792 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:04:58.551888 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:04:58.555843 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:04:58.595966 kernel: raid6: avx512x4 gen() 18276 MB/s
Jan 14 13:04:58.615958 kernel: raid6: avx512x2 gen() 18092 MB/s
Jan 14 13:04:58.634948 kernel: raid6: avx512x1 gen() 18168 MB/s
Jan 14 13:04:58.653950 kernel: raid6: avx2x4 gen() 18106 MB/s
Jan 14 13:04:58.672951 kernel: raid6: avx2x2 gen() 18320 MB/s
Jan 14 13:04:58.696092 kernel: raid6: avx2x1 gen() 14028 MB/s
Jan 14 13:04:58.696136 kernel: raid6: using algorithm avx2x2 gen() 18320 MB/s
Jan 14 13:04:58.717237 kernel: raid6: .... xor() 21962 MB/s, rmw enabled
Jan 14 13:04:58.717269 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 13:04:58.739962 kernel: xor: automatically using best checksumming function avx
Jan 14 13:04:58.880966 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:04:58.890383 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:04:58.901087 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:04:58.921774 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 14 13:04:58.926120 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:04:58.940443 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:04:58.953019 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 14 13:04:58.980375 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:04:58.992148 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 13:04:59.034077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:04:59.048248 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 13:04:59.081558 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:04:59.089228 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:04:59.096839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:04:59.103870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:04:59.114146 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:04:59.132073 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 13:04:59.140130 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:04:59.161756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:04:59.165024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:59.190665 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 13:04:59.190699 kernel: AES CTR mode by8 optimization enabled
Jan 14 13:04:59.190719 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 13:04:59.171461 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:59.174732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:04:59.175017 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:59.178147 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:59.206475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:59.217678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:04:59.218002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:59.237118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:04:59.252959 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 13:04:59.258865 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 13:04:59.258930 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 13:04:59.269152 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 13:04:59.272964 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 13:04:59.277953 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 13:04:59.292838 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 13:04:59.292910 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 13:04:59.300103 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 13:04:59.310372 kernel: PTP clock support registered
Jan 14 13:04:59.306404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:04:59.319170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:04:59.325308 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 13:04:59.333007 kernel: scsi host1: storvsc_host_t
Jan 14 13:04:59.333083 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 13:04:59.335423 kernel: scsi host0: storvsc_host_t
Jan 14 13:04:59.335454 kernel: hv_vmbus: registering driver hv_utils
Jan 14 13:04:59.341716 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 13:04:59.341778 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 13:04:59.347531 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 13:04:59.347599 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 13:04:59.353772 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 13:04:59.865328 systemd-resolved[255]: Clock change detected. Flushing caches.
Jan 14 13:04:59.893544 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 13:04:59.897678 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 13:04:59.897702 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 13:04:59.901591 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:04:59.916304 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 13:04:59.935204 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 13:04:59.935405 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 13:04:59.935575 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 13:04:59.935768 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 13:04:59.935923 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:04:59.935943 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 13:05:00.019517 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: VF slot 1 added
Jan 14 13:05:00.028625 kernel: hv_vmbus: registering driver hv_pci
Jan 14 13:05:00.032638 kernel: hv_pci 1deb1ee3-95c2-4496-a923-08061a24c3ad: PCI VMBus probing: Using version 0x10004
Jan 14 13:05:00.078979 kernel: hv_pci 1deb1ee3-95c2-4496-a923-08061a24c3ad: PCI host bridge to bus 95c2:00
Jan 14 13:05:00.079183 kernel: pci_bus 95c2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 13:05:00.079357 kernel: pci_bus 95c2:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 13:05:00.079511 kernel: pci 95c2:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 13:05:00.079730 kernel: pci 95c2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:05:00.079907 kernel: pci 95c2:00:02.0: enabling Extended Tags
Jan 14 13:05:00.080071 kernel: pci 95c2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 95c2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 13:05:00.080232 kernel: pci_bus 95c2:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 13:05:00.080400 kernel: pci 95c2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 13:05:00.241245 kernel: mlx5_core 95c2:00:02.0: enabling device (0000 -> 0002)
Jan 14 13:05:00.469586 kernel: mlx5_core 95c2:00:02.0: firmware version: 14.30.5000
Jan 14 13:05:00.469838 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: VF registering: eth1
Jan 14 13:05:00.470004 kernel: mlx5_core 95c2:00:02.0 eth1: joined to eth0
Jan 14 13:05:00.470200 kernel: mlx5_core 95c2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 13:05:00.476621 kernel: mlx5_core 95c2:00:02.0 enP38338s1: renamed from eth1
Jan 14 13:05:00.611422 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 13:05:00.695853 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 13:05:00.715652 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (459)
Jan 14 13:05:00.732374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:05:00.801632 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (451)
Jan 14 13:05:00.815960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 13:05:00.819660 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 13:05:00.837817 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 13:05:00.854177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:05:00.861614 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:05:01.871635 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 13:05:01.872337 disk-uuid[608]: The operation has completed successfully.
Jan 14 13:05:01.979021 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:05:01.979132 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:05:01.996764 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 13:05:02.005284 sh[694]: Success
Jan 14 13:05:02.042726 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 13:05:02.336191 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:05:02.351693 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 13:05:02.353681 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 13:05:02.376340 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 14 13:05:02.376400 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:02.380178 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 13:05:02.383630 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 13:05:02.386374 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 13:05:02.734737 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 13:05:02.740684 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 13:05:02.753784 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:05:02.760058 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 13:05:02.774537 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:02.774628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:02.777437 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:05:02.798196 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:05:02.810621 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:02.811087 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 13:05:02.821683 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:05:02.832799 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:05:02.865379 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:05:02.874888 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:05:02.895586 systemd-networkd[878]: lo: Link UP
Jan 14 13:05:02.895596 systemd-networkd[878]: lo: Gained carrier
Jan 14 13:05:02.897738 systemd-networkd[878]: Enumeration completed
Jan 14 13:05:02.898037 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:05:02.907788 systemd[1]: Reached target network.target - Network.
Jan 14 13:05:02.910714 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:02.910719 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:05:02.973632 kernel: mlx5_core 95c2:00:02.0 enP38338s1: Link up
Jan 14 13:05:03.007173 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: Data path switched to VF: enP38338s1
Jan 14 13:05:03.006628 systemd-networkd[878]: enP38338s1: Link UP
Jan 14 13:05:03.006807 systemd-networkd[878]: eth0: Link UP
Jan 14 13:05:03.007060 systemd-networkd[878]: eth0: Gained carrier
Jan 14 13:05:03.007075 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:03.020293 systemd-networkd[878]: enP38338s1: Gained carrier
Jan 14 13:05:03.067683 systemd-networkd[878]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 13:05:03.922273 ignition[829]: Ignition 2.20.0
Jan 14 13:05:03.922289 ignition[829]: Stage: fetch-offline
Jan 14 13:05:03.922339 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:03.922349 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:03.922457 ignition[829]: parsed url from cmdline: ""
Jan 14 13:05:03.922462 ignition[829]: no config URL provided
Jan 14 13:05:03.922469 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:05:03.922479 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:05:03.922487 ignition[829]: failed to fetch config: resource requires networking
Jan 14 13:05:03.924547 ignition[829]: Ignition finished successfully
Jan 14 13:05:03.945175 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:05:03.957807 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 13:05:03.971648 ignition[887]: Ignition 2.20.0
Jan 14 13:05:03.971659 ignition[887]: Stage: fetch
Jan 14 13:05:03.971861 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:03.971874 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:03.971972 ignition[887]: parsed url from cmdline: ""
Jan 14 13:05:03.971975 ignition[887]: no config URL provided
Jan 14 13:05:03.971991 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:05:03.972002 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:05:03.972027 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 13:05:04.063459 ignition[887]: GET result: OK
Jan 14 13:05:04.063614 ignition[887]: config has been read from IMDS userdata
Jan 14 13:05:04.063656 ignition[887]: parsing config with SHA512: c73e5219617a8d83ba8b00cf5f6572a28dab015ec1a862e2fbeaa93d1b203b0d161970f5409c36ce266edc69dfc0206919aa8f94d7ccc27f7cfb29353e9fbdd5
Jan 14 13:05:04.069938 unknown[887]: fetched base config from "system"
Jan 14 13:05:04.069950 unknown[887]: fetched base config from "system"
Jan 14 13:05:04.070378 ignition[887]: fetch: fetch complete
Jan 14 13:05:04.069958 unknown[887]: fetched user config from "azure"
Jan 14 13:05:04.070385 ignition[887]: fetch: fetch passed
Jan 14 13:05:04.072246 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 13:05:04.070428 ignition[887]: Ignition finished successfully
Jan 14 13:05:04.083803 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:05:04.099054 ignition[893]: Ignition 2.20.0
Jan 14 13:05:04.099065 ignition[893]: Stage: kargs
Jan 14 13:05:04.099278 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:04.101723 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:05:04.099292 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:04.100179 ignition[893]: kargs: kargs passed
Jan 14 13:05:04.100224 ignition[893]: Ignition finished successfully
Jan 14 13:05:04.116876 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:05:04.135518 ignition[899]: Ignition 2.20.0
Jan 14 13:05:04.135530 ignition[899]: Stage: disks
Jan 14 13:05:04.137473 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:05:04.135778 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:05:04.141301 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:05:04.135792 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:05:04.145569 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:05:04.136626 ignition[899]: disks: disks passed
Jan 14 13:05:04.151668 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:05:04.136670 ignition[899]: Ignition finished successfully
Jan 14 13:05:04.154372 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:05:04.157284 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:05:04.191810 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:05:04.253191 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 13:05:04.260183 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:05:04.272223 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:05:04.363624 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 14 13:05:04.363733 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:05:04.364489 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:05:04.380707 systemd-networkd[878]: eth0: Gained IPv6LL
Jan 14 13:05:04.381115 systemd-networkd[878]: enP38338s1: Gained IPv6LL
Jan 14 13:05:04.408709 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:05:04.414043 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:05:04.423723 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (918)
Jan 14 13:05:04.427843 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 13:05:04.449777 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:04.449822 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:05:04.449840 kernel: BTRFS info (device sda6): using free space tree
Jan 14 13:05:04.449857 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 13:05:04.432092 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:05:04.432133 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:05:04.452052 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:05:04.462097 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:05:04.475808 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:05:05.157072 coreos-metadata[920]: Jan 14 13:05:05.157 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:05:05.163993 coreos-metadata[920]: Jan 14 13:05:05.163 INFO Fetch successful
Jan 14 13:05:05.163993 coreos-metadata[920]: Jan 14 13:05:05.163 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:05:05.176782 coreos-metadata[920]: Jan 14 13:05:05.176 INFO Fetch successful
Jan 14 13:05:05.183705 coreos-metadata[920]: Jan 14 13:05:05.181 INFO wrote hostname ci-4186.1.0-a-847249f34f to /sysroot/etc/hostname
Jan 14 13:05:05.188057 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 13:05:05.191412 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:05:05.217491 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Jan 14 13:05:05.222733 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 13:05:05.241733 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 13:05:06.083773 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:05:06.094704 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:05:06.105824 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:05:06.117023 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 14 13:05:06.109353 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:05:06.144941 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:05:06.153155 ignition[1037]: INFO : Ignition 2.20.0 Jan 14 13:05:06.153155 ignition[1037]: INFO : Stage: mount Jan 14 13:05:06.155529 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:06.155529 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:06.155529 ignition[1037]: INFO : mount: mount passed Jan 14 13:05:06.155529 ignition[1037]: INFO : Ignition finished successfully Jan 14 13:05:06.155238 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:05:06.177735 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:05:06.187522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:05:06.205620 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049) Jan 14 13:05:06.210615 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 14 13:05:06.210660 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:05:06.222415 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:05:06.228654 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:05:06.230188 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:05:06.252030 ignition[1066]: INFO : Ignition 2.20.0 Jan 14 13:05:06.254306 ignition[1066]: INFO : Stage: files Jan 14 13:05:06.254306 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:06.254306 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:06.254306 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:05:06.267239 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:05:06.267239 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:05:06.337141 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:05:06.341403 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:05:06.341403 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:05:06.337661 unknown[1066]: wrote ssh authorized keys file for user: core Jan 14 13:05:06.368816 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:05:06.374737 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 14 13:05:06.431487 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 13:05:06.783817 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:05:06.783817 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:05:06.794787 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 14 13:05:07.298197 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 14 13:05:07.462055 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:05:07.468154 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:05:07.503381 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:05:07.503381 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:05:07.513711 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 14 13:05:08.125051 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 14 13:05:09.441615 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 13:05:09.441615 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 14 13:05:09.463264 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:05:09.474857 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:05:09.474857 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 14 13:05:09.474857 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 14 13:05:09.474857 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 13:05:09.474857 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file 
"/sysroot/etc/.ignition-result.json" Jan 14 13:05:09.474857 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:05:09.474857 ignition[1066]: INFO : files: files passed Jan 14 13:05:09.474857 ignition[1066]: INFO : Ignition finished successfully Jan 14 13:05:09.465307 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:05:09.491792 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:05:09.502759 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:05:09.507935 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:05:09.508189 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 13:05:09.524784 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:05:09.524784 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:05:09.527019 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:05:09.543085 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:05:09.550994 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:05:09.556761 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:05:09.587869 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:05:09.587983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:05:09.595001 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:05:09.604613 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 14 13:05:09.607487 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:05:09.615808 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:05:09.630293 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:05:09.640755 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:05:09.651014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:05:09.654552 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:05:09.663950 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:05:09.669232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:05:09.669382 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:05:09.679050 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:05:09.682160 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:05:09.690062 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:05:09.698702 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:05:09.698915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:05:09.699338 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:05:09.699761 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:05:09.700198 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:05:09.700611 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:05:09.701107 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:05:09.701514 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 14 13:05:09.701678 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:05:09.702485 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:05:09.702985 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:05:09.703379 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:05:09.727467 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:05:09.734700 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:05:09.740576 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:05:09.776190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:05:09.776363 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:05:09.786461 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:05:09.786681 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:05:09.792091 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 13:05:09.792230 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:05:09.814950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:05:09.821656 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:05:09.824743 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:05:09.824978 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:05:09.829125 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 14 13:05:09.847261 ignition[1118]: INFO : Ignition 2.20.0 Jan 14 13:05:09.847261 ignition[1118]: INFO : Stage: umount Jan 14 13:05:09.847261 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:05:09.847261 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:05:09.847261 ignition[1118]: INFO : umount: umount passed Jan 14 13:05:09.847261 ignition[1118]: INFO : Ignition finished successfully Jan 14 13:05:09.829358 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:05:09.833515 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:05:09.833632 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:05:09.844082 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:05:09.844197 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:05:09.850424 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:05:09.850522 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:05:09.853076 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:05:09.853116 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:05:09.856141 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:05:09.856177 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:05:09.859092 systemd[1]: Stopped target network.target - Network. Jan 14 13:05:09.861688 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:05:09.861756 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:05:09.867072 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:05:09.874415 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 14 13:05:09.877842 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:05:09.883139 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:05:09.885742 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:05:09.888555 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:05:09.888621 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:05:09.899474 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:05:09.899525 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:05:09.905256 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:05:09.905324 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:05:09.908224 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:05:09.908275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:05:09.923446 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:05:09.926215 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:05:09.927489 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:05:09.934649 systemd-networkd[878]: eth0: DHCPv6 lease lost Jan 14 13:05:10.069645 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: Data path switched from VF: enP38338s1 Jan 14 13:05:09.942537 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:05:09.942893 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:05:09.948219 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:05:09.948378 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:05:09.955366 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:05:09.955432 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 14 13:05:09.974775 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:05:09.980440 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:05:09.980509 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:05:09.984492 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:05:09.984542 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:05:09.989930 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:05:09.989976 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:05:09.990799 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:05:09.990836 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:05:09.991336 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:05:10.019940 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:05:10.020089 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:05:10.026362 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:05:10.026452 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:05:10.031889 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:05:10.031933 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:05:10.032036 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:05:10.032080 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:05:10.033472 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:05:10.033508 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:05:10.035445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 14 13:05:10.035482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:05:10.053922 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:05:10.054768 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:05:10.054833 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:05:10.055292 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 13:05:10.055332 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:05:10.056181 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:05:10.056215 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:05:10.057123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:05:10.057160 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:05:10.077933 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:05:10.078032 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:05:10.121251 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:05:10.121379 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:05:10.635628 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:05:10.635798 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:05:10.639326 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:05:10.644265 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:05:10.644333 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:05:10.659812 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 14 13:05:10.679127 systemd[1]: Switching root. Jan 14 13:05:10.726953 systemd-journald[177]: Journal stopped Jan 14 13:05:15.425857 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jan 14 13:05:15.425902 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 13:05:15.425923 kernel: SELinux: policy capability open_perms=1 Jan 14 13:05:15.425938 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 13:05:15.425951 kernel: SELinux: policy capability always_check_network=0 Jan 14 13:05:15.425965 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 13:05:15.425981 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 13:05:15.425998 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 13:05:15.426012 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 13:05:15.426026 kernel: audit: type=1403 audit(1736859912.325:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 13:05:15.426042 systemd[1]: Successfully loaded SELinux policy in 129.992ms. Jan 14 13:05:15.426059 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.729ms. Jan 14 13:05:15.426077 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:05:15.426093 systemd[1]: Detected virtualization microsoft. Jan 14 13:05:15.426113 systemd[1]: Detected architecture x86-64. Jan 14 13:05:15.426129 systemd[1]: Detected first boot. Jan 14 13:05:15.426146 systemd[1]: Hostname set to . Jan 14 13:05:15.426162 systemd[1]: Initializing machine ID from random generator. Jan 14 13:05:15.426178 zram_generator::config[1160]: No configuration found. Jan 14 13:05:15.426198 systemd[1]: Populated /etc with preset unit settings. 
Jan 14 13:05:15.426214 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 13:05:15.426230 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 13:05:15.426246 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 13:05:15.426265 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 13:05:15.426281 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 13:05:15.426298 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 13:05:15.426317 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 13:05:15.426334 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 13:05:15.426351 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 13:05:15.426368 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 13:05:15.426384 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 13:05:15.426401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:05:15.426417 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:05:15.426434 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 13:05:15.426453 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 13:05:15.426470 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 13:05:15.426487 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:05:15.426503 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 14 13:05:15.426520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:05:15.426537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 13:05:15.426558 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 13:05:15.426575 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 13:05:15.426595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 13:05:15.429269 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:05:15.429294 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:05:15.429313 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:05:15.429330 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:05:15.429347 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 13:05:15.429365 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 13:05:15.429382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:05:15.429404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:05:15.429422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:05:15.429440 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 13:05:15.429457 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 13:05:15.429478 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 13:05:15.429495 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 13:05:15.429513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:05:15.429531 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 14 13:05:15.429548 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 13:05:15.429565 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 13:05:15.429584 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 13:05:15.429610 systemd[1]: Reached target machines.target - Containers. Jan 14 13:05:15.429631 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 13:05:15.429649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:05:15.429667 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:05:15.429686 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 13:05:15.429704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:05:15.429721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:05:15.429739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:05:15.429756 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 13:05:15.429774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:05:15.429794 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 13:05:15.429812 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 13:05:15.429829 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 13:05:15.429847 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 13:05:15.429864 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 14 13:05:15.429881 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:05:15.429899 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:05:15.429916 kernel: loop: module loaded Jan 14 13:05:15.429935 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 13:05:15.429952 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 13:05:15.429970 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:05:15.429987 systemd[1]: verity-setup.service: Deactivated successfully. Jan 14 13:05:15.430004 kernel: fuse: init (API version 7.39) Jan 14 13:05:15.430020 systemd[1]: Stopped verity-setup.service. Jan 14 13:05:15.430038 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:05:15.430057 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 13:05:15.430098 systemd-journald[1266]: Collecting audit messages is disabled. Jan 14 13:05:15.434005 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 13:05:15.434034 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 13:05:15.434049 kernel: ACPI: bus type drm_connector registered Jan 14 13:05:15.434066 systemd-journald[1266]: Journal started Jan 14 13:05:15.434103 systemd-journald[1266]: Runtime Journal (/run/log/journal/78cfd87ac8e0449dbe8ff4f381b5b015) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:05:14.620796 systemd[1]: Queued start job for default target multi-user.target. Jan 14 13:05:14.758695 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 13:05:14.759060 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 13:05:15.442062 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 14 13:05:15.449393 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:05:15.450806 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 13:05:15.454338 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 13:05:15.457731 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 13:05:15.462666 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:05:15.467301 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 13:05:15.467596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 13:05:15.472024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:05:15.472334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:05:15.476216 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:05:15.476508 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:05:15.480947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:05:15.481258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:05:15.488494 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 13:05:15.488808 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 13:05:15.492536 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:05:15.492889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:05:15.496733 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:05:15.500438 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 13:05:15.504238 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 13:05:15.508224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:05:15.521525 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 13:05:15.529697 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 13:05:15.534204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 13:05:15.537723 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 13:05:15.537865 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:05:15.541843 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 14 13:05:15.546845 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 13:05:15.555743 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 13:05:15.558788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:05:15.619806 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 13:05:15.624441 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 13:05:15.627595 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:05:15.630586 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 13:05:15.633643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:05:15.643356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:05:15.657826 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 13:05:15.665617 systemd-journald[1266]: Time spent on flushing to /var/log/journal/78cfd87ac8e0449dbe8ff4f381b5b015 is 29.356ms for 961 entries.
Jan 14 13:05:15.665617 systemd-journald[1266]: System Journal (/var/log/journal/78cfd87ac8e0449dbe8ff4f381b5b015) is 8.0M, max 2.6G, 2.6G free.
Jan 14 13:05:15.719450 systemd-journald[1266]: Received client request to flush runtime journal.
Jan 14 13:05:15.665634 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:05:15.678839 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 14 13:05:15.684621 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 13:05:15.691293 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 13:05:15.695233 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 13:05:15.706562 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 13:05:15.715421 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 13:05:15.725617 kernel: loop0: detected capacity change from 0 to 141000
Jan 14 13:05:15.733790 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 14 13:05:15.738326 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 13:05:15.745334 udevadm[1301]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 14 13:05:15.767477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:05:15.791609 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 13:05:15.793244 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 14 13:05:15.824089 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Jan 14 13:05:15.824115 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Jan 14 13:05:15.831557 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:05:15.840839 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 13:05:16.113929 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 13:05:16.127773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:05:16.146872 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Jan 14 13:05:16.146899 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Jan 14 13:05:16.153443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:05:16.160356 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 13:05:16.201657 kernel: loop1: detected capacity change from 0 to 28304
Jan 14 13:05:16.622646 kernel: loop2: detected capacity change from 0 to 210664
Jan 14 13:05:16.674630 kernel: loop3: detected capacity change from 0 to 138184
Jan 14 13:05:17.042169 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 13:05:17.054847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:05:17.076876 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jan 14 13:05:17.281345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:05:17.293005 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:05:17.322204 kernel: loop4: detected capacity change from 0 to 141000
Jan 14 13:05:17.353732 kernel: loop5: detected capacity change from 0 to 28304
Jan 14 13:05:17.373848 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 13:05:17.389414 kernel: loop6: detected capacity change from 0 to 210664
Jan 14 13:05:17.409330 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 13:05:17.438620 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 13:05:17.447627 kernel: loop7: detected capacity change from 0 to 138184
Jan 14 13:05:17.471621 kernel: hv_vmbus: registering driver hyperv_fb
Jan 14 13:05:17.486161 kernel: hv_vmbus: registering driver hv_balloon
Jan 14 13:05:17.486278 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 14 13:05:17.494100 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 14 13:05:17.494177 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 14 13:05:17.503359 kernel: Console: switching to colour dummy device 80x25
Jan 14 13:05:17.504069 (sd-merge)[1349]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 14 13:05:17.505625 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:05:17.507374 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 13:05:17.509132 (sd-merge)[1349]: Merged extensions into '/usr'.
Jan 14 13:05:17.590440 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 13:05:17.590497 systemd[1]: Reloading...
Jan 14 13:05:17.791625 zram_generator::config[1403]: No configuration found.
Jan 14 13:05:17.839419 systemd-networkd[1334]: lo: Link UP
Jan 14 13:05:17.840652 systemd-networkd[1334]: lo: Gained carrier
Jan 14 13:05:17.854275 systemd-networkd[1334]: Enumeration completed
Jan 14 13:05:17.856120 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:17.857898 systemd-networkd[1334]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:05:17.919661 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1333)
Jan 14 13:05:17.929631 kernel: mlx5_core 95c2:00:02.0 enP38338s1: Link up
Jan 14 13:05:17.950929 kernel: hv_netvsc 000d3ab8-770d-000d-3ab8-770d000d3ab8 eth0: Data path switched to VF: enP38338s1
Jan 14 13:05:17.955211 systemd-networkd[1334]: enP38338s1: Link UP
Jan 14 13:05:17.955349 systemd-networkd[1334]: eth0: Link UP
Jan 14 13:05:17.955356 systemd-networkd[1334]: eth0: Gained carrier
Jan 14 13:05:17.955377 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:05:17.960389 systemd-networkd[1334]: enP38338s1: Gained carrier
Jan 14 13:05:17.985717 systemd-networkd[1334]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 13:05:17.997801 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 14 13:05:18.146315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:05:18.234622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 13:05:18.239608 systemd[1]: Reloading finished in 645 ms.
Jan 14 13:05:18.266193 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:05:18.270040 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 13:05:18.309892 systemd[1]: Starting ensure-sysext.service...
Jan 14 13:05:18.317718 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 13:05:18.326919 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 13:05:18.331672 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:05:18.342903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:05:18.351253 systemd[1]: Reloading requested from client PID 1518 ('systemctl') (unit ensure-sysext.service)...
Jan 14 13:05:18.351275 systemd[1]: Reloading...
Jan 14 13:05:18.396351 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 13:05:18.397748 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 14 13:05:18.399130 systemd-tmpfiles[1521]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 14 13:05:18.399548 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
Jan 14 13:05:18.399641 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
Jan 14 13:05:18.422945 systemd-tmpfiles[1521]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:05:18.422963 systemd-tmpfiles[1521]: Skipping /boot
Jan 14 13:05:18.445634 zram_generator::config[1555]: No configuration found.
Jan 14 13:05:18.454521 systemd-tmpfiles[1521]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:05:18.454542 systemd-tmpfiles[1521]: Skipping /boot
Jan 14 13:05:18.594156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 13:05:18.680856 systemd[1]: Reloading finished in 329 ms.
Jan 14 13:05:18.697846 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 14 13:05:18.708100 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 13:05:18.714211 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:05:18.729013 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:05:18.736708 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 13:05:18.750015 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 14 13:05:18.754890 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 13:05:18.764777 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:05:18.770613 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 13:05:18.775848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:05:18.783583 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:05:18.783947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:05:18.791915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:05:18.805308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:05:18.823918 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:05:18.827343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:05:18.827528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:05:18.828822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:05:18.829012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:05:18.832822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:05:18.832993 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:05:18.844270 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:05:18.844470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:05:18.859136 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:05:18.859548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:05:18.869071 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:05:18.882349 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:05:18.896629 lvm[1625]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:05:18.899737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:05:18.906953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:05:18.915332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:05:18.915665 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 13:05:18.922083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:05:18.925178 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 13:05:18.929761 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 13:05:18.934153 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 14 13:05:18.939178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:05:18.939350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:05:18.943018 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:05:18.943185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:05:18.946790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:05:18.946977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:05:18.951272 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:05:18.951459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:05:18.964813 systemd-resolved[1627]: Positive Trust Anchors:
Jan 14 13:05:18.964836 systemd-resolved[1627]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:05:18.964889 systemd-resolved[1627]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:05:18.966820 systemd[1]: Finished ensure-sysext.service.
Jan 14 13:05:18.972855 systemd-networkd[1334]: eth0: Gained IPv6LL
Jan 14 13:05:18.975697 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:05:18.985825 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 14 13:05:18.989530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:05:18.995035 lvm[1664]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 13:05:18.989627 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:05:18.990026 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 13:05:19.000265 systemd-resolved[1627]: Using system hostname 'ci-4186.1.0-a-847249f34f'.
Jan 14 13:05:19.003183 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:05:19.006719 systemd[1]: Reached target network.target - Network.
Jan 14 13:05:19.009531 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 13:05:19.014378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:05:19.031744 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 14 13:05:19.042220 augenrules[1670]: No rules
Jan 14 13:05:19.043660 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:05:19.043891 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:05:19.100741 systemd-networkd[1334]: enP38338s1: Gained IPv6LL
Jan 14 13:05:19.539255 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 13:05:19.544025 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 13:05:22.031815 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 13:05:22.045623 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 13:05:22.053789 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 13:05:22.065243 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 13:05:22.068957 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:05:22.072015 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 13:05:22.075417 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 13:05:22.079342 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 13:05:22.082366 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 13:05:22.085858 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 13:05:22.089880 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 13:05:22.089920 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:05:22.092522 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:05:22.096436 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 13:05:22.101049 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 13:05:22.114685 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 13:05:22.118315 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 13:05:22.121786 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:05:22.124590 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:05:22.127382 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:05:22.127425 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 13:05:22.136735 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 13:05:22.141761 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 13:05:22.151798 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 13:05:22.159823 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 13:05:22.172829 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 13:05:22.183877 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 13:05:22.186795 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 13:05:22.186852 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 14 13:05:22.193816 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 14 13:05:22.197932 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 14 13:05:22.199777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 13:05:22.205794 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 13:05:22.211298 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 13:05:22.214519 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 14 13:05:22.218779 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 13:05:22.225413 jq[1689]: false
Jan 14 13:05:22.225805 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 13:05:22.237739 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 13:05:22.248888 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 13:05:22.252347 KVP[1691]: KVP starting; pid is:1691
Jan 14 13:05:22.253142 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 13:05:22.253823 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 13:05:22.266573 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 13:05:22.276900 chronyd[1710]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 13:05:22.281772 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found loop4
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found loop5
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found loop6
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found loop7
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found sda
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found sda1
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found sda2
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found sda3
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found usr
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found sda4
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found sda6
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found sda7
Jan 14 13:05:22.291022 extend-filesystems[1690]: Found sda9
Jan 14 13:05:22.291022 extend-filesystems[1690]: Checking size of /dev/sda9
Jan 14 13:05:22.338979 kernel: hv_utils: KVP IC version 4.0
Jan 14 13:05:22.328651 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 13:05:22.294266 KVP[1691]: KVP LIC Version: 3.1
Jan 14 13:05:22.334437 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 13:05:22.339255 jq[1712]: true
Jan 14 13:05:22.341505 dbus-daemon[1685]: [system] SELinux support is enabled
Jan 14 13:05:22.344993 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 13:05:22.353867 chronyd[1710]: Timezone right/UTC failed leap second check, ignoring
Jan 14 13:05:22.354149 chronyd[1710]: Loaded seccomp filter (level 2)
Jan 14 13:05:22.356489 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 13:05:22.359840 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 13:05:22.360671 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 13:05:22.375136 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 13:05:22.375377 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 13:05:22.404072 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 13:05:22.404127 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 13:05:22.410783 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 13:05:22.410838 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 13:05:22.425063 extend-filesystems[1690]: Old size kept for /dev/sda9
Jan 14 13:05:22.425063 extend-filesystems[1690]: Found sr0
Jan 14 13:05:22.428682 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 13:05:22.433690 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 13:05:22.434077 jq[1722]: true
Jan 14 13:05:22.444918 coreos-metadata[1684]: Jan 14 13:05:22.443 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:05:22.453031 coreos-metadata[1684]: Jan 14 13:05:22.447 INFO Fetch successful
Jan 14 13:05:22.453031 coreos-metadata[1684]: Jan 14 13:05:22.449 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 14 13:05:22.453174 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 13:05:22.456624 coreos-metadata[1684]: Jan 14 13:05:22.455 INFO Fetch successful
Jan 14 13:05:22.456624 coreos-metadata[1684]: Jan 14 13:05:22.456 INFO Fetching http://168.63.129.16/machine/e6ff23df-e6b7-4fd0-9a28-929d66ca7367/f2a9ea11%2Ddc2e%2D460e%2Da7d1%2Da58bf8a2cc87.%5Fci%2D4186.1.0%2Da%2D847249f34f?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 14 13:05:22.458196 coreos-metadata[1684]: Jan 14 13:05:22.457 INFO Fetch successful
Jan 14 13:05:22.458667 coreos-metadata[1684]: Jan 14 13:05:22.458 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:05:22.462473 update_engine[1702]: I20250114 13:05:22.462064 1702 main.cc:92] Flatcar Update Engine starting
Jan 14 13:05:22.466757 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 13:05:22.470627 update_engine[1702]: I20250114 13:05:22.469780 1702 update_check_scheduler.cc:74] Next update check in 7m35s
Jan 14 13:05:22.472634 coreos-metadata[1684]: Jan 14 13:05:22.472 INFO Fetch successful
Jan 14 13:05:22.478129 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 13:05:22.500718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 13:05:22.520517 tar[1721]: linux-amd64/helm
Jan 14 13:05:22.539480 systemd-logind[1701]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 13:05:22.543754 systemd-logind[1701]: New seat seat0.
Jan 14 13:05:22.556378 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 13:05:22.564563 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 13:05:22.573455 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 13:05:22.635776 bash[1766]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 13:05:22.637676 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 13:05:22.643553 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 13:05:22.686627 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1772)
Jan 14 13:05:22.895125 locksmithd[1742]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 13:05:23.519523 tar[1721]: linux-amd64/LICENSE
Jan 14 13:05:23.519827 tar[1721]: linux-amd64/README.md
Jan 14 13:05:23.539157 sshd_keygen[1714]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 13:05:23.541305 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 13:05:23.571503 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 13:05:23.585118 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 13:05:23.599073 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 13:05:23.602746 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 13:05:23.602981 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 13:05:23.621681 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 13:05:23.630779 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 14 13:05:23.649661 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 13:05:23.660081 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 13:05:23.665799 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 13:05:23.670517 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 13:05:23.876680 containerd[1723]: time="2025-01-14T13:05:23.875130300Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 14 13:05:23.908971 containerd[1723]: time="2025-01-14T13:05:23.908911900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:05:23.910754 containerd[1723]: time="2025-01-14T13:05:23.910710000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:05:23.910754 containerd[1723]: time="2025-01-14T13:05:23.910742500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 14 13:05:23.910904 containerd[1723]: time="2025-01-14T13:05:23.910762900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 14 13:05:23.910980 containerd[1723]: time="2025-01-14T13:05:23.910956700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 14 13:05:23.911035 containerd[1723]: time="2025-01-14T13:05:23.910983700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911092 containerd[1723]: time="2025-01-14T13:05:23.911068800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911130 containerd[1723]: time="2025-01-14T13:05:23.911089600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911299 containerd[1723]: time="2025-01-14T13:05:23.911274400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911299 containerd[1723]: time="2025-01-14T13:05:23.911294700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911380 containerd[1723]: time="2025-01-14T13:05:23.911313000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911380 containerd[1723]: time="2025-01-14T13:05:23.911327100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911456 containerd[1723]: time="2025-01-14T13:05:23.911425100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911677 containerd[1723]: time="2025-01-14T13:05:23.911649600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911836 containerd[1723]: time="2025-01-14T13:05:23.911811100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 14 13:05:23.911836 containerd[1723]: time="2025-01-14T13:05:23.911831900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 14 13:05:23.912142 containerd[1723]: time="2025-01-14T13:05:23.911949500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 14 13:05:23.912142 containerd[1723]: time="2025-01-14T13:05:23.912012300Z" level=info msg="metadata content store policy set" policy=shared Jan 14 13:05:23.934842 containerd[1723]: time="2025-01-14T13:05:23.934049200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 14 13:05:23.934842 containerd[1723]: time="2025-01-14T13:05:23.934130800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 14 13:05:23.934842 containerd[1723]: time="2025-01-14T13:05:23.934162000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 14 13:05:23.934842 containerd[1723]: time="2025-01-14T13:05:23.934183200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 14 13:05:23.934842 containerd[1723]: time="2025-01-14T13:05:23.934203400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 14 13:05:23.934842 containerd[1723]: time="2025-01-14T13:05:23.934387700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 14 13:05:23.934842 containerd[1723]: time="2025-01-14T13:05:23.934730300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.934864800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.934884500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.934903100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.934920800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.934937400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.934952700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.934970100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.934989000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.935012100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.935029300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.935046300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.935075000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.935099100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935164 containerd[1723]: time="2025-01-14T13:05:23.935115900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935134700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935151600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935170700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935187200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935220300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935239900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935259900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935276900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935295800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935312900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935332300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935360400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935380300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.935635 containerd[1723]: time="2025-01-14T13:05:23.935396000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 14 13:05:23.936125 containerd[1723]: time="2025-01-14T13:05:23.935450700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 14 13:05:23.936125 containerd[1723]: time="2025-01-14T13:05:23.935474300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 14 13:05:23.936125 containerd[1723]: time="2025-01-14T13:05:23.935489900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 14 13:05:23.936125 containerd[1723]: time="2025-01-14T13:05:23.935506900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 14 13:05:23.936125 containerd[1723]: time="2025-01-14T13:05:23.935522000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.936125 containerd[1723]: time="2025-01-14T13:05:23.935539300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 14 13:05:23.936125 containerd[1723]: time="2025-01-14T13:05:23.935553500Z" level=info msg="NRI interface is disabled by configuration." Jan 14 13:05:23.936125 containerd[1723]: time="2025-01-14T13:05:23.935568800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 14 13:05:23.936384 containerd[1723]: time="2025-01-14T13:05:23.936102200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 14 13:05:23.936384 containerd[1723]: time="2025-01-14T13:05:23.936168400Z" level=info msg="Connect containerd service" Jan 14 13:05:23.936384 containerd[1723]: time="2025-01-14T13:05:23.936208600Z" level=info msg="using legacy CRI server" Jan 14 13:05:23.936384 containerd[1723]: time="2025-01-14T13:05:23.936218700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:05:23.936384 containerd[1723]: 
time="2025-01-14T13:05:23.936380400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 14 13:05:23.937502 containerd[1723]: time="2025-01-14T13:05:23.937081000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:05:23.937502 containerd[1723]: time="2025-01-14T13:05:23.937373400Z" level=info msg="Start subscribing containerd event" Jan 14 13:05:23.937502 containerd[1723]: time="2025-01-14T13:05:23.937427000Z" level=info msg="Start recovering state" Jan 14 13:05:23.937502 containerd[1723]: time="2025-01-14T13:05:23.937497800Z" level=info msg="Start event monitor" Jan 14 13:05:23.939018 containerd[1723]: time="2025-01-14T13:05:23.937521600Z" level=info msg="Start snapshots syncer" Jan 14 13:05:23.939018 containerd[1723]: time="2025-01-14T13:05:23.937534100Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:05:23.939018 containerd[1723]: time="2025-01-14T13:05:23.937543500Z" level=info msg="Start streaming server" Jan 14 13:05:23.939018 containerd[1723]: time="2025-01-14T13:05:23.937921100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 13:05:23.939018 containerd[1723]: time="2025-01-14T13:05:23.938076000Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 13:05:23.939269 containerd[1723]: time="2025-01-14T13:05:23.939244900Z" level=info msg="containerd successfully booted in 0.065143s" Jan 14 13:05:23.939337 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:05:23.971748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:05:23.976696 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 14 13:05:23.981677 systemd[1]: Startup finished in 1.376s (firmware) + 29.365s (loader) + 1.033s (kernel) + 13.975s (initrd) + 11.784s (userspace) = 57.535s. Jan 14 13:05:24.056060 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:05:24.096105 agetty[1861]: failed to open credentials directory Jan 14 13:05:24.097144 agetty[1860]: failed to open credentials directory Jan 14 13:05:24.349352 login[1860]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:05:24.352814 login[1861]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 14 13:05:24.365696 systemd-logind[1701]: New session 2 of user core. Jan 14 13:05:24.367688 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:05:24.374883 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 13:05:24.378104 systemd-logind[1701]: New session 1 of user core. Jan 14 13:05:24.399821 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 13:05:24.406943 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 13:05:24.416762 (systemd)[1882]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 13:05:24.618767 systemd[1882]: Queued start job for default target default.target. Jan 14 13:05:24.624865 systemd[1882]: Created slice app.slice - User Application Slice. Jan 14 13:05:24.624904 systemd[1882]: Reached target paths.target - Paths. Jan 14 13:05:24.624923 systemd[1882]: Reached target timers.target - Timers. Jan 14 13:05:24.626452 systemd[1882]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 13:05:24.646801 systemd[1882]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 13:05:24.646957 systemd[1882]: Reached target sockets.target - Sockets. 
Jan 14 13:05:24.646981 systemd[1882]: Reached target basic.target - Basic System. Jan 14 13:05:24.647033 systemd[1882]: Reached target default.target - Main User Target. Jan 14 13:05:24.647072 systemd[1882]: Startup finished in 222ms. Jan 14 13:05:24.647218 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 13:05:24.653982 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 13:05:24.655119 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 13:05:24.793481 kubelet[1870]: E0114 13:05:24.793419 1870 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:05:24.796085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:05:24.796298 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 13:05:25.537031 waagent[1858]: 2025-01-14T13:05:25.536928Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 14 13:05:25.540769 waagent[1858]: 2025-01-14T13:05:25.540693Z INFO Daemon Daemon OS: flatcar 4186.1.0 Jan 14 13:05:25.543476 waagent[1858]: 2025-01-14T13:05:25.543411Z INFO Daemon Daemon Python: 3.11.10 Jan 14 13:05:25.547859 waagent[1858]: 2025-01-14T13:05:25.547787Z INFO Daemon Daemon Run daemon Jan 14 13:05:25.550216 waagent[1858]: 2025-01-14T13:05:25.550160Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0' Jan 14 13:05:25.555140 waagent[1858]: 2025-01-14T13:05:25.555079Z INFO Daemon Daemon Using waagent for provisioning Jan 14 13:05:25.558395 waagent[1858]: 2025-01-14T13:05:25.558342Z INFO Daemon Daemon Activate resource disk Jan 14 13:05:25.561008 waagent[1858]: 2025-01-14T13:05:25.560946Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 13:05:25.570207 waagent[1858]: 2025-01-14T13:05:25.570135Z INFO Daemon Daemon Found device: None Jan 14 13:05:25.576956 waagent[1858]: 2025-01-14T13:05:25.576886Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 13:05:25.608073 waagent[1858]: 2025-01-14T13:05:25.581484Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 13:05:25.608073 waagent[1858]: 2025-01-14T13:05:25.582198Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:05:25.608073 waagent[1858]: 2025-01-14T13:05:25.582368Z INFO Daemon Daemon Running default provisioning handler Jan 14 13:05:25.608073 waagent[1858]: 2025-01-14T13:05:25.590820Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jan 14 13:05:25.608073 waagent[1858]: 2025-01-14T13:05:25.593624Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 13:05:25.608073 waagent[1858]: 2025-01-14T13:05:25.594441Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 13:05:25.608073 waagent[1858]: 2025-01-14T13:05:25.595509Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 13:05:25.699621 waagent[1858]: 2025-01-14T13:05:25.696914Z INFO Daemon Daemon Successfully mounted dvd Jan 14 13:05:25.711474 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 13:05:25.713431 waagent[1858]: 2025-01-14T13:05:25.713010Z INFO Daemon Daemon Detect protocol endpoint Jan 14 13:05:25.716043 waagent[1858]: 2025-01-14T13:05:25.715888Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 13:05:25.729555 waagent[1858]: 2025-01-14T13:05:25.716139Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 14 13:05:25.729555 waagent[1858]: 2025-01-14T13:05:25.717086Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 13:05:25.729555 waagent[1858]: 2025-01-14T13:05:25.717719Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 13:05:25.729555 waagent[1858]: 2025-01-14T13:05:25.718048Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 13:05:25.743032 waagent[1858]: 2025-01-14T13:05:25.742971Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 13:05:25.751920 waagent[1858]: 2025-01-14T13:05:25.743493Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 13:05:25.751920 waagent[1858]: 2025-01-14T13:05:25.744271Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 13:05:25.829075 waagent[1858]: 2025-01-14T13:05:25.828911Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 13:05:25.832500 waagent[1858]: 2025-01-14T13:05:25.832365Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 14 13:05:25.837700 waagent[1858]: 2025-01-14T13:05:25.837644Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:05:26.735021 waagent[1858]: 2025-01-14T13:05:26.734942Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 14 13:05:26.755193 waagent[1858]: 2025-01-14T13:05:26.735922Z INFO Daemon Jan 14 13:05:26.755193 waagent[1858]: 2025-01-14T13:05:26.737110Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6d2acdc5-a5af-469f-bd74-42aba09ccd56 eTag: 2722384851472531648 source: Fabric] Jan 14 13:05:26.755193 waagent[1858]: 2025-01-14T13:05:26.738436Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 14 13:05:26.755193 waagent[1858]: 2025-01-14T13:05:26.739580Z INFO Daemon Jan 14 13:05:26.755193 waagent[1858]: 2025-01-14T13:05:26.740539Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:05:26.755193 waagent[1858]: 2025-01-14T13:05:26.744852Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 13:05:26.815391 waagent[1858]: 2025-01-14T13:05:26.815300Z INFO Daemon Downloaded certificate {'thumbprint': 'FB5D1FF01793D7297430147F71FA609F4E74EAEA', 'hasPrivateKey': True} Jan 14 13:05:26.820834 waagent[1858]: 2025-01-14T13:05:26.820760Z INFO Daemon Fetch goal state completed Jan 14 13:05:26.829025 waagent[1858]: 2025-01-14T13:05:26.828973Z INFO Daemon Daemon Starting provisioning Jan 14 13:05:26.837019 waagent[1858]: 2025-01-14T13:05:26.829213Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 14 13:05:26.837019 waagent[1858]: 2025-01-14T13:05:26.830262Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-847249f34f] Jan 14 13:05:26.875337 waagent[1858]: 2025-01-14T13:05:26.875235Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-847249f34f] Jan 14 13:05:26.885933 waagent[1858]: 2025-01-14T13:05:26.875907Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 13:05:26.885933 waagent[1858]: 2025-01-14T13:05:26.876417Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 13:05:26.901985 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:05:26.901995 systemd-networkd[1334]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:05:26.902043 systemd-networkd[1334]: eth0: DHCP lease lost Jan 14 13:05:26.903265 waagent[1858]: 2025-01-14T13:05:26.903176Z INFO Daemon Daemon Create user account if not exists Jan 14 13:05:26.924005 waagent[1858]: 2025-01-14T13:05:26.903555Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 13:05:26.924005 waagent[1858]: 2025-01-14T13:05:26.904538Z INFO Daemon Daemon Configure sudoer Jan 14 13:05:26.924005 waagent[1858]: 2025-01-14T13:05:26.905274Z INFO Daemon Daemon Configure sshd Jan 14 13:05:26.924005 waagent[1858]: 2025-01-14T13:05:26.906250Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 13:05:26.924005 waagent[1858]: 2025-01-14T13:05:26.906443Z INFO Daemon Daemon Deploy ssh public key. 
Jan 14 13:05:26.926739 systemd-networkd[1334]: eth0: DHCPv6 lease lost Jan 14 13:05:26.965689 systemd-networkd[1334]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 14 13:05:28.027149 waagent[1858]: 2025-01-14T13:05:28.027072Z INFO Daemon Daemon Provisioning complete Jan 14 13:05:28.043066 waagent[1858]: 2025-01-14T13:05:28.042987Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 13:05:28.051643 waagent[1858]: 2025-01-14T13:05:28.043369Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 14 13:05:28.051643 waagent[1858]: 2025-01-14T13:05:28.045512Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 14 13:05:28.171260 waagent[1936]: 2025-01-14T13:05:28.171153Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 14 13:05:28.171715 waagent[1936]: 2025-01-14T13:05:28.171322Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0 Jan 14 13:05:28.171715 waagent[1936]: 2025-01-14T13:05:28.171403Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 14 13:05:28.446066 waagent[1936]: 2025-01-14T13:05:28.445895Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 14 13:05:28.446292 waagent[1936]: 2025-01-14T13:05:28.446226Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:05:28.446414 waagent[1936]: 2025-01-14T13:05:28.446362Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:05:28.454397 waagent[1936]: 2025-01-14T13:05:28.454324Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:05:28.459955 waagent[1936]: 2025-01-14T13:05:28.459901Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 14 13:05:28.460424 waagent[1936]: 2025-01-14T13:05:28.460370Z 
INFO ExtHandler Jan 14 13:05:28.460514 waagent[1936]: 2025-01-14T13:05:28.460461Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3c2e853c-d29c-4079-8d1f-6f0b599bdf5a eTag: 2722384851472531648 source: Fabric] Jan 14 13:05:28.460845 waagent[1936]: 2025-01-14T13:05:28.460796Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 14 13:05:28.461384 waagent[1936]: 2025-01-14T13:05:28.461334Z INFO ExtHandler Jan 14 13:05:28.461445 waagent[1936]: 2025-01-14T13:05:28.461413Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:05:28.465241 waagent[1936]: 2025-01-14T13:05:28.465195Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 13:05:28.539404 waagent[1936]: 2025-01-14T13:05:28.539313Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FB5D1FF01793D7297430147F71FA609F4E74EAEA', 'hasPrivateKey': True} Jan 14 13:05:28.539946 waagent[1936]: 2025-01-14T13:05:28.539887Z INFO ExtHandler Fetch goal state completed Jan 14 13:05:28.556105 waagent[1936]: 2025-01-14T13:05:28.556024Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1936 Jan 14 13:05:28.556271 waagent[1936]: 2025-01-14T13:05:28.556219Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 13:05:28.557874 waagent[1936]: 2025-01-14T13:05:28.557816Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 13:05:28.558235 waagent[1936]: 2025-01-14T13:05:28.558185Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 13:05:28.585392 waagent[1936]: 2025-01-14T13:05:28.585331Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 13:05:28.585685 waagent[1936]: 2025-01-14T13:05:28.585627Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 13:05:28.592679 waagent[1936]: 2025-01-14T13:05:28.592629Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 13:05:28.600171 systemd[1]: Reloading requested from client PID 1949 ('systemctl') (unit waagent.service)... Jan 14 13:05:28.600189 systemd[1]: Reloading... Jan 14 13:05:28.696636 zram_generator::config[1979]: No configuration found. Jan 14 13:05:28.814984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:05:28.901768 systemd[1]: Reloading finished in 301 ms. Jan 14 13:05:28.929635 waagent[1936]: 2025-01-14T13:05:28.927845Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 13:05:28.935827 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit waagent.service)... Jan 14 13:05:28.935842 systemd[1]: Reloading... Jan 14 13:05:29.020725 zram_generator::config[2073]: No configuration found. Jan 14 13:05:29.147504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:05:29.241746 systemd[1]: Reloading finished in 305 ms. Jan 14 13:05:29.271808 waagent[1936]: 2025-01-14T13:05:29.267125Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 13:05:29.271808 waagent[1936]: 2025-01-14T13:05:29.267340Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 13:05:29.606787 waagent[1936]: 2025-01-14T13:05:29.606568Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jan 14 13:05:29.607409 waagent[1936]: 2025-01-14T13:05:29.607330Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 13:05:29.608354 waagent[1936]: 2025-01-14T13:05:29.608266Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 13:05:29.608493 waagent[1936]: 2025-01-14T13:05:29.608430Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:05:29.609052 waagent[1936]: 2025-01-14T13:05:29.608985Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 13:05:29.609213 waagent[1936]: 2025-01-14T13:05:29.609154Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:05:29.609315 waagent[1936]: 2025-01-14T13:05:29.609261Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:05:29.610062 waagent[1936]: 2025-01-14T13:05:29.609992Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 13:05:29.610179 waagent[1936]: 2025-01-14T13:05:29.610125Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 13:05:29.610890 waagent[1936]: 2025-01-14T13:05:29.610834Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 13:05:29.611049 waagent[1936]: 2025-01-14T13:05:29.610987Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 13:05:29.611200 waagent[1936]: 2025-01-14T13:05:29.611146Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 13:05:29.611423 waagent[1936]: 2025-01-14T13:05:29.611373Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 14 13:05:29.611556 waagent[1936]: 2025-01-14T13:05:29.611500Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:05:29.612800 waagent[1936]: 2025-01-14T13:05:29.612753Z INFO EnvHandler ExtHandler Configure routes Jan 14 13:05:29.613057 waagent[1936]: 2025-01-14T13:05:29.613015Z INFO EnvHandler ExtHandler Gateway:None Jan 14 13:05:29.613161 waagent[1936]: 2025-01-14T13:05:29.613111Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 13:05:29.613161 waagent[1936]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 13:05:29.613161 waagent[1936]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 13:05:29.613161 waagent[1936]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 13:05:29.613161 waagent[1936]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:05:29.613161 waagent[1936]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:05:29.613161 waagent[1936]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:05:29.614120 waagent[1936]: 2025-01-14T13:05:29.614078Z INFO EnvHandler ExtHandler Routes:None Jan 14 13:05:29.619184 waagent[1936]: 2025-01-14T13:05:29.619134Z INFO ExtHandler ExtHandler Jan 14 13:05:29.619294 waagent[1936]: 2025-01-14T13:05:29.619243Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0b89edc7-35cb-40a5-bbbc-3b32346610bc correlation 3ac1cb51-f8ca-45c6-b86a-17dd09361c88 created: 2025-01-14T13:04:12.164841Z] Jan 14 13:05:29.620568 waagent[1936]: 2025-01-14T13:05:29.620526Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 14 13:05:29.621668 waagent[1936]: 2025-01-14T13:05:29.621622Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 14 13:05:29.656704 waagent[1936]: 2025-01-14T13:05:29.656550Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F91BE1A9-DCF1-46FE-9AF6-94F5FE811EDA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 13:05:29.692626 waagent[1936]: 2025-01-14T13:05:29.692386Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 14 13:05:29.692626 waagent[1936]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:29.692626 waagent[1936]: pkts bytes target prot opt in out source destination Jan 14 13:05:29.692626 waagent[1936]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:29.692626 waagent[1936]: pkts bytes target prot opt in out source destination Jan 14 13:05:29.692626 waagent[1936]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:29.692626 waagent[1936]: pkts bytes target prot opt in out source destination Jan 14 13:05:29.692626 waagent[1936]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:05:29.692626 waagent[1936]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:05:29.692626 waagent[1936]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:05:29.694411 waagent[1936]: 2025-01-14T13:05:29.694339Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 13:05:29.694411 waagent[1936]: Executing ['ip', '-a', '-o', 'link']: Jan 14 13:05:29.694411 waagent[1936]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 13:05:29.694411 waagent[1936]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b8:77:0d brd ff:ff:ff:ff:ff:ff Jan 14 13:05:29.694411 
waagent[1936]: 3: enP38338s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b8:77:0d brd ff:ff:ff:ff:ff:ff\ altname enP38338p0s2 Jan 14 13:05:29.694411 waagent[1936]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 13:05:29.694411 waagent[1936]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 13:05:29.694411 waagent[1936]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 13:05:29.694411 waagent[1936]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 13:05:29.694411 waagent[1936]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 13:05:29.694411 waagent[1936]: 2: eth0 inet6 fe80::20d:3aff:feb8:770d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:05:29.694411 waagent[1936]: 3: enP38338s1 inet6 fe80::20d:3aff:feb8:770d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:05:29.715231 waagent[1936]: 2025-01-14T13:05:29.714670Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 13:05:29.715231 waagent[1936]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:29.715231 waagent[1936]: pkts bytes target prot opt in out source destination Jan 14 13:05:29.715231 waagent[1936]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:29.715231 waagent[1936]: pkts bytes target prot opt in out source destination Jan 14 13:05:29.715231 waagent[1936]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:05:29.715231 waagent[1936]: pkts bytes target prot opt in out source destination Jan 14 13:05:29.715231 waagent[1936]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:05:29.715231 waagent[1936]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:05:29.715231 waagent[1936]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 
14 13:05:29.715231 waagent[1936]: 2025-01-14T13:05:29.715055Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 13:05:34.934052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:05:34.939823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:05:35.032116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:05:35.036738 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:05:35.670358 kubelet[2172]: E0114 13:05:35.669931 2172 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:05:35.674787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:05:35.674986 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:05:45.684199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:05:45.690880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:05:45.783295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:05:45.788440 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:05:46.143580 chronyd[1710]: Selected source PHC0 Jan 14 13:05:46.358165 kubelet[2188]: E0114 13:05:46.358109 2188 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:05:46.360758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:05:46.360947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:05:56.434256 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:05:56.439865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:05:56.533885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:05:56.545996 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:05:57.114765 kubelet[2204]: E0114 13:05:57.114695 2204 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:05:57.117084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:05:57.117270 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:06:05.614675 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jan 14 13:06:07.184179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 13:06:07.189851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:07.293576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:06:07.304997 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:06:07.344462 kubelet[2220]: E0114 13:06:07.344408 2220 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:06:07.346856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:06:07.347067 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:06:08.188850 update_engine[1702]: I20250114 13:06:08.188740 1702 update_attempter.cc:509] Updating boot flags... Jan 14 13:06:08.240660 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2243) Jan 14 13:06:08.382782 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2245) Jan 14 13:06:17.434074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 13:06:17.439871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:17.538963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:06:17.544428 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:06:17.584905 kubelet[2350]: E0114 13:06:17.584849 2350 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:06:17.587549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:06:17.587780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:06:19.848564 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:06:19.853941 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.16.10:44776.service - OpenSSH per-connection server daemon (10.200.16.10:44776). Jan 14 13:06:20.638934 sshd[2359]: Accepted publickey for core from 10.200.16.10 port 44776 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:06:20.640641 sshd-session[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:06:20.646461 systemd-logind[1701]: New session 3 of user core. Jan 14 13:06:20.651806 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:06:21.199647 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.16.10:44788.service - OpenSSH per-connection server daemon (10.200.16.10:44788). Jan 14 13:06:21.849832 sshd[2364]: Accepted publickey for core from 10.200.16.10 port 44788 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:06:21.851517 sshd-session[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:06:21.857081 systemd-logind[1701]: New session 4 of user core. Jan 14 13:06:21.863778 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 14 13:06:22.303779 sshd[2366]: Connection closed by 10.200.16.10 port 44788 Jan 14 13:06:22.304742 sshd-session[2364]: pam_unix(sshd:session): session closed for user core Jan 14 13:06:22.308027 systemd[1]: sshd@1-10.200.8.4:22-10.200.16.10:44788.service: Deactivated successfully. Jan 14 13:06:22.310147 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:06:22.311640 systemd-logind[1701]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:06:22.312671 systemd-logind[1701]: Removed session 4. Jan 14 13:06:22.420635 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.16.10:44802.service - OpenSSH per-connection server daemon (10.200.16.10:44802). Jan 14 13:06:23.069638 sshd[2371]: Accepted publickey for core from 10.200.16.10 port 44802 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:06:23.071347 sshd-session[2371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:06:23.077148 systemd-logind[1701]: New session 5 of user core. Jan 14 13:06:23.084774 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:06:23.519562 sshd[2373]: Connection closed by 10.200.16.10 port 44802 Jan 14 13:06:23.520411 sshd-session[2371]: pam_unix(sshd:session): session closed for user core Jan 14 13:06:23.523333 systemd[1]: sshd@2-10.200.8.4:22-10.200.16.10:44802.service: Deactivated successfully. Jan 14 13:06:23.525409 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:06:23.527116 systemd-logind[1701]: Session 5 logged out. Waiting for processes to exit. Jan 14 13:06:23.528015 systemd-logind[1701]: Removed session 5. Jan 14 13:06:23.639442 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.16.10:44812.service - OpenSSH per-connection server daemon (10.200.16.10:44812). 
Jan 14 13:06:24.283847 sshd[2378]: Accepted publickey for core from 10.200.16.10 port 44812 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:06:24.285480 sshd-session[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:06:24.290625 systemd-logind[1701]: New session 6 of user core. Jan 14 13:06:24.298779 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:06:24.745891 sshd[2380]: Connection closed by 10.200.16.10 port 44812 Jan 14 13:06:24.746802 sshd-session[2378]: pam_unix(sshd:session): session closed for user core Jan 14 13:06:24.750905 systemd[1]: sshd@3-10.200.8.4:22-10.200.16.10:44812.service: Deactivated successfully. Jan 14 13:06:24.752834 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:06:24.753509 systemd-logind[1701]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:06:24.754523 systemd-logind[1701]: Removed session 6. Jan 14 13:06:24.858486 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.16.10:44814.service - OpenSSH per-connection server daemon (10.200.16.10:44814). Jan 14 13:06:25.502820 sshd[2385]: Accepted publickey for core from 10.200.16.10 port 44814 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:06:25.504454 sshd-session[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:06:25.510058 systemd-logind[1701]: New session 7 of user core. Jan 14 13:06:25.515760 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 14 13:06:26.015718 sudo[2388]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:06:26.016083 sudo[2388]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:06:26.052153 sudo[2388]: pam_unix(sudo:session): session closed for user root Jan 14 13:06:26.158045 sshd[2387]: Connection closed by 10.200.16.10 port 44814 Jan 14 13:06:26.159312 sshd-session[2385]: pam_unix(sshd:session): session closed for user core Jan 14 13:06:26.163906 systemd[1]: sshd@4-10.200.8.4:22-10.200.16.10:44814.service: Deactivated successfully. Jan 14 13:06:26.166094 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:06:26.167087 systemd-logind[1701]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:06:26.168227 systemd-logind[1701]: Removed session 7. Jan 14 13:06:26.277227 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.16.10:38686.service - OpenSSH per-connection server daemon (10.200.16.10:38686). Jan 14 13:06:26.916383 sshd[2393]: Accepted publickey for core from 10.200.16.10 port 38686 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:06:26.917975 sshd-session[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:06:26.922483 systemd-logind[1701]: New session 8 of user core. Jan 14 13:06:26.929752 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 13:06:27.267150 sudo[2397]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:06:27.267511 sudo[2397]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:06:27.270736 sudo[2397]: pam_unix(sudo:session): session closed for user root Jan 14 13:06:27.275873 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:06:27.276214 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:06:27.289248 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:06:27.316482 augenrules[2419]: No rules Jan 14 13:06:27.318002 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:06:27.318236 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:06:27.319768 sudo[2396]: pam_unix(sudo:session): session closed for user root Jan 14 13:06:27.424392 sshd[2395]: Connection closed by 10.200.16.10 port 38686 Jan 14 13:06:27.425293 sshd-session[2393]: pam_unix(sshd:session): session closed for user core Jan 14 13:06:27.428452 systemd[1]: sshd@5-10.200.8.4:22-10.200.16.10:38686.service: Deactivated successfully. Jan 14 13:06:27.430514 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:06:27.432059 systemd-logind[1701]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:06:27.433018 systemd-logind[1701]: Removed session 8. Jan 14 13:06:27.537696 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.16.10:38700.service - OpenSSH per-connection server daemon (10.200.16.10:38700). Jan 14 13:06:27.684061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 14 13:06:27.689916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:27.824842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:06:27.829811 (kubelet)[2437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:06:27.875186 kubelet[2437]: E0114 13:06:27.875133 2437 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:06:27.877643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:06:27.877836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:06:28.380138 sshd[2427]: Accepted publickey for core from 10.200.16.10 port 38700 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:06:28.380476 sshd-session[2427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:06:28.388207 systemd-logind[1701]: New session 9 of user core. Jan 14 13:06:28.394777 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:06:28.648775 sudo[2445]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:06:28.649141 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:06:30.837920 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 13:06:30.840435 (dockerd)[2463]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 13:06:32.818710 dockerd[2463]: time="2025-01-14T13:06:32.818636298Z" level=info msg="Starting up" Jan 14 13:06:33.443988 dockerd[2463]: time="2025-01-14T13:06:33.443935829Z" level=info msg="Loading containers: start." 
Jan 14 13:06:33.678783 kernel: Initializing XFRM netlink socket Jan 14 13:06:33.781805 systemd-networkd[1334]: docker0: Link UP Jan 14 13:06:33.827078 dockerd[2463]: time="2025-01-14T13:06:33.827031957Z" level=info msg="Loading containers: done." Jan 14 13:06:33.907484 dockerd[2463]: time="2025-01-14T13:06:33.907418484Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 13:06:33.907707 dockerd[2463]: time="2025-01-14T13:06:33.907548387Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 14 13:06:33.907757 dockerd[2463]: time="2025-01-14T13:06:33.907725390Z" level=info msg="Daemon has completed initialization" Jan 14 13:06:33.969423 dockerd[2463]: time="2025-01-14T13:06:33.969356114Z" level=info msg="API listen on /run/docker.sock" Jan 14 13:06:33.969970 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 13:06:36.069241 containerd[1723]: time="2025-01-14T13:06:36.068882623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 14 13:06:36.787680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805899943.mount: Deactivated successfully. Jan 14 13:06:37.933786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 14 13:06:37.938898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:38.088163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:06:38.101046 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:06:38.167692 kubelet[2712]: E0114 13:06:38.167313 2712 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:06:38.170778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:06:38.171141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:06:38.851789 containerd[1723]: time="2025-01-14T13:06:38.851730791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:38.863098 containerd[1723]: time="2025-01-14T13:06:38.863036063Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650" Jan 14 13:06:38.867774 containerd[1723]: time="2025-01-14T13:06:38.867703776Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:38.873391 containerd[1723]: time="2025-01-14T13:06:38.873328212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:38.874557 containerd[1723]: time="2025-01-14T13:06:38.874342736Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.805414812s" Jan 14 13:06:38.874557 containerd[1723]: time="2025-01-14T13:06:38.874386337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 14 13:06:38.898400 containerd[1723]: time="2025-01-14T13:06:38.898356416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 14 13:06:41.064209 containerd[1723]: time="2025-01-14T13:06:41.064145490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:41.069536 containerd[1723]: time="2025-01-14T13:06:41.069470618Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417" Jan 14 13:06:41.073775 containerd[1723]: time="2025-01-14T13:06:41.073705921Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:41.081106 containerd[1723]: time="2025-01-14T13:06:41.081028797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:41.082212 containerd[1723]: time="2025-01-14T13:06:41.082053322Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" 
in 2.183651105s" Jan 14 13:06:41.082212 containerd[1723]: time="2025-01-14T13:06:41.082093023Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 14 13:06:41.106317 containerd[1723]: time="2025-01-14T13:06:41.106265606Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 14 13:06:42.801611 containerd[1723]: time="2025-01-14T13:06:42.801550190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:42.806015 containerd[1723]: time="2025-01-14T13:06:42.805924584Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043" Jan 14 13:06:42.811029 containerd[1723]: time="2025-01-14T13:06:42.810955093Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:42.819551 containerd[1723]: time="2025-01-14T13:06:42.819476578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:42.820893 containerd[1723]: time="2025-01-14T13:06:42.820575601Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.714263294s" Jan 14 13:06:42.820893 containerd[1723]: time="2025-01-14T13:06:42.820639003Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference 
\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 14 13:06:42.843045 containerd[1723]: time="2025-01-14T13:06:42.843004786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 14 13:06:43.967246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187801943.mount: Deactivated successfully. Jan 14 13:06:44.461433 containerd[1723]: time="2025-01-14T13:06:44.461366880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:44.467676 containerd[1723]: time="2025-01-14T13:06:44.467570514Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 14 13:06:44.478360 containerd[1723]: time="2025-01-14T13:06:44.478283746Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:44.485343 containerd[1723]: time="2025-01-14T13:06:44.485271297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:44.486865 containerd[1723]: time="2025-01-14T13:06:44.486306319Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.643259632s" Jan 14 13:06:44.486865 containerd[1723]: time="2025-01-14T13:06:44.486353020Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 14 13:06:44.513300 
containerd[1723]: time="2025-01-14T13:06:44.513253702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 14 13:06:45.144121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800099123.mount: Deactivated successfully. Jan 14 13:06:46.557073 containerd[1723]: time="2025-01-14T13:06:46.557012594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:46.563394 containerd[1723]: time="2025-01-14T13:06:46.563320430Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 14 13:06:46.570516 containerd[1723]: time="2025-01-14T13:06:46.570325781Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:46.577932 containerd[1723]: time="2025-01-14T13:06:46.577858944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:46.579073 containerd[1723]: time="2025-01-14T13:06:46.578916567Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.065617764s" Jan 14 13:06:46.579073 containerd[1723]: time="2025-01-14T13:06:46.578957868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 14 13:06:46.602364 containerd[1723]: time="2025-01-14T13:06:46.602325073Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 14 13:06:47.236634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849539335.mount: Deactivated successfully. Jan 14 13:06:47.268182 containerd[1723]: time="2025-01-14T13:06:47.268123570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:47.272642 containerd[1723]: time="2025-01-14T13:06:47.272553266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 14 13:06:47.281224 containerd[1723]: time="2025-01-14T13:06:47.281149151Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:47.288515 containerd[1723]: time="2025-01-14T13:06:47.288445309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:47.289356 containerd[1723]: time="2025-01-14T13:06:47.289182225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 686.815951ms" Jan 14 13:06:47.289356 containerd[1723]: time="2025-01-14T13:06:47.289221526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 14 13:06:47.314531 containerd[1723]: time="2025-01-14T13:06:47.314494372Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 14 13:06:48.043024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216138863.mount: 
Deactivated successfully. Jan 14 13:06:48.183944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 14 13:06:48.189903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:48.294780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:06:48.297464 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:06:48.341001 kubelet[2824]: E0114 13:06:48.340942 2824 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:06:48.343562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:06:48.343786 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 13:06:51.221152 containerd[1723]: time="2025-01-14T13:06:51.221079959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:51.226627 containerd[1723]: time="2025-01-14T13:06:51.226528278Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 14 13:06:51.231434 containerd[1723]: time="2025-01-14T13:06:51.231355684Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:51.241952 containerd[1723]: time="2025-01-14T13:06:51.241493407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:06:51.242927 containerd[1723]: time="2025-01-14T13:06:51.242885737Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.928356164s" Jan 14 13:06:51.242927 containerd[1723]: time="2025-01-14T13:06:51.242921138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 14 13:06:54.190502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:06:54.196935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:54.229145 systemd[1]: Reloading requested from client PID 2936 ('systemctl') (unit session-9.scope)... Jan 14 13:06:54.229163 systemd[1]: Reloading... 
Jan 14 13:06:54.353632 zram_generator::config[2975]: No configuration found. Jan 14 13:06:54.493251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:06:54.586006 systemd[1]: Reloading finished in 356 ms. Jan 14 13:06:54.637280 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 13:06:54.637379 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 13:06:54.637706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:06:54.640360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:06:54.850844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:06:54.866035 (kubelet)[3047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:06:55.509814 kubelet[3047]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:06:55.509814 kubelet[3047]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:06:55.509814 kubelet[3047]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 13:06:55.510291 kubelet[3047]: I0114 13:06:55.509880 3047 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:06:55.864754 kubelet[3047]: I0114 13:06:55.864335 3047 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 14 13:06:55.864754 kubelet[3047]: I0114 13:06:55.864374 3047 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:06:55.864928 kubelet[3047]: I0114 13:06:55.864848 3047 server.go:927] "Client rotation is on, will bootstrap in background" Jan 14 13:06:55.881360 kubelet[3047]: I0114 13:06:55.880893 3047 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:06:55.882526 kubelet[3047]: E0114 13:06:55.882489 3047 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.893820 kubelet[3047]: I0114 13:06:55.893783 3047 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:06:55.894063 kubelet[3047]: I0114 13:06:55.894033 3047 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:06:55.894255 kubelet[3047]: I0114 13:06:55.894062 3047 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-847249f34f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:06:55.894402 kubelet[3047]: I0114 13:06:55.894271 3047 topology_manager.go:138] "Creating topology manager with none policy" Jan 
14 13:06:55.894402 kubelet[3047]: I0114 13:06:55.894286 3047 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:06:55.894490 kubelet[3047]: I0114 13:06:55.894434 3047 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:06:55.895314 kubelet[3047]: I0114 13:06:55.895294 3047 kubelet.go:400] "Attempting to sync node with API server" Jan 14 13:06:55.895402 kubelet[3047]: I0114 13:06:55.895349 3047 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:06:55.895402 kubelet[3047]: I0114 13:06:55.895383 3047 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:06:55.895583 kubelet[3047]: I0114 13:06:55.895407 3047 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:06:55.902905 kubelet[3047]: W0114 13:06:55.902547 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.903354 kubelet[3047]: E0114 13:06:55.903047 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.903354 kubelet[3047]: I0114 13:06:55.903151 3047 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:06:55.905481 kubelet[3047]: I0114 13:06:55.904702 3047 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:06:55.905481 kubelet[3047]: W0114 13:06:55.904778 3047 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 14 13:06:55.905481 kubelet[3047]: W0114 13:06:55.905371 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-847249f34f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.905481 kubelet[3047]: I0114 13:06:55.905418 3047 server.go:1264] "Started kubelet" Jan 14 13:06:55.905481 kubelet[3047]: E0114 13:06:55.905429 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-847249f34f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.906692 kubelet[3047]: I0114 13:06:55.906166 3047 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:06:55.907432 kubelet[3047]: I0114 13:06:55.907242 3047 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:06:55.907823 kubelet[3047]: I0114 13:06:55.907807 3047 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:06:55.908576 kubelet[3047]: I0114 13:06:55.908549 3047 server.go:455] "Adding debug handlers to kubelet server" Jan 14 13:06:55.912169 kubelet[3047]: I0114 13:06:55.912131 3047 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:06:55.917519 kubelet[3047]: E0114 13:06:55.917077 3047 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-847249f34f.181a90fb709ecd1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-847249f34f,UID:ci-4186.1.0-a-847249f34f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-847249f34f,},FirstTimestamp:2025-01-14 13:06:55.905393951 +0000 UTC m=+1.035001713,LastTimestamp:2025-01-14 13:06:55.905393951 +0000 UTC m=+1.035001713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-847249f34f,}" Jan 14 13:06:55.919389 kubelet[3047]: I0114 13:06:55.919362 3047 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:06:55.919852 kubelet[3047]: I0114 13:06:55.919827 3047 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 14 13:06:55.919931 kubelet[3047]: I0114 13:06:55.919893 3047 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:06:55.923650 kubelet[3047]: W0114 13:06:55.922372 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.923650 kubelet[3047]: E0114 13:06:55.922454 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.923650 kubelet[3047]: E0114 13:06:55.922548 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-847249f34f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="200ms" Jan 14 13:06:55.923650 kubelet[3047]: I0114 13:06:55.923434 3047 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:06:55.925691 kubelet[3047]: I0114 13:06:55.925668 3047 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:06:55.925691 kubelet[3047]: I0114 13:06:55.925689 3047 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:06:55.938142 kubelet[3047]: E0114 13:06:55.938095 3047 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:06:55.965080 kubelet[3047]: I0114 13:06:55.964874 3047 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:06:55.965080 kubelet[3047]: I0114 13:06:55.964906 3047 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:06:55.965080 kubelet[3047]: I0114 13:06:55.965032 3047 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:06:55.968901 kubelet[3047]: I0114 13:06:55.968735 3047 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:06:55.971403 kubelet[3047]: I0114 13:06:55.970453 3047 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 13:06:55.971403 kubelet[3047]: I0114 13:06:55.970491 3047 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:06:55.971403 kubelet[3047]: I0114 13:06:55.970514 3047 kubelet.go:2337] "Starting kubelet main sync loop" Jan 14 13:06:55.971403 kubelet[3047]: E0114 13:06:55.970565 3047 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:06:55.974635 kubelet[3047]: W0114 13:06:55.972144 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.974635 kubelet[3047]: E0114 13:06:55.972180 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:55.976829 kubelet[3047]: I0114 13:06:55.976801 3047 policy_none.go:49] "None policy: Start" Jan 14 13:06:55.977520 kubelet[3047]: I0114 13:06:55.977502 3047 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:06:55.977685 kubelet[3047]: I0114 13:06:55.977571 3047 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:06:55.987535 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 13:06:55.998975 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 13:06:56.003278 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 14 13:06:56.010930 kubelet[3047]: I0114 13:06:56.010486 3047 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:06:56.012343 kubelet[3047]: I0114 13:06:56.010966 3047 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:06:56.012343 kubelet[3047]: I0114 13:06:56.011102 3047 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:06:56.013786 kubelet[3047]: E0114 13:06:56.013748 3047 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-847249f34f\" not found" Jan 14 13:06:56.021308 kubelet[3047]: I0114 13:06:56.021259 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.021701 kubelet[3047]: E0114 13:06:56.021672 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.071157 kubelet[3047]: I0114 13:06:56.071085 3047 topology_manager.go:215] "Topology Admit Handler" podUID="e741db892fbe4fb08a82b5eb2d1b23cf" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.073392 kubelet[3047]: I0114 13:06:56.073212 3047 topology_manager.go:215] "Topology Admit Handler" podUID="1ffcc7a231ea533e869c6ce15da64394" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.075657 kubelet[3047]: I0114 13:06:56.075477 3047 topology_manager.go:215] "Topology Admit Handler" podUID="b73eacf586ef8704afa0538b08e10bff" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.082808 systemd[1]: Created slice kubepods-burstable-pode741db892fbe4fb08a82b5eb2d1b23cf.slice - libcontainer container 
kubepods-burstable-pode741db892fbe4fb08a82b5eb2d1b23cf.slice. Jan 14 13:06:56.103725 systemd[1]: Created slice kubepods-burstable-pod1ffcc7a231ea533e869c6ce15da64394.slice - libcontainer container kubepods-burstable-pod1ffcc7a231ea533e869c6ce15da64394.slice. Jan 14 13:06:56.117725 systemd[1]: Created slice kubepods-burstable-podb73eacf586ef8704afa0538b08e10bff.slice - libcontainer container kubepods-burstable-podb73eacf586ef8704afa0538b08e10bff.slice. Jan 14 13:06:56.121427 kubelet[3047]: I0114 13:06:56.121372 3047 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.121427 kubelet[3047]: I0114 13:06:56.121418 3047 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.121730 kubelet[3047]: I0114 13:06:56.121445 3047 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.121730 kubelet[3047]: I0114 13:06:56.121471 3047 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e741db892fbe4fb08a82b5eb2d1b23cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-847249f34f\" (UID: \"e741db892fbe4fb08a82b5eb2d1b23cf\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.121730 kubelet[3047]: I0114 13:06:56.121492 3047 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.121730 kubelet[3047]: I0114 13:06:56.121512 3047 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.121730 kubelet[3047]: I0114 13:06:56.121532 3047 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b73eacf586ef8704afa0538b08e10bff-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-847249f34f\" (UID: \"b73eacf586ef8704afa0538b08e10bff\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.121852 kubelet[3047]: I0114 13:06:56.121566 3047 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e741db892fbe4fb08a82b5eb2d1b23cf-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-847249f34f\" (UID: \"e741db892fbe4fb08a82b5eb2d1b23cf\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.121852 kubelet[3047]: I0114 13:06:56.121594 3047 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e741db892fbe4fb08a82b5eb2d1b23cf-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-847249f34f\" (UID: \"e741db892fbe4fb08a82b5eb2d1b23cf\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.123762 kubelet[3047]: E0114 13:06:56.123728 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-847249f34f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="400ms" Jan 14 13:06:56.224683 kubelet[3047]: I0114 13:06:56.224648 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.225086 kubelet[3047]: E0114 13:06:56.225044 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.320918 kubelet[3047]: E0114 13:06:56.320796 3047 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-847249f34f.181a90fb709ecd1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-847249f34f,UID:ci-4186.1.0-a-847249f34f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-847249f34f,},FirstTimestamp:2025-01-14 13:06:55.905393951 +0000 UTC m=+1.035001713,LastTimestamp:2025-01-14 13:06:55.905393951 +0000 UTC m=+1.035001713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-847249f34f,}" Jan 14 13:06:56.402479 containerd[1723]: time="2025-01-14T13:06:56.402322456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-847249f34f,Uid:e741db892fbe4fb08a82b5eb2d1b23cf,Namespace:kube-system,Attempt:0,}" Jan 14 13:06:56.407237 containerd[1723]: time="2025-01-14T13:06:56.407183063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-847249f34f,Uid:1ffcc7a231ea533e869c6ce15da64394,Namespace:kube-system,Attempt:0,}" Jan 14 13:06:56.421304 containerd[1723]: time="2025-01-14T13:06:56.421260972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-847249f34f,Uid:b73eacf586ef8704afa0538b08e10bff,Namespace:kube-system,Attempt:0,}" Jan 14 13:06:56.524591 kubelet[3047]: E0114 13:06:56.524536 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-847249f34f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="800ms" Jan 14 13:06:56.627245 kubelet[3047]: I0114 13:06:56.627198 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:56.627620 kubelet[3047]: E0114 13:06:56.627571 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:57.012828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314198842.mount: Deactivated successfully. 
Jan 14 13:06:57.033594 kubelet[3047]: W0114 13:06:57.033552 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:57.033594 kubelet[3047]: E0114 13:06:57.033596 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:57.056420 containerd[1723]: time="2025-01-14T13:06:57.056356908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:06:57.070595 containerd[1723]: time="2025-01-14T13:06:57.070404216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 14 13:06:57.076112 containerd[1723]: time="2025-01-14T13:06:57.076063241Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:06:57.084986 containerd[1723]: time="2025-01-14T13:06:57.084933935Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:06:57.093275 containerd[1723]: time="2025-01-14T13:06:57.092890410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:06:57.098753 containerd[1723]: time="2025-01-14T13:06:57.098701337Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:06:57.103423 containerd[1723]: time="2025-01-14T13:06:57.103215336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:06:57.103579 kubelet[3047]: W0114 13:06:57.103297 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-847249f34f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:57.103579 kubelet[3047]: E0114 13:06:57.103377 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-847249f34f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:57.104157 containerd[1723]: time="2025-01-14T13:06:57.104119156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 701.663997ms" Jan 14 13:06:57.108336 containerd[1723]: time="2025-01-14T13:06:57.108241447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:06:57.114905 containerd[1723]: time="2025-01-14T13:06:57.114859992Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 707.485525ms" Jan 14 13:06:57.166738 containerd[1723]: time="2025-01-14T13:06:57.166681029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 745.308155ms" Jan 14 13:06:57.326169 kubelet[3047]: E0114 13:06:57.326026 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-847249f34f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="1.6s" Jan 14 13:06:57.388292 kubelet[3047]: W0114 13:06:57.388240 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:57.388292 kubelet[3047]: E0114 13:06:57.388295 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:57.404008 kubelet[3047]: W0114 13:06:57.403929 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:57.404008 kubelet[3047]: E0114 13:06:57.404019 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:57.430301 kubelet[3047]: I0114 13:06:57.430233 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:57.430885 kubelet[3047]: E0114 13:06:57.430839 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:58.082941 kubelet[3047]: E0114 13:06:58.082900 3047 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:58.927215 kubelet[3047]: E0114 13:06:58.927119 3047 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-847249f34f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="3.2s" Jan 14 13:06:59.033731 kubelet[3047]: I0114 13:06:59.033694 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:59.034102 kubelet[3047]: E0114 13:06:59.034065 3047 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-4186.1.0-a-847249f34f" Jan 14 13:06:59.156846 kubelet[3047]: W0114 13:06:59.156526 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: 
connection refused Jan 14 13:06:59.156846 kubelet[3047]: E0114 13:06:59.156714 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:59.211910 containerd[1723]: time="2025-01-14T13:06:59.210725714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:59.211910 containerd[1723]: time="2025-01-14T13:06:59.210798915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:59.211910 containerd[1723]: time="2025-01-14T13:06:59.210820316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:59.211910 containerd[1723]: time="2025-01-14T13:06:59.210906218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:59.213324 containerd[1723]: time="2025-01-14T13:06:59.209944697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:59.213324 containerd[1723]: time="2025-01-14T13:06:59.213265970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:59.213324 containerd[1723]: time="2025-01-14T13:06:59.213286170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:59.213898 containerd[1723]: time="2025-01-14T13:06:59.213686479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:59.233240 containerd[1723]: time="2025-01-14T13:06:59.232939102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:06:59.233240 containerd[1723]: time="2025-01-14T13:06:59.233006803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:06:59.233240 containerd[1723]: time="2025-01-14T13:06:59.233029604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:59.233240 containerd[1723]: time="2025-01-14T13:06:59.233121806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:06:59.267717 systemd[1]: run-containerd-runc-k8s.io-c898737b8fc13ef22199052c0c0fbcf5ded8f46d6933a63e70beaac217aa02e8-runc.k1XYYL.mount: Deactivated successfully. Jan 14 13:06:59.278820 systemd[1]: Started cri-containerd-26a16c60e5b5e02b9c952c997a05fb0950f2008132fc6fe729cf3e9d2ce5fabb.scope - libcontainer container 26a16c60e5b5e02b9c952c997a05fb0950f2008132fc6fe729cf3e9d2ce5fabb. Jan 14 13:06:59.280911 systemd[1]: Started cri-containerd-64a49c9c923691a827c9d9d3d7277d26bbce6be1297d305bf7f2874df0c7ec85.scope - libcontainer container 64a49c9c923691a827c9d9d3d7277d26bbce6be1297d305bf7f2874df0c7ec85. Jan 14 13:06:59.282839 systemd[1]: Started cri-containerd-c898737b8fc13ef22199052c0c0fbcf5ded8f46d6933a63e70beaac217aa02e8.scope - libcontainer container c898737b8fc13ef22199052c0c0fbcf5ded8f46d6933a63e70beaac217aa02e8. 
Jan 14 13:06:59.376409 containerd[1723]: time="2025-01-14T13:06:59.376257851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-847249f34f,Uid:e741db892fbe4fb08a82b5eb2d1b23cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c898737b8fc13ef22199052c0c0fbcf5ded8f46d6933a63e70beaac217aa02e8\"" Jan 14 13:06:59.381147 containerd[1723]: time="2025-01-14T13:06:59.381021056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-847249f34f,Uid:1ffcc7a231ea533e869c6ce15da64394,Namespace:kube-system,Attempt:0,} returns sandbox id \"26a16c60e5b5e02b9c952c997a05fb0950f2008132fc6fe729cf3e9d2ce5fabb\"" Jan 14 13:06:59.386418 containerd[1723]: time="2025-01-14T13:06:59.386281172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-847249f34f,Uid:b73eacf586ef8704afa0538b08e10bff,Namespace:kube-system,Attempt:0,} returns sandbox id \"64a49c9c923691a827c9d9d3d7277d26bbce6be1297d305bf7f2874df0c7ec85\"" Jan 14 13:06:59.387847 containerd[1723]: time="2025-01-14T13:06:59.387710503Z" level=info msg="CreateContainer within sandbox \"c898737b8fc13ef22199052c0c0fbcf5ded8f46d6933a63e70beaac217aa02e8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:06:59.388631 containerd[1723]: time="2025-01-14T13:06:59.387958109Z" level=info msg="CreateContainer within sandbox \"26a16c60e5b5e02b9c952c997a05fb0950f2008132fc6fe729cf3e9d2ce5fabb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:06:59.390967 containerd[1723]: time="2025-01-14T13:06:59.390939174Z" level=info msg="CreateContainer within sandbox \"64a49c9c923691a827c9d9d3d7277d26bbce6be1297d305bf7f2874df0c7ec85\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:06:59.432988 kubelet[3047]: W0114 13:06:59.432891 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-847249f34f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:59.432988 kubelet[3047]: E0114 13:06:59.432964 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-847249f34f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:59.589682 kubelet[3047]: W0114 13:06:59.589483 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:59.589682 kubelet[3047]: E0114 13:06:59.589566 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:59.919883 kubelet[3047]: W0114 13:06:59.919548 3047 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:06:59.919883 kubelet[3047]: E0114 13:06:59.919660 3047 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Jan 14 13:07:01.613914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775224146.mount: Deactivated successfully. 
Jan 14 13:07:01.658122 containerd[1723]: time="2025-01-14T13:07:01.658068395Z" level=info msg="CreateContainer within sandbox \"c898737b8fc13ef22199052c0c0fbcf5ded8f46d6933a63e70beaac217aa02e8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c038be95cc1b9f55a975aeddf6e56c1f5f610a0e5f4f164cbd462f535bb36c8\"" Jan 14 13:07:01.658855 containerd[1723]: time="2025-01-14T13:07:01.658823012Z" level=info msg="StartContainer for \"8c038be95cc1b9f55a975aeddf6e56c1f5f610a0e5f4f164cbd462f535bb36c8\"" Jan 14 13:07:01.664885 containerd[1723]: time="2025-01-14T13:07:01.664824444Z" level=info msg="CreateContainer within sandbox \"26a16c60e5b5e02b9c952c997a05fb0950f2008132fc6fe729cf3e9d2ce5fabb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40c4a3a974ad53ff86d203e96ee027c9ac52541551e82a6e554eafaf8315b03f\"" Jan 14 13:07:01.669351 containerd[1723]: time="2025-01-14T13:07:01.668683228Z" level=info msg="StartContainer for \"40c4a3a974ad53ff86d203e96ee027c9ac52541551e82a6e554eafaf8315b03f\"" Jan 14 13:07:01.678348 containerd[1723]: time="2025-01-14T13:07:01.678292740Z" level=info msg="CreateContainer within sandbox \"64a49c9c923691a827c9d9d3d7277d26bbce6be1297d305bf7f2874df0c7ec85\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72b092b4ea925401e9b1137c81ddf3def99a782aed4289f26ab74c3f5dbd25d0\"" Jan 14 13:07:01.679259 containerd[1723]: time="2025-01-14T13:07:01.679223160Z" level=info msg="StartContainer for \"72b092b4ea925401e9b1137c81ddf3def99a782aed4289f26ab74c3f5dbd25d0\"" Jan 14 13:07:01.694794 systemd[1]: Started cri-containerd-8c038be95cc1b9f55a975aeddf6e56c1f5f610a0e5f4f164cbd462f535bb36c8.scope - libcontainer container 8c038be95cc1b9f55a975aeddf6e56c1f5f610a0e5f4f164cbd462f535bb36c8. 
Jan 14 13:07:01.722841 systemd[1]: Started cri-containerd-40c4a3a974ad53ff86d203e96ee027c9ac52541551e82a6e554eafaf8315b03f.scope - libcontainer container 40c4a3a974ad53ff86d203e96ee027c9ac52541551e82a6e554eafaf8315b03f. Jan 14 13:07:01.736905 systemd[1]: Started cri-containerd-72b092b4ea925401e9b1137c81ddf3def99a782aed4289f26ab74c3f5dbd25d0.scope - libcontainer container 72b092b4ea925401e9b1137c81ddf3def99a782aed4289f26ab74c3f5dbd25d0. Jan 14 13:07:01.803091 containerd[1723]: time="2025-01-14T13:07:01.803020681Z" level=info msg="StartContainer for \"8c038be95cc1b9f55a975aeddf6e56c1f5f610a0e5f4f164cbd462f535bb36c8\" returns successfully" Jan 14 13:07:01.824733 containerd[1723]: time="2025-01-14T13:07:01.824282248Z" level=info msg="StartContainer for \"40c4a3a974ad53ff86d203e96ee027c9ac52541551e82a6e554eafaf8315b03f\" returns successfully" Jan 14 13:07:01.839628 containerd[1723]: time="2025-01-14T13:07:01.838829867Z" level=info msg="StartContainer for \"72b092b4ea925401e9b1137c81ddf3def99a782aed4289f26ab74c3f5dbd25d0\" returns successfully" Jan 14 13:07:02.237346 kubelet[3047]: I0114 13:07:02.236750 3047 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:07:03.886092 kubelet[3047]: E0114 13:07:03.886039 3047 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-847249f34f\" not found" node="ci-4186.1.0-a-847249f34f" Jan 14 13:07:03.903851 kubelet[3047]: I0114 13:07:03.903807 3047 apiserver.go:52] "Watching apiserver" Jan 14 13:07:03.920216 kubelet[3047]: I0114 13:07:03.920172 3047 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 14 13:07:03.970254 kubelet[3047]: I0114 13:07:03.970198 3047 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:07:04.053168 kubelet[3047]: E0114 13:07:04.052033 3047 kubelet.go:1928] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-4186.1.0-a-847249f34f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:07:05.925205 systemd[1]: Reloading requested from client PID 3324 ('systemctl') (unit session-9.scope)... Jan 14 13:07:05.925222 systemd[1]: Reloading... Jan 14 13:07:06.085637 zram_generator::config[3367]: No configuration found. Jan 14 13:07:06.253893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:07:06.268008 kubelet[3047]: W0114 13:07:06.267933 3047 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:07:06.377347 systemd[1]: Reloading finished in 451 ms. Jan 14 13:07:06.432309 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:07:06.445486 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:07:06.445774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:07:06.452150 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:07:07.068372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:07:07.080009 (kubelet)[3431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:07:07.137958 kubelet[3431]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:07:07.137958 kubelet[3431]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 14 13:07:07.137958 kubelet[3431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:07:07.137958 kubelet[3431]: I0114 13:07:07.137658 3431 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:07:07.143814 kubelet[3431]: I0114 13:07:07.143777 3431 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 14 13:07:07.143961 kubelet[3431]: I0114 13:07:07.143835 3431 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:07:07.144131 kubelet[3431]: I0114 13:07:07.144099 3431 server.go:927] "Client rotation is on, will bootstrap in background" Jan 14 13:07:07.145438 kubelet[3431]: I0114 13:07:07.145411 3431 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 13:07:07.147748 kubelet[3431]: I0114 13:07:07.147719 3431 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:07:07.160816 kubelet[3431]: I0114 13:07:07.160779 3431 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:07:07.161083 kubelet[3431]: I0114 13:07:07.161037 3431 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:07:07.161635 kubelet[3431]: I0114 13:07:07.161079 3431 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-847249f34f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:07:07.161929 kubelet[3431]: I0114 13:07:07.161787 3431 topology_manager.go:138] "Creating topology manager with none policy" Jan 
14 13:07:07.161929 kubelet[3431]: I0114 13:07:07.161810 3431 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:07:07.162160 kubelet[3431]: I0114 13:07:07.162048 3431 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:07:07.162431 kubelet[3431]: I0114 13:07:07.162326 3431 kubelet.go:400] "Attempting to sync node with API server" Jan 14 13:07:07.162431 kubelet[3431]: I0114 13:07:07.162358 3431 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:07:07.162431 kubelet[3431]: I0114 13:07:07.162385 3431 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:07:07.162431 kubelet[3431]: I0114 13:07:07.162404 3431 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:07:07.166172 kubelet[3431]: I0114 13:07:07.166146 3431 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:07:07.166415 kubelet[3431]: I0114 13:07:07.166355 3431 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:07:07.167123 kubelet[3431]: I0114 13:07:07.166864 3431 server.go:1264] "Started kubelet" Jan 14 13:07:07.171164 kubelet[3431]: I0114 13:07:07.171138 3431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:07:07.183022 kubelet[3431]: I0114 13:07:07.182960 3431 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:07:07.187523 kubelet[3431]: I0114 13:07:07.187386 3431 server.go:455] "Adding debug handlers to kubelet server" Jan 14 13:07:07.190410 kubelet[3431]: I0114 13:07:07.190387 3431 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:07:07.193233 kubelet[3431]: I0114 13:07:07.192960 3431 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:07:07.193506 kubelet[3431]: I0114 13:07:07.193492 3431 server.go:227] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:07:07.194748 kubelet[3431]: I0114 13:07:07.194656 3431 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 14 13:07:07.195032 kubelet[3431]: I0114 13:07:07.195020 3431 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:07:07.200277 kubelet[3431]: I0114 13:07:07.200131 3431 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:07:07.202574 kubelet[3431]: I0114 13:07:07.202369 3431 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:07:07.203769 kubelet[3431]: I0114 13:07:07.203200 3431 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:07:07.203948 kubelet[3431]: I0114 13:07:07.203929 3431 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:07:07.204788 kubelet[3431]: I0114 13:07:07.204774 3431 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:07:07.204884 kubelet[3431]: I0114 13:07:07.204873 3431 kubelet.go:2337] "Starting kubelet main sync loop" Jan 14 13:07:07.205000 kubelet[3431]: E0114 13:07:07.204980 3431 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:07:07.219627 kubelet[3431]: I0114 13:07:07.218048 3431 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:07:07.241826 sudo[3459]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 14 13:07:07.242893 sudo[3459]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 14 13:07:07.243554 kubelet[3431]: E0114 13:07:07.243520 3431 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:07:07.300090 kubelet[3431]: I0114 13:07:07.299496 3431 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.305363 kubelet[3431]: E0114 13:07:07.305096 3431 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 13:07:07.322972 kubelet[3431]: I0114 13:07:07.322850 3431 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.322972 kubelet[3431]: I0114 13:07:07.322936 3431 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.325735 kubelet[3431]: I0114 13:07:07.325704 3431 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:07:07.325735 kubelet[3431]: I0114 13:07:07.325727 3431 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:07:07.325906 kubelet[3431]: I0114 13:07:07.325749 3431 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:07:07.326365 kubelet[3431]: I0114 13:07:07.325948 3431 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 13:07:07.326365 kubelet[3431]: I0114 13:07:07.325964 3431 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 13:07:07.326365 kubelet[3431]: I0114 13:07:07.325988 3431 policy_none.go:49] "None policy: Start" Jan 14 13:07:07.328172 kubelet[3431]: I0114 13:07:07.328085 3431 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:07:07.328172 kubelet[3431]: I0114 13:07:07.328122 3431 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:07:07.328670 kubelet[3431]: I0114 13:07:07.328411 3431 state_mem.go:75] "Updated machine memory state" Jan 14 13:07:07.343349 kubelet[3431]: I0114 13:07:07.343316 3431 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" Jan 14 13:07:07.343759 kubelet[3431]: I0114 13:07:07.343521 3431 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:07:07.344952 kubelet[3431]: I0114 13:07:07.344698 3431 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:07:07.506627 kubelet[3431]: I0114 13:07:07.505632 3431 topology_manager.go:215] "Topology Admit Handler" podUID="e741db892fbe4fb08a82b5eb2d1b23cf" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.506627 kubelet[3431]: I0114 13:07:07.505913 3431 topology_manager.go:215] "Topology Admit Handler" podUID="1ffcc7a231ea533e869c6ce15da64394" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.506627 kubelet[3431]: I0114 13:07:07.505986 3431 topology_manager.go:215] "Topology Admit Handler" podUID="b73eacf586ef8704afa0538b08e10bff" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.524631 kubelet[3431]: W0114 13:07:07.522972 3431 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:07:07.524631 kubelet[3431]: E0114 13:07:07.523071 3431 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-847249f34f\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.524858 kubelet[3431]: W0114 13:07:07.524773 3431 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:07:07.525563 kubelet[3431]: W0114 13:07:07.525513 3431 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:07:07.596046 kubelet[3431]: I0114 13:07:07.595721 3431 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e741db892fbe4fb08a82b5eb2d1b23cf-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-847249f34f\" (UID: \"e741db892fbe4fb08a82b5eb2d1b23cf\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.596046 kubelet[3431]: I0114 13:07:07.595957 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e741db892fbe4fb08a82b5eb2d1b23cf-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-847249f34f\" (UID: \"e741db892fbe4fb08a82b5eb2d1b23cf\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.597236 kubelet[3431]: I0114 13:07:07.596512 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e741db892fbe4fb08a82b5eb2d1b23cf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-847249f34f\" (UID: \"e741db892fbe4fb08a82b5eb2d1b23cf\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.597236 kubelet[3431]: I0114 13:07:07.596582 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.597236 kubelet[3431]: I0114 13:07:07.597116 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " 
pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.597236 kubelet[3431]: I0114 13:07:07.597181 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.597754 kubelet[3431]: I0114 13:07:07.597217 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.597754 kubelet[3431]: I0114 13:07:07.597533 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ffcc7a231ea533e869c6ce15da64394-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-847249f34f\" (UID: \"1ffcc7a231ea533e869c6ce15da64394\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.597754 kubelet[3431]: I0114 13:07:07.597665 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b73eacf586ef8704afa0538b08e10bff-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-847249f34f\" (UID: \"b73eacf586ef8704afa0538b08e10bff\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-847249f34f" Jan 14 13:07:07.814123 sudo[3459]: pam_unix(sudo:session): session closed for user root Jan 14 13:07:08.164688 kubelet[3431]: I0114 13:07:08.164636 3431 apiserver.go:52] "Watching apiserver" Jan 14 
13:07:08.195248 kubelet[3431]: I0114 13:07:08.195151 3431 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 14 13:07:08.358317 kubelet[3431]: I0114 13:07:08.358089 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-847249f34f" podStartSLOduration=1.358064129 podStartE2EDuration="1.358064129s" podCreationTimestamp="2025-01-14 13:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:07:08.335674747 +0000 UTC m=+1.250691448" watchObservedRunningTime="2025-01-14 13:07:08.358064129 +0000 UTC m=+1.273080730" Jan 14 13:07:08.376922 kubelet[3431]: I0114 13:07:08.376561 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-847249f34f" podStartSLOduration=2.376534227 podStartE2EDuration="2.376534227s" podCreationTimestamp="2025-01-14 13:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:07:08.358977449 +0000 UTC m=+1.273994150" watchObservedRunningTime="2025-01-14 13:07:08.376534227 +0000 UTC m=+1.291550928" Jan 14 13:07:08.402008 kubelet[3431]: I0114 13:07:08.401799 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-847249f34f" podStartSLOduration=1.401777071 podStartE2EDuration="1.401777071s" podCreationTimestamp="2025-01-14 13:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:07:08.378013259 +0000 UTC m=+1.293029960" watchObservedRunningTime="2025-01-14 13:07:08.401777071 +0000 UTC m=+1.316793672" Jan 14 13:07:09.129551 sudo[2445]: pam_unix(sudo:session): session closed for user root Jan 14 13:07:09.233386 
sshd[2444]: Connection closed by 10.200.16.10 port 38700 Jan 14 13:07:09.234139 sshd-session[2427]: pam_unix(sshd:session): session closed for user core Jan 14 13:07:09.237331 systemd[1]: sshd@6-10.200.8.4:22-10.200.16.10:38700.service: Deactivated successfully. Jan 14 13:07:09.240372 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 13:07:09.240627 systemd[1]: session-9.scope: Consumed 4.484s CPU time, 188.8M memory peak, 0B memory swap peak. Jan 14 13:07:09.242356 systemd-logind[1701]: Session 9 logged out. Waiting for processes to exit. Jan 14 13:07:09.243699 systemd-logind[1701]: Removed session 9. Jan 14 13:07:19.837256 kubelet[3431]: I0114 13:07:19.837197 3431 topology_manager.go:215] "Topology Admit Handler" podUID="b75370d0-7607-41b3-ad7a-83a3c37d865e" podNamespace="kube-system" podName="kube-proxy-cm96n" Jan 14 13:07:19.846524 systemd[1]: Created slice kubepods-besteffort-podb75370d0_7607_41b3_ad7a_83a3c37d865e.slice - libcontainer container kubepods-besteffort-podb75370d0_7607_41b3_ad7a_83a3c37d865e.slice. 
Jan 14 13:07:19.860539 kubelet[3431]: I0114 13:07:19.860492 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b75370d0-7607-41b3-ad7a-83a3c37d865e-xtables-lock\") pod \"kube-proxy-cm96n\" (UID: \"b75370d0-7607-41b3-ad7a-83a3c37d865e\") " pod="kube-system/kube-proxy-cm96n" Jan 14 13:07:19.860539 kubelet[3431]: I0114 13:07:19.860541 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b75370d0-7607-41b3-ad7a-83a3c37d865e-lib-modules\") pod \"kube-proxy-cm96n\" (UID: \"b75370d0-7607-41b3-ad7a-83a3c37d865e\") " pod="kube-system/kube-proxy-cm96n" Jan 14 13:07:19.860779 kubelet[3431]: I0114 13:07:19.860562 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgwmp\" (UniqueName: \"kubernetes.io/projected/b75370d0-7607-41b3-ad7a-83a3c37d865e-kube-api-access-dgwmp\") pod \"kube-proxy-cm96n\" (UID: \"b75370d0-7607-41b3-ad7a-83a3c37d865e\") " pod="kube-system/kube-proxy-cm96n" Jan 14 13:07:19.860779 kubelet[3431]: I0114 13:07:19.860588 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b75370d0-7607-41b3-ad7a-83a3c37d865e-kube-proxy\") pod \"kube-proxy-cm96n\" (UID: \"b75370d0-7607-41b3-ad7a-83a3c37d865e\") " pod="kube-system/kube-proxy-cm96n" Jan 14 13:07:19.863069 kubelet[3431]: I0114 13:07:19.863025 3431 topology_manager.go:215] "Topology Admit Handler" podUID="a8547c91-ee21-4a86-a46f-841e1f31b465" podNamespace="kube-system" podName="cilium-4dgxf" Jan 14 13:07:19.871473 kubelet[3431]: I0114 13:07:19.871258 3431 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 13:07:19.872852 systemd[1]: Created slice 
kubepods-burstable-poda8547c91_ee21_4a86_a46f_841e1f31b465.slice - libcontainer container kubepods-burstable-poda8547c91_ee21_4a86_a46f_841e1f31b465.slice. Jan 14 13:07:19.874647 containerd[1723]: time="2025-01-14T13:07:19.873201830Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 13:07:19.875025 kubelet[3431]: I0114 13:07:19.874767 3431 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 13:07:20.062058 kubelet[3431]: I0114 13:07:20.061979 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-run\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062058 kubelet[3431]: I0114 13:07:20.062028 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-lib-modules\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062058 kubelet[3431]: I0114 13:07:20.062056 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-cgroup\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062417 kubelet[3431]: I0114 13:07:20.062077 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cni-path\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062417 
kubelet[3431]: I0114 13:07:20.062099 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8547c91-ee21-4a86-a46f-841e1f31b465-hubble-tls\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062417 kubelet[3431]: I0114 13:07:20.062126 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-host-proc-sys-net\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062417 kubelet[3431]: I0114 13:07:20.062186 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-hostproc\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062417 kubelet[3431]: I0114 13:07:20.062220 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-config-path\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062417 kubelet[3431]: I0114 13:07:20.062245 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-bpf-maps\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062621 kubelet[3431]: I0114 13:07:20.062270 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-st77n\" (UniqueName: \"kubernetes.io/projected/a8547c91-ee21-4a86-a46f-841e1f31b465-kube-api-access-st77n\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062621 kubelet[3431]: I0114 13:07:20.062291 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-xtables-lock\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062621 kubelet[3431]: I0114 13:07:20.062317 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-etc-cni-netd\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062621 kubelet[3431]: I0114 13:07:20.062341 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8547c91-ee21-4a86-a46f-841e1f31b465-clustermesh-secrets\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.062621 kubelet[3431]: I0114 13:07:20.062373 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-host-proc-sys-kernel\") pod \"cilium-4dgxf\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") " pod="kube-system/cilium-4dgxf" Jan 14 13:07:20.157577 containerd[1723]: time="2025-01-14T13:07:20.157437240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cm96n,Uid:b75370d0-7607-41b3-ad7a-83a3c37d865e,Namespace:kube-system,Attempt:0,}" Jan 14 13:07:20.229687 
containerd[1723]: time="2025-01-14T13:07:20.228511193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:20.229687 containerd[1723]: time="2025-01-14T13:07:20.228593795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:20.229687 containerd[1723]: time="2025-01-14T13:07:20.228651696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:20.229687 containerd[1723]: time="2025-01-14T13:07:20.228756898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:20.261878 systemd[1]: Started cri-containerd-7d7bfffea0d5fcef339964749a7bc4c01b1f6b3a2e99bbce95a34d3dd45154da.scope - libcontainer container 7d7bfffea0d5fcef339964749a7bc4c01b1f6b3a2e99bbce95a34d3dd45154da. Jan 14 13:07:20.293329 kubelet[3431]: I0114 13:07:20.293212 3431 topology_manager.go:215] "Topology Admit Handler" podUID="e7261f0c-db2b-4f68-9839-43bd04863e06" podNamespace="kube-system" podName="cilium-operator-599987898-zh66p" Jan 14 13:07:20.307422 systemd[1]: Created slice kubepods-besteffort-pode7261f0c_db2b_4f68_9839_43bd04863e06.slice - libcontainer container kubepods-besteffort-pode7261f0c_db2b_4f68_9839_43bd04863e06.slice. 
Jan 14 13:07:20.337840 containerd[1723]: time="2025-01-14T13:07:20.337791080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cm96n,Uid:b75370d0-7607-41b3-ad7a-83a3c37d865e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d7bfffea0d5fcef339964749a7bc4c01b1f6b3a2e99bbce95a34d3dd45154da\"" Jan 14 13:07:20.342515 containerd[1723]: time="2025-01-14T13:07:20.342467983Z" level=info msg="CreateContainer within sandbox \"7d7bfffea0d5fcef339964749a7bc4c01b1f6b3a2e99bbce95a34d3dd45154da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:07:20.405987 containerd[1723]: time="2025-01-14T13:07:20.405933569Z" level=info msg="CreateContainer within sandbox \"7d7bfffea0d5fcef339964749a7bc4c01b1f6b3a2e99bbce95a34d3dd45154da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1bf6d19a582ed9b62e9e1df392a6824b8ee67cb4ce3dc9827b2c21df1e179048\"" Jan 14 13:07:20.406778 containerd[1723]: time="2025-01-14T13:07:20.406703586Z" level=info msg="StartContainer for \"1bf6d19a582ed9b62e9e1df392a6824b8ee67cb4ce3dc9827b2c21df1e179048\"" Jan 14 13:07:20.441855 systemd[1]: Started cri-containerd-1bf6d19a582ed9b62e9e1df392a6824b8ee67cb4ce3dc9827b2c21df1e179048.scope - libcontainer container 1bf6d19a582ed9b62e9e1df392a6824b8ee67cb4ce3dc9827b2c21df1e179048. 
Jan 14 13:07:20.465802 kubelet[3431]: I0114 13:07:20.465583 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7261f0c-db2b-4f68-9839-43bd04863e06-cilium-config-path\") pod \"cilium-operator-599987898-zh66p\" (UID: \"e7261f0c-db2b-4f68-9839-43bd04863e06\") " pod="kube-system/cilium-operator-599987898-zh66p" Jan 14 13:07:20.465802 kubelet[3431]: I0114 13:07:20.465719 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpcj5\" (UniqueName: \"kubernetes.io/projected/e7261f0c-db2b-4f68-9839-43bd04863e06-kube-api-access-kpcj5\") pod \"cilium-operator-599987898-zh66p\" (UID: \"e7261f0c-db2b-4f68-9839-43bd04863e06\") " pod="kube-system/cilium-operator-599987898-zh66p" Jan 14 13:07:20.480637 containerd[1723]: time="2025-01-14T13:07:20.479224971Z" level=info msg="StartContainer for \"1bf6d19a582ed9b62e9e1df392a6824b8ee67cb4ce3dc9827b2c21df1e179048\" returns successfully" Jan 14 13:07:20.483388 containerd[1723]: time="2025-01-14T13:07:20.481614523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dgxf,Uid:a8547c91-ee21-4a86-a46f-841e1f31b465,Namespace:kube-system,Attempt:0,}" Jan 14 13:07:20.563139 containerd[1723]: time="2025-01-14T13:07:20.562763196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:20.563139 containerd[1723]: time="2025-01-14T13:07:20.562840098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:20.563139 containerd[1723]: time="2025-01-14T13:07:20.562862498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:20.563139 containerd[1723]: time="2025-01-14T13:07:20.562974200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:20.597868 systemd[1]: Started cri-containerd-b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2.scope - libcontainer container b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2. Jan 14 13:07:20.613105 containerd[1723]: time="2025-01-14T13:07:20.613055795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zh66p,Uid:e7261f0c-db2b-4f68-9839-43bd04863e06,Namespace:kube-system,Attempt:0,}" Jan 14 13:07:20.625268 containerd[1723]: time="2025-01-14T13:07:20.625228461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dgxf,Uid:a8547c91-ee21-4a86-a46f-841e1f31b465,Namespace:kube-system,Attempt:0,} returns sandbox id \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\"" Jan 14 13:07:20.627491 containerd[1723]: time="2025-01-14T13:07:20.627235705Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 14 13:07:20.676103 containerd[1723]: time="2025-01-14T13:07:20.675999370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:20.676103 containerd[1723]: time="2025-01-14T13:07:20.676049271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:20.676103 containerd[1723]: time="2025-01-14T13:07:20.676062671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:20.676364 containerd[1723]: time="2025-01-14T13:07:20.676249975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:20.701850 systemd[1]: Started cri-containerd-bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c.scope - libcontainer container bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c. Jan 14 13:07:20.773011 containerd[1723]: time="2025-01-14T13:07:20.772881487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zh66p,Uid:e7261f0c-db2b-4f68-9839-43bd04863e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c\"" Jan 14 13:07:21.325977 kubelet[3431]: I0114 13:07:21.325887 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cm96n" podStartSLOduration=2.325863269 podStartE2EDuration="2.325863269s" podCreationTimestamp="2025-01-14 13:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:07:21.324942349 +0000 UTC m=+14.239958950" watchObservedRunningTime="2025-01-14 13:07:21.325863269 +0000 UTC m=+14.240879970" Jan 14 13:07:26.457586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242295758.mount: Deactivated successfully. 
Jan 14 13:07:30.431026 containerd[1723]: time="2025-01-14T13:07:30.430960651Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:30.434932 containerd[1723]: time="2025-01-14T13:07:30.434852534Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734091" Jan 14 13:07:30.440492 containerd[1723]: time="2025-01-14T13:07:30.440419153Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:30.442680 containerd[1723]: time="2025-01-14T13:07:30.442024187Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.814744982s" Jan 14 13:07:30.442680 containerd[1723]: time="2025-01-14T13:07:30.442065288Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 14 13:07:30.443770 containerd[1723]: time="2025-01-14T13:07:30.443739224Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 14 13:07:30.445153 containerd[1723]: time="2025-01-14T13:07:30.445117253Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 13:07:30.503507 containerd[1723]: time="2025-01-14T13:07:30.503454200Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\"" Jan 14 13:07:30.505134 containerd[1723]: time="2025-01-14T13:07:30.504084913Z" level=info msg="StartContainer for \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\"" Jan 14 13:07:30.537773 systemd[1]: Started cri-containerd-92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5.scope - libcontainer container 92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5. Jan 14 13:07:30.568764 containerd[1723]: time="2025-01-14T13:07:30.568708594Z" level=info msg="StartContainer for \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\" returns successfully" Jan 14 13:07:30.577023 systemd[1]: cri-containerd-92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5.scope: Deactivated successfully. Jan 14 13:07:30.600311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5-rootfs.mount: Deactivated successfully. 
Jan 14 13:07:34.253836 containerd[1723]: time="2025-01-14T13:07:34.253767230Z" level=info msg="shim disconnected" id=92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5 namespace=k8s.io Jan 14 13:07:34.253836 containerd[1723]: time="2025-01-14T13:07:34.253826631Z" level=warning msg="cleaning up after shim disconnected" id=92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5 namespace=k8s.io Jan 14 13:07:34.253836 containerd[1723]: time="2025-01-14T13:07:34.253839631Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:07:34.340631 containerd[1723]: time="2025-01-14T13:07:34.339081053Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 13:07:34.384487 containerd[1723]: time="2025-01-14T13:07:34.384425621Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\"" Jan 14 13:07:34.385328 containerd[1723]: time="2025-01-14T13:07:34.385012634Z" level=info msg="StartContainer for \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\"" Jan 14 13:07:34.420955 systemd[1]: Started cri-containerd-daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0.scope - libcontainer container daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0. Jan 14 13:07:34.454182 containerd[1723]: time="2025-01-14T13:07:34.454125611Z" level=info msg="StartContainer for \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\" returns successfully" Jan 14 13:07:34.463572 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:07:34.464042 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 13:07:34.464140 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:07:34.474789 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:07:34.475081 systemd[1]: cri-containerd-daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0.scope: Deactivated successfully. Jan 14 13:07:34.494775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0-rootfs.mount: Deactivated successfully. Jan 14 13:07:34.497107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:07:34.508003 containerd[1723]: time="2025-01-14T13:07:34.507180644Z" level=info msg="shim disconnected" id=daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0 namespace=k8s.io Jan 14 13:07:34.508003 containerd[1723]: time="2025-01-14T13:07:34.507252646Z" level=warning msg="cleaning up after shim disconnected" id=daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0 namespace=k8s.io Jan 14 13:07:34.508003 containerd[1723]: time="2025-01-14T13:07:34.507263646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:07:35.346802 containerd[1723]: time="2025-01-14T13:07:35.346117669Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 13:07:35.402397 containerd[1723]: time="2025-01-14T13:07:35.402151066Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\"" Jan 14 13:07:35.402976 containerd[1723]: time="2025-01-14T13:07:35.402826681Z" level=info msg="StartContainer for \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\"" Jan 14 13:07:35.455797 systemd[1]: 
Started cri-containerd-25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19.scope - libcontainer container 25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19. Jan 14 13:07:35.516722 systemd[1]: cri-containerd-25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19.scope: Deactivated successfully. Jan 14 13:07:35.521283 containerd[1723]: time="2025-01-14T13:07:35.521240411Z" level=info msg="StartContainer for \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\" returns successfully" Jan 14 13:07:35.751870 containerd[1723]: time="2025-01-14T13:07:35.751650634Z" level=info msg="shim disconnected" id=25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19 namespace=k8s.io Jan 14 13:07:35.751870 containerd[1723]: time="2025-01-14T13:07:35.751721435Z" level=warning msg="cleaning up after shim disconnected" id=25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19 namespace=k8s.io Jan 14 13:07:35.751870 containerd[1723]: time="2025-01-14T13:07:35.751735836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:07:36.351103 containerd[1723]: time="2025-01-14T13:07:36.350926438Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 13:07:36.365949 systemd[1]: run-containerd-runc-k8s.io-25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19-runc.rxJahM.mount: Deactivated successfully. Jan 14 13:07:36.366087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19-rootfs.mount: Deactivated successfully. 
Jan 14 13:07:36.409166 containerd[1723]: time="2025-01-14T13:07:36.409112681Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\"" Jan 14 13:07:36.410897 containerd[1723]: time="2025-01-14T13:07:36.409800296Z" level=info msg="StartContainer for \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\"" Jan 14 13:07:36.444768 systemd[1]: Started cri-containerd-4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13.scope - libcontainer container 4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13. Jan 14 13:07:36.466350 systemd[1]: cri-containerd-4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13.scope: Deactivated successfully. Jan 14 13:07:36.472522 containerd[1723]: time="2025-01-14T13:07:36.472479135Z" level=info msg="StartContainer for \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\" returns successfully" Jan 14 13:07:36.701278 containerd[1723]: time="2025-01-14T13:07:36.701206322Z" level=info msg="shim disconnected" id=4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13 namespace=k8s.io Jan 14 13:07:36.701591 containerd[1723]: time="2025-01-14T13:07:36.701283724Z" level=warning msg="cleaning up after shim disconnected" id=4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13 namespace=k8s.io Jan 14 13:07:36.701591 containerd[1723]: time="2025-01-14T13:07:36.701296424Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:07:37.356788 containerd[1723]: time="2025-01-14T13:07:37.356733528Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 14 13:07:37.367183 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13-rootfs.mount: Deactivated successfully. Jan 14 13:07:37.413012 containerd[1723]: time="2025-01-14T13:07:37.412958830Z" level=info msg="CreateContainer within sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\"" Jan 14 13:07:37.413657 containerd[1723]: time="2025-01-14T13:07:37.413579243Z" level=info msg="StartContainer for \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\"" Jan 14 13:07:37.449754 systemd[1]: Started cri-containerd-b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e.scope - libcontainer container b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e. Jan 14 13:07:37.482828 containerd[1723]: time="2025-01-14T13:07:37.482246410Z" level=info msg="StartContainer for \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\" returns successfully" Jan 14 13:07:37.570696 kubelet[3431]: I0114 13:07:37.570278 3431 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 14 13:07:37.624011 kubelet[3431]: I0114 13:07:37.623263 3431 topology_manager.go:215] "Topology Admit Handler" podUID="374507e3-91d9-497f-b033-3e7739eea513" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6db2f" Jan 14 13:07:37.628446 kubelet[3431]: W0114 13:07:37.628127 3431 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.0-a-847249f34f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-847249f34f' and this object Jan 14 13:07:37.628446 kubelet[3431]: E0114 13:07:37.628185 3431 reflector.go:150] object-"kube-system"/"coredns": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.0-a-847249f34f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-847249f34f' and this object Jan 14 13:07:37.631646 kubelet[3431]: I0114 13:07:37.631590 3431 topology_manager.go:215] "Topology Admit Handler" podUID="f845eb37-db3a-4c62-b30f-e8f55b3e54c4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cxnnc" Jan 14 13:07:37.631998 systemd[1]: Created slice kubepods-burstable-pod374507e3_91d9_497f_b033_3e7739eea513.slice - libcontainer container kubepods-burstable-pod374507e3_91d9_497f_b033_3e7739eea513.slice. Jan 14 13:07:37.645139 systemd[1]: Created slice kubepods-burstable-podf845eb37_db3a_4c62_b30f_e8f55b3e54c4.slice - libcontainer container kubepods-burstable-podf845eb37_db3a_4c62_b30f_e8f55b3e54c4.slice. Jan 14 13:07:37.695383 kubelet[3431]: I0114 13:07:37.695167 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/374507e3-91d9-497f-b033-3e7739eea513-config-volume\") pod \"coredns-7db6d8ff4d-6db2f\" (UID: \"374507e3-91d9-497f-b033-3e7739eea513\") " pod="kube-system/coredns-7db6d8ff4d-6db2f" Jan 14 13:07:37.695383 kubelet[3431]: I0114 13:07:37.695217 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thxm6\" (UniqueName: \"kubernetes.io/projected/374507e3-91d9-497f-b033-3e7739eea513-kube-api-access-thxm6\") pod \"coredns-7db6d8ff4d-6db2f\" (UID: \"374507e3-91d9-497f-b033-3e7739eea513\") " pod="kube-system/coredns-7db6d8ff4d-6db2f" Jan 14 13:07:37.695383 kubelet[3431]: I0114 13:07:37.695251 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f845eb37-db3a-4c62-b30f-e8f55b3e54c4-config-volume\") 
pod \"coredns-7db6d8ff4d-cxnnc\" (UID: \"f845eb37-db3a-4c62-b30f-e8f55b3e54c4\") " pod="kube-system/coredns-7db6d8ff4d-cxnnc" Jan 14 13:07:37.695383 kubelet[3431]: I0114 13:07:37.695282 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49ft\" (UniqueName: \"kubernetes.io/projected/f845eb37-db3a-4c62-b30f-e8f55b3e54c4-kube-api-access-v49ft\") pod \"coredns-7db6d8ff4d-cxnnc\" (UID: \"f845eb37-db3a-4c62-b30f-e8f55b3e54c4\") " pod="kube-system/coredns-7db6d8ff4d-cxnnc" Jan 14 13:07:38.374409 systemd[1]: run-containerd-runc-k8s.io-b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e-runc.6pxYXo.mount: Deactivated successfully. Jan 14 13:07:38.840173 containerd[1723]: time="2025-01-14T13:07:38.840117257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6db2f,Uid:374507e3-91d9-497f-b033-3e7739eea513,Namespace:kube-system,Attempt:0,}" Jan 14 13:07:38.849855 containerd[1723]: time="2025-01-14T13:07:38.849813463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cxnnc,Uid:f845eb37-db3a-4c62-b30f-e8f55b3e54c4,Namespace:kube-system,Attempt:0,}" Jan 14 13:07:42.904899 containerd[1723]: time="2025-01-14T13:07:42.904840710Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:07:42.910949 containerd[1723]: time="2025-01-14T13:07:42.910881938Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906557" Jan 14 13:07:42.917744 containerd[1723]: time="2025-01-14T13:07:42.917668582Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 
13:07:42.919569 containerd[1723]: time="2025-01-14T13:07:42.919090713Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 12.475193485s" Jan 14 13:07:42.919569 containerd[1723]: time="2025-01-14T13:07:42.919129513Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 14 13:07:42.922264 containerd[1723]: time="2025-01-14T13:07:42.922233979Z" level=info msg="CreateContainer within sandbox \"bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 14 13:07:42.972350 containerd[1723]: time="2025-01-14T13:07:42.972299043Z" level=info msg="CreateContainer within sandbox \"bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\"" Jan 14 13:07:42.974633 containerd[1723]: time="2025-01-14T13:07:42.973534269Z" level=info msg="StartContainer for \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\"" Jan 14 13:07:43.010802 systemd[1]: Started cri-containerd-f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6.scope - libcontainer container f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6. 
Jan 14 13:07:43.045833 containerd[1723]: time="2025-01-14T13:07:43.045677502Z" level=info msg="StartContainer for \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\" returns successfully" Jan 14 13:07:43.403013 kubelet[3431]: I0114 13:07:43.402365 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4dgxf" podStartSLOduration=14.585847059 podStartE2EDuration="24.402339479s" podCreationTimestamp="2025-01-14 13:07:19 +0000 UTC" firstStartedPulling="2025-01-14 13:07:20.626527689 +0000 UTC m=+13.541544290" lastFinishedPulling="2025-01-14 13:07:30.443020009 +0000 UTC m=+23.358036710" observedRunningTime="2025-01-14 13:07:38.386563821 +0000 UTC m=+31.301580422" watchObservedRunningTime="2025-01-14 13:07:43.402339479 +0000 UTC m=+36.317356080" Jan 14 13:07:46.623456 systemd-networkd[1334]: cilium_host: Link UP Jan 14 13:07:46.625062 systemd-networkd[1334]: cilium_net: Link UP Jan 14 13:07:46.625567 systemd-networkd[1334]: cilium_net: Gained carrier Jan 14 13:07:46.626221 systemd-networkd[1334]: cilium_host: Gained carrier Jan 14 13:07:46.817787 systemd-networkd[1334]: cilium_vxlan: Link UP Jan 14 13:07:46.817799 systemd-networkd[1334]: cilium_vxlan: Gained carrier Jan 14 13:07:47.119806 kernel: NET: Registered PF_ALG protocol family Jan 14 13:07:47.388932 systemd-networkd[1334]: cilium_host: Gained IPv6LL Jan 14 13:07:47.516799 systemd-networkd[1334]: cilium_net: Gained IPv6LL Jan 14 13:07:47.894842 systemd-networkd[1334]: lxc_health: Link UP Jan 14 13:07:47.903345 systemd-networkd[1334]: lxc_health: Gained carrier Jan 14 13:07:48.028754 systemd-networkd[1334]: cilium_vxlan: Gained IPv6LL Jan 14 13:07:48.460725 kernel: eth0: renamed from tmpc4ee1 Jan 14 13:07:48.465881 systemd-networkd[1334]: tmpc4ee1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 13:07:48.465970 systemd-networkd[1334]: tmpc4ee1: Cannot enable IPv6, ignoring: No such file or directory Jan 14 13:07:48.466011 systemd-networkd[1334]: tmpc4ee1: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Jan 14 13:07:48.466029 systemd-networkd[1334]: tmpc4ee1: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Jan 14 13:07:48.466046 systemd-networkd[1334]: tmpc4ee1: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Jan 14 13:07:48.466066 systemd-networkd[1334]: tmpc4ee1: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Jan 14 13:07:48.468553 systemd-networkd[1334]: lxc8d5b62072107: Link UP Jan 14 13:07:48.477689 systemd-networkd[1334]: lxc8d5b62072107: Gained carrier Jan 14 13:07:48.478071 systemd-networkd[1334]: lxc4902767c0c91: Link UP Jan 14 13:07:48.490628 kernel: eth0: renamed from tmpbd58a Jan 14 13:07:48.515787 systemd-networkd[1334]: lxc4902767c0c91: Gained carrier Jan 14 13:07:48.553377 kubelet[3431]: I0114 13:07:48.553295 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-zh66p" podStartSLOduration=6.408525457 podStartE2EDuration="28.553270451s" podCreationTimestamp="2025-01-14 13:07:20 +0000 UTC" firstStartedPulling="2025-01-14 13:07:20.775286939 +0000 UTC m=+13.690303540" lastFinishedPulling="2025-01-14 13:07:42.920031833 +0000 UTC m=+35.835048534" observedRunningTime="2025-01-14 13:07:43.404647128 +0000 UTC m=+36.319663829" watchObservedRunningTime="2025-01-14 13:07:48.553270451 +0000 UTC m=+41.468287152" Jan 14 13:07:49.628899 systemd-networkd[1334]: lxc4902767c0c91: Gained IPv6LL Jan 14 13:07:49.630313 systemd-networkd[1334]: lxc_health: Gained IPv6LL Jan 14 13:07:50.012761 systemd-networkd[1334]: lxc8d5b62072107: Gained IPv6LL Jan 14 13:07:52.330444 containerd[1723]: time="2025-01-14T13:07:52.330310372Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:52.330444 containerd[1723]: time="2025-01-14T13:07:52.330390674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:52.330444 containerd[1723]: time="2025-01-14T13:07:52.330411074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:52.341670 containerd[1723]: time="2025-01-14T13:07:52.330509776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:52.341670 containerd[1723]: time="2025-01-14T13:07:52.334131851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:07:52.341670 containerd[1723]: time="2025-01-14T13:07:52.334208652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:07:52.341670 containerd[1723]: time="2025-01-14T13:07:52.334232953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:52.341670 containerd[1723]: time="2025-01-14T13:07:52.334342455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:07:52.377087 systemd[1]: run-containerd-runc-k8s.io-bd58a259ad290134583eb6eae767de0accdcc59bd28cf5cfa7933f30d2ad3864-runc.YMbKE3.mount: Deactivated successfully. Jan 14 13:07:52.387824 systemd[1]: Started cri-containerd-bd58a259ad290134583eb6eae767de0accdcc59bd28cf5cfa7933f30d2ad3864.scope - libcontainer container bd58a259ad290134583eb6eae767de0accdcc59bd28cf5cfa7933f30d2ad3864. 
Jan 14 13:07:52.416812 systemd[1]: Started cri-containerd-c4ee1b788efb422c208ae28bf0cb061e06271a13ea66621eb4fb49c8b5d1dece.scope - libcontainer container c4ee1b788efb422c208ae28bf0cb061e06271a13ea66621eb4fb49c8b5d1dece. Jan 14 13:07:52.491286 containerd[1723]: time="2025-01-14T13:07:52.491239279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6db2f,Uid:374507e3-91d9-497f-b033-3e7739eea513,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4ee1b788efb422c208ae28bf0cb061e06271a13ea66621eb4fb49c8b5d1dece\"" Jan 14 13:07:52.499719 containerd[1723]: time="2025-01-14T13:07:52.499479349Z" level=info msg="CreateContainer within sandbox \"c4ee1b788efb422c208ae28bf0cb061e06271a13ea66621eb4fb49c8b5d1dece\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:07:52.511993 containerd[1723]: time="2025-01-14T13:07:52.511943705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cxnnc,Uid:f845eb37-db3a-4c62-b30f-e8f55b3e54c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd58a259ad290134583eb6eae767de0accdcc59bd28cf5cfa7933f30d2ad3864\"" Jan 14 13:07:52.515719 containerd[1723]: time="2025-01-14T13:07:52.515679482Z" level=info msg="CreateContainer within sandbox \"bd58a259ad290134583eb6eae767de0accdcc59bd28cf5cfa7933f30d2ad3864\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:07:52.579559 containerd[1723]: time="2025-01-14T13:07:52.579508693Z" level=info msg="CreateContainer within sandbox \"c4ee1b788efb422c208ae28bf0cb061e06271a13ea66621eb4fb49c8b5d1dece\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a813ab02387e830907eb09888516a4beecdad4c2a503c627507d92c386b9e12\"" Jan 14 13:07:52.581282 containerd[1723]: time="2025-01-14T13:07:52.580012104Z" level=info msg="StartContainer for \"5a813ab02387e830907eb09888516a4beecdad4c2a503c627507d92c386b9e12\"" Jan 14 13:07:52.591768 containerd[1723]: time="2025-01-14T13:07:52.591585342Z" level=info 
msg="CreateContainer within sandbox \"bd58a259ad290134583eb6eae767de0accdcc59bd28cf5cfa7933f30d2ad3864\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79630dee7424ef3c7bc01e224291095a86cc323b8c4ebcc819006b3941f4df58\"" Jan 14 13:07:52.592956 containerd[1723]: time="2025-01-14T13:07:52.592742265Z" level=info msg="StartContainer for \"79630dee7424ef3c7bc01e224291095a86cc323b8c4ebcc819006b3941f4df58\"" Jan 14 13:07:52.612794 systemd[1]: Started cri-containerd-5a813ab02387e830907eb09888516a4beecdad4c2a503c627507d92c386b9e12.scope - libcontainer container 5a813ab02387e830907eb09888516a4beecdad4c2a503c627507d92c386b9e12. Jan 14 13:07:52.639786 systemd[1]: Started cri-containerd-79630dee7424ef3c7bc01e224291095a86cc323b8c4ebcc819006b3941f4df58.scope - libcontainer container 79630dee7424ef3c7bc01e224291095a86cc323b8c4ebcc819006b3941f4df58. Jan 14 13:07:52.674902 containerd[1723]: time="2025-01-14T13:07:52.674841753Z" level=info msg="StartContainer for \"5a813ab02387e830907eb09888516a4beecdad4c2a503c627507d92c386b9e12\" returns successfully" Jan 14 13:07:52.689285 containerd[1723]: time="2025-01-14T13:07:52.689240749Z" level=info msg="StartContainer for \"79630dee7424ef3c7bc01e224291095a86cc323b8c4ebcc819006b3941f4df58\" returns successfully" Jan 14 13:07:53.414324 kubelet[3431]: I0114 13:07:53.414249 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cxnnc" podStartSLOduration=33.414069144 podStartE2EDuration="33.414069144s" podCreationTimestamp="2025-01-14 13:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:07:53.413821639 +0000 UTC m=+46.328838340" watchObservedRunningTime="2025-01-14 13:07:53.414069144 +0000 UTC m=+46.329085745" Jan 14 13:07:53.431717 kubelet[3431]: I0114 13:07:53.430910 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7db6d8ff4d-6db2f" podStartSLOduration=33.43088669 podStartE2EDuration="33.43088669s" podCreationTimestamp="2025-01-14 13:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:07:53.430687186 +0000 UTC m=+46.345703887" watchObservedRunningTime="2025-01-14 13:07:53.43088669 +0000 UTC m=+46.345903391" Jan 14 13:09:04.858917 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.16.10:48682.service - OpenSSH per-connection server daemon (10.200.16.10:48682). Jan 14 13:09:05.498615 sshd[4813]: Accepted publickey for core from 10.200.16.10 port 48682 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:05.500083 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:05.504802 systemd-logind[1701]: New session 10 of user core. Jan 14 13:09:05.509771 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 13:09:06.024434 sshd[4815]: Connection closed by 10.200.16.10 port 48682 Jan 14 13:09:06.025438 sshd-session[4813]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:06.028871 systemd[1]: sshd@7-10.200.8.4:22-10.200.16.10:48682.service: Deactivated successfully. Jan 14 13:09:06.031368 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 13:09:06.033102 systemd-logind[1701]: Session 10 logged out. Waiting for processes to exit. Jan 14 13:09:06.034269 systemd-logind[1701]: Removed session 10. Jan 14 13:09:11.141934 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.16.10:58348.service - OpenSSH per-connection server daemon (10.200.16.10:58348). 
Jan 14 13:09:11.791159 sshd[4828]: Accepted publickey for core from 10.200.16.10 port 58348 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:11.792648 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:11.797447 systemd-logind[1701]: New session 11 of user core. Jan 14 13:09:11.802758 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 13:09:12.303845 sshd[4830]: Connection closed by 10.200.16.10 port 58348 Jan 14 13:09:12.304753 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:12.308968 systemd[1]: sshd@8-10.200.8.4:22-10.200.16.10:58348.service: Deactivated successfully. Jan 14 13:09:12.311053 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 13:09:12.311866 systemd-logind[1701]: Session 11 logged out. Waiting for processes to exit. Jan 14 13:09:12.312977 systemd-logind[1701]: Removed session 11. Jan 14 13:09:17.425964 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.16.10:33472.service - OpenSSH per-connection server daemon (10.200.16.10:33472). Jan 14 13:09:18.068351 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 33472 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:18.070203 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:18.075908 systemd-logind[1701]: New session 12 of user core. Jan 14 13:09:18.080748 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 13:09:18.580705 sshd[4844]: Connection closed by 10.200.16.10 port 33472 Jan 14 13:09:18.581420 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:18.585740 systemd[1]: sshd@9-10.200.8.4:22-10.200.16.10:33472.service: Deactivated successfully. Jan 14 13:09:18.587836 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 13:09:18.588633 systemd-logind[1701]: Session 12 logged out. 
Waiting for processes to exit. Jan 14 13:09:18.589753 systemd-logind[1701]: Removed session 12. Jan 14 13:09:23.692792 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.16.10:33476.service - OpenSSH per-connection server daemon (10.200.16.10:33476). Jan 14 13:09:24.344418 sshd[4858]: Accepted publickey for core from 10.200.16.10 port 33476 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:24.346219 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:24.352162 systemd-logind[1701]: New session 13 of user core. Jan 14 13:09:24.356792 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 13:09:24.854866 sshd[4860]: Connection closed by 10.200.16.10 port 33476 Jan 14 13:09:24.855792 sshd-session[4858]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:24.860116 systemd[1]: sshd@10-10.200.8.4:22-10.200.16.10:33476.service: Deactivated successfully. Jan 14 13:09:24.862725 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 13:09:24.863957 systemd-logind[1701]: Session 13 logged out. Waiting for processes to exit. Jan 14 13:09:24.864975 systemd-logind[1701]: Removed session 13. Jan 14 13:09:29.975031 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.16.10:47360.service - OpenSSH per-connection server daemon (10.200.16.10:47360). Jan 14 13:09:30.619892 sshd[4872]: Accepted publickey for core from 10.200.16.10 port 47360 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:30.621306 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:30.626072 systemd-logind[1701]: New session 14 of user core. Jan 14 13:09:30.629793 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 13:09:31.126745 sshd[4874]: Connection closed by 10.200.16.10 port 47360 Jan 14 13:09:31.127582 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:31.130578 systemd[1]: sshd@11-10.200.8.4:22-10.200.16.10:47360.service: Deactivated successfully. Jan 14 13:09:31.132769 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 13:09:31.134550 systemd-logind[1701]: Session 14 logged out. Waiting for processes to exit. Jan 14 13:09:31.135469 systemd-logind[1701]: Removed session 14. Jan 14 13:09:36.240756 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.16.10:42302.service - OpenSSH per-connection server daemon (10.200.16.10:42302). Jan 14 13:09:36.885262 sshd[4886]: Accepted publickey for core from 10.200.16.10 port 42302 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:36.886937 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:36.891974 systemd-logind[1701]: New session 15 of user core. Jan 14 13:09:36.895771 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 13:09:37.391040 sshd[4888]: Connection closed by 10.200.16.10 port 42302 Jan 14 13:09:37.391780 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:37.394661 systemd[1]: sshd@12-10.200.8.4:22-10.200.16.10:42302.service: Deactivated successfully. Jan 14 13:09:37.397117 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 13:09:37.398762 systemd-logind[1701]: Session 15 logged out. Waiting for processes to exit. Jan 14 13:09:37.399851 systemd-logind[1701]: Removed session 15. Jan 14 13:09:37.508933 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.16.10:42314.service - OpenSSH per-connection server daemon (10.200.16.10:42314). 
Jan 14 13:09:38.152803 sshd[4899]: Accepted publickey for core from 10.200.16.10 port 42314 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:38.154580 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:38.160485 systemd-logind[1701]: New session 16 of user core. Jan 14 13:09:38.164773 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 13:09:38.703304 sshd[4901]: Connection closed by 10.200.16.10 port 42314 Jan 14 13:09:38.703962 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:38.708632 systemd[1]: sshd@13-10.200.8.4:22-10.200.16.10:42314.service: Deactivated successfully. Jan 14 13:09:38.710939 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 13:09:38.711778 systemd-logind[1701]: Session 16 logged out. Waiting for processes to exit. Jan 14 13:09:38.712848 systemd-logind[1701]: Removed session 16. Jan 14 13:09:38.820925 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.16.10:42328.service - OpenSSH per-connection server daemon (10.200.16.10:42328). Jan 14 13:09:39.463216 sshd[4910]: Accepted publickey for core from 10.200.16.10 port 42328 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:39.464797 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:39.468958 systemd-logind[1701]: New session 17 of user core. Jan 14 13:09:39.477750 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 13:09:39.972895 sshd[4912]: Connection closed by 10.200.16.10 port 42328 Jan 14 13:09:39.973657 sshd-session[4910]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:39.976519 systemd[1]: sshd@14-10.200.8.4:22-10.200.16.10:42328.service: Deactivated successfully. Jan 14 13:09:39.978804 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 13:09:39.980533 systemd-logind[1701]: Session 17 logged out. 
Waiting for processes to exit. Jan 14 13:09:39.981890 systemd-logind[1701]: Removed session 17. Jan 14 13:09:45.087062 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.16.10:42332.service - OpenSSH per-connection server daemon (10.200.16.10:42332). Jan 14 13:09:45.737839 sshd[4922]: Accepted publickey for core from 10.200.16.10 port 42332 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:45.739315 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:45.743672 systemd-logind[1701]: New session 18 of user core. Jan 14 13:09:45.748780 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 13:09:46.247030 sshd[4924]: Connection closed by 10.200.16.10 port 42332 Jan 14 13:09:46.247908 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:46.252070 systemd[1]: sshd@15-10.200.8.4:22-10.200.16.10:42332.service: Deactivated successfully. Jan 14 13:09:46.254232 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 13:09:46.255067 systemd-logind[1701]: Session 18 logged out. Waiting for processes to exit. Jan 14 13:09:46.256118 systemd-logind[1701]: Removed session 18. Jan 14 13:09:46.363813 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.16.10:46780.service - OpenSSH per-connection server daemon (10.200.16.10:46780). Jan 14 13:09:47.014354 sshd[4934]: Accepted publickey for core from 10.200.16.10 port 46780 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:47.016670 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:47.020764 systemd-logind[1701]: New session 19 of user core. Jan 14 13:09:47.024780 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 13:09:47.613070 sshd[4936]: Connection closed by 10.200.16.10 port 46780 Jan 14 13:09:47.612897 sshd-session[4934]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:47.618170 systemd[1]: sshd@16-10.200.8.4:22-10.200.16.10:46780.service: Deactivated successfully. Jan 14 13:09:47.622916 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 13:09:47.625982 systemd-logind[1701]: Session 19 logged out. Waiting for processes to exit. Jan 14 13:09:47.627427 systemd-logind[1701]: Removed session 19. Jan 14 13:09:47.731958 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.16.10:46788.service - OpenSSH per-connection server daemon (10.200.16.10:46788). Jan 14 13:09:48.374131 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 46788 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:48.375872 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:48.381274 systemd-logind[1701]: New session 20 of user core. Jan 14 13:09:48.385761 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 13:09:50.380237 sshd[4946]: Connection closed by 10.200.16.10 port 46788 Jan 14 13:09:50.381270 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:50.385022 systemd[1]: sshd@17-10.200.8.4:22-10.200.16.10:46788.service: Deactivated successfully. Jan 14 13:09:50.388024 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 13:09:50.389573 systemd-logind[1701]: Session 20 logged out. Waiting for processes to exit. Jan 14 13:09:50.390757 systemd-logind[1701]: Removed session 20. Jan 14 13:09:50.501939 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.16.10:46802.service - OpenSSH per-connection server daemon (10.200.16.10:46802). 
Jan 14 13:09:51.138762 sshd[4965]: Accepted publickey for core from 10.200.16.10 port 46802 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:51.140550 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:51.145532 systemd-logind[1701]: New session 21 of user core. Jan 14 13:09:51.150741 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 13:09:51.760989 sshd[4969]: Connection closed by 10.200.16.10 port 46802 Jan 14 13:09:51.761945 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:51.766325 systemd[1]: sshd@18-10.200.8.4:22-10.200.16.10:46802.service: Deactivated successfully. Jan 14 13:09:51.768885 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 13:09:51.770009 systemd-logind[1701]: Session 21 logged out. Waiting for processes to exit. Jan 14 13:09:51.771384 systemd-logind[1701]: Removed session 21. Jan 14 13:09:51.879335 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.16.10:46818.service - OpenSSH per-connection server daemon (10.200.16.10:46818). Jan 14 13:09:52.519375 sshd[4978]: Accepted publickey for core from 10.200.16.10 port 46818 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:52.520850 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:52.525673 systemd-logind[1701]: New session 22 of user core. Jan 14 13:09:52.530768 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 13:09:53.024470 sshd[4980]: Connection closed by 10.200.16.10 port 46818 Jan 14 13:09:53.025311 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:53.030017 systemd[1]: sshd@19-10.200.8.4:22-10.200.16.10:46818.service: Deactivated successfully. Jan 14 13:09:53.032507 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 13:09:53.033579 systemd-logind[1701]: Session 22 logged out. 
Waiting for processes to exit. Jan 14 13:09:53.035034 systemd-logind[1701]: Removed session 22. Jan 14 13:09:58.145916 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.16.10:48174.service - OpenSSH per-connection server daemon (10.200.16.10:48174). Jan 14 13:09:58.783839 sshd[4994]: Accepted publickey for core from 10.200.16.10 port 48174 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:09:58.785343 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:09:58.790283 systemd-logind[1701]: New session 23 of user core. Jan 14 13:09:58.793802 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 13:09:59.290910 sshd[4996]: Connection closed by 10.200.16.10 port 48174 Jan 14 13:09:59.291779 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Jan 14 13:09:59.294959 systemd[1]: sshd@20-10.200.8.4:22-10.200.16.10:48174.service: Deactivated successfully. Jan 14 13:09:59.297154 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 13:09:59.299061 systemd-logind[1701]: Session 23 logged out. Waiting for processes to exit. Jan 14 13:09:59.300110 systemd-logind[1701]: Removed session 23. Jan 14 13:10:04.408170 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.16.10:48184.service - OpenSSH per-connection server daemon (10.200.16.10:48184). Jan 14 13:10:05.056632 sshd[5006]: Accepted publickey for core from 10.200.16.10 port 48184 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:10:05.058473 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:10:05.063927 systemd-logind[1701]: New session 24 of user core. Jan 14 13:10:05.066774 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 13:10:05.564370 sshd[5008]: Connection closed by 10.200.16.10 port 48184
Jan 14 13:10:05.565146 sshd-session[5006]: pam_unix(sshd:session): session closed for user core
Jan 14 13:10:05.568963 systemd[1]: sshd@21-10.200.8.4:22-10.200.16.10:48184.service: Deactivated successfully.
Jan 14 13:10:05.571461 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 13:10:05.573205 systemd-logind[1701]: Session 24 logged out. Waiting for processes to exit.
Jan 14 13:10:05.574468 systemd-logind[1701]: Removed session 24.
Jan 14 13:10:10.683201 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.16.10:37498.service - OpenSSH per-connection server daemon (10.200.16.10:37498).
Jan 14 13:10:11.321627 sshd[5021]: Accepted publickey for core from 10.200.16.10 port 37498 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:10:11.323415 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:10:11.327706 systemd-logind[1701]: New session 25 of user core.
Jan 14 13:10:11.332768 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 13:10:11.830576 sshd[5023]: Connection closed by 10.200.16.10 port 37498
Jan 14 13:10:11.831655 sshd-session[5021]: pam_unix(sshd:session): session closed for user core
Jan 14 13:10:11.836135 systemd[1]: sshd@22-10.200.8.4:22-10.200.16.10:37498.service: Deactivated successfully.
Jan 14 13:10:11.839033 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 13:10:11.839860 systemd-logind[1701]: Session 25 logged out. Waiting for processes to exit.
Jan 14 13:10:11.840920 systemd-logind[1701]: Removed session 25.
Jan 14 13:10:11.947942 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.16.10:37500.service - OpenSSH per-connection server daemon (10.200.16.10:37500).
Jan 14 13:10:12.585362 sshd[5033]: Accepted publickey for core from 10.200.16.10 port 37500 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU
Jan 14 13:10:12.586867 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:10:12.592155 systemd-logind[1701]: New session 26 of user core.
Jan 14 13:10:12.598774 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 14 13:10:14.374289 containerd[1723]: time="2025-01-14T13:10:14.374209861Z" level=info msg="StopContainer for \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\" with timeout 30 (s)"
Jan 14 13:10:14.375630 containerd[1723]: time="2025-01-14T13:10:14.375528885Z" level=info msg="Stop container \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\" with signal terminated"
Jan 14 13:10:14.400076 systemd[1]: run-containerd-runc-k8s.io-b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e-runc.jBMDZe.mount: Deactivated successfully.
Jan 14 13:10:14.403563 systemd[1]: cri-containerd-f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6.scope: Deactivated successfully.
Jan 14 13:10:14.414460 containerd[1723]: time="2025-01-14T13:10:14.414418374Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 13:10:14.421290 containerd[1723]: time="2025-01-14T13:10:14.421138993Z" level=info msg="StopContainer for \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\" with timeout 2 (s)"
Jan 14 13:10:14.421651 containerd[1723]: time="2025-01-14T13:10:14.421511199Z" level=info msg="Stop container \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\" with signal terminated"
Jan 14 13:10:14.433115 systemd-networkd[1334]: lxc_health: Link DOWN
Jan 14 13:10:14.433124 systemd-networkd[1334]: lxc_health: Lost carrier
Jan 14 13:10:14.433478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6-rootfs.mount: Deactivated successfully.
Jan 14 13:10:14.450640 systemd[1]: cri-containerd-b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e.scope: Deactivated successfully.
Jan 14 13:10:14.451191 systemd[1]: cri-containerd-b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e.scope: Consumed 7.303s CPU time.
Jan 14 13:10:14.471540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e-rootfs.mount: Deactivated successfully.
Jan 14 13:10:14.512748 containerd[1723]: time="2025-01-14T13:10:14.512628914Z" level=info msg="shim disconnected" id=f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6 namespace=k8s.io
Jan 14 13:10:14.513002 containerd[1723]: time="2025-01-14T13:10:14.512754716Z" level=warning msg="cleaning up after shim disconnected" id=f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6 namespace=k8s.io
Jan 14 13:10:14.513002 containerd[1723]: time="2025-01-14T13:10:14.512715515Z" level=info msg="shim disconnected" id=b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e namespace=k8s.io
Jan 14 13:10:14.513002 containerd[1723]: time="2025-01-14T13:10:14.512813217Z" level=warning msg="cleaning up after shim disconnected" id=b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e namespace=k8s.io
Jan 14 13:10:14.513002 containerd[1723]: time="2025-01-14T13:10:14.512821417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:10:14.513975 containerd[1723]: time="2025-01-14T13:10:14.513003120Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:10:14.542100 containerd[1723]: time="2025-01-14T13:10:14.542057335Z" level=info msg="StopContainer for \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\" returns successfully"
Jan 14 13:10:14.542873 containerd[1723]: time="2025-01-14T13:10:14.542835349Z" level=info msg="StopPodSandbox for \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\""
Jan 14 13:10:14.542986 containerd[1723]: time="2025-01-14T13:10:14.542875050Z" level=info msg="Container to stop \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:10:14.542986 containerd[1723]: time="2025-01-14T13:10:14.542916250Z" level=info msg="Container to stop \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:10:14.542986 containerd[1723]: time="2025-01-14T13:10:14.542928550Z" level=info msg="Container to stop \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:10:14.542986 containerd[1723]: time="2025-01-14T13:10:14.542940051Z" level=info msg="Container to stop \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:10:14.542986 containerd[1723]: time="2025-01-14T13:10:14.542951051Z" level=info msg="Container to stop \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:10:14.548291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2-shm.mount: Deactivated successfully.
Jan 14 13:10:14.549428 containerd[1723]: time="2025-01-14T13:10:14.549394965Z" level=info msg="StopContainer for \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\" returns successfully"
Jan 14 13:10:14.550618 containerd[1723]: time="2025-01-14T13:10:14.550498385Z" level=info msg="StopPodSandbox for \"bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c\""
Jan 14 13:10:14.551142 containerd[1723]: time="2025-01-14T13:10:14.550566486Z" level=info msg="Container to stop \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 13:10:14.553523 systemd[1]: cri-containerd-b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2.scope: Deactivated successfully.
Jan 14 13:10:14.567500 systemd[1]: cri-containerd-bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c.scope: Deactivated successfully.
Jan 14 13:10:14.615517 containerd[1723]: time="2025-01-14T13:10:14.615272332Z" level=info msg="shim disconnected" id=b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2 namespace=k8s.io
Jan 14 13:10:14.615517 containerd[1723]: time="2025-01-14T13:10:14.615336533Z" level=warning msg="cleaning up after shim disconnected" id=b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2 namespace=k8s.io
Jan 14 13:10:14.615517 containerd[1723]: time="2025-01-14T13:10:14.615348034Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:10:14.618656 containerd[1723]: time="2025-01-14T13:10:14.617776077Z" level=info msg="shim disconnected" id=bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c namespace=k8s.io
Jan 14 13:10:14.618656 containerd[1723]: time="2025-01-14T13:10:14.617856078Z" level=warning msg="cleaning up after shim disconnected" id=bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c namespace=k8s.io
Jan 14 13:10:14.618656 containerd[1723]: time="2025-01-14T13:10:14.617884179Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 13:10:14.637964 containerd[1723]: time="2025-01-14T13:10:14.636508208Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:10:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 14 13:10:14.638249 containerd[1723]: time="2025-01-14T13:10:14.638018235Z" level=info msg="TearDown network for sandbox \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" successfully"
Jan 14 13:10:14.638249 containerd[1723]: time="2025-01-14T13:10:14.638219839Z" level=info msg="StopPodSandbox for \"b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2\" returns successfully"
Jan 14 13:10:14.640147 containerd[1723]: time="2025-01-14T13:10:14.640116772Z" level=info msg="TearDown network for sandbox \"bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c\" successfully"
Jan 14 13:10:14.640239 containerd[1723]: time="2025-01-14T13:10:14.640147673Z" level=info msg="StopPodSandbox for \"bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c\" returns successfully"
Jan 14 13:10:14.687872 kubelet[3431]: I0114 13:10:14.687454 3431 scope.go:117] "RemoveContainer" containerID="f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6"
Jan 14 13:10:14.690482 containerd[1723]: time="2025-01-14T13:10:14.690044157Z" level=info msg="RemoveContainer for \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\""
Jan 14 13:10:14.707108 containerd[1723]: time="2025-01-14T13:10:14.707069159Z" level=info msg="RemoveContainer for \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\" returns successfully"
Jan 14 13:10:14.707427 kubelet[3431]: I0114 13:10:14.707400 3431 scope.go:117] "RemoveContainer" containerID="f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6"
Jan 14 13:10:14.707716 containerd[1723]: time="2025-01-14T13:10:14.707669869Z" level=error msg="ContainerStatus for \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\": not found"
Jan 14 13:10:14.707892 kubelet[3431]: E0114 13:10:14.707847 3431 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\": not found" containerID="f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6"
Jan 14 13:10:14.708001 kubelet[3431]: I0114 13:10:14.707900 3431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6"} err="failed to get container status \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"f50eab8ce59ba11c4b30faf60ababa81ded0d2187facaf2c78dc669bcb5377e6\": not found"
Jan 14 13:10:14.708065 kubelet[3431]: I0114 13:10:14.708007 3431 scope.go:117] "RemoveContainer" containerID="b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e"
Jan 14 13:10:14.709132 containerd[1723]: time="2025-01-14T13:10:14.709094395Z" level=info msg="RemoveContainer for \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\""
Jan 14 13:10:14.725418 containerd[1723]: time="2025-01-14T13:10:14.725368483Z" level=info msg="RemoveContainer for \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\" returns successfully"
Jan 14 13:10:14.725723 kubelet[3431]: I0114 13:10:14.725683 3431 scope.go:117] "RemoveContainer" containerID="4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13"
Jan 14 13:10:14.726872 containerd[1723]: time="2025-01-14T13:10:14.726843809Z" level=info msg="RemoveContainer for \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\""
Jan 14 13:10:14.738256 containerd[1723]: time="2025-01-14T13:10:14.738218811Z" level=info msg="RemoveContainer for \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\" returns successfully"
Jan 14 13:10:14.738503 kubelet[3431]: I0114 13:10:14.738454 3431 scope.go:117] "RemoveContainer" containerID="25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19"
Jan 14 13:10:14.739679 containerd[1723]: time="2025-01-14T13:10:14.739640236Z" level=info msg="RemoveContainer for \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\""
Jan 14 13:10:14.749320 containerd[1723]: time="2025-01-14T13:10:14.749279306Z" level=info msg="RemoveContainer for \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\" returns successfully"
Jan 14 13:10:14.749623 kubelet[3431]: I0114 13:10:14.749536 3431 scope.go:117] "RemoveContainer" containerID="daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0"
Jan 14 13:10:14.750643 containerd[1723]: time="2025-01-14T13:10:14.750592530Z" level=info msg="RemoveContainer for \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\""
Jan 14 13:10:14.763679 containerd[1723]: time="2025-01-14T13:10:14.763643061Z" level=info msg="RemoveContainer for \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\" returns successfully"
Jan 14 13:10:14.763931 kubelet[3431]: I0114 13:10:14.763906 3431 scope.go:117] "RemoveContainer" containerID="92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5"
Jan 14 13:10:14.765129 containerd[1723]: time="2025-01-14T13:10:14.765080586Z" level=info msg="RemoveContainer for \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\""
Jan 14 13:10:14.776650 containerd[1723]: time="2025-01-14T13:10:14.776592890Z" level=info msg="RemoveContainer for \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\" returns successfully"
Jan 14 13:10:14.776918 kubelet[3431]: I0114 13:10:14.776848 3431 scope.go:117] "RemoveContainer" containerID="b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e"
Jan 14 13:10:14.777115 containerd[1723]: time="2025-01-14T13:10:14.777079599Z" level=error msg="ContainerStatus for \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\": not found"
Jan 14 13:10:14.777273 kubelet[3431]: E0114 13:10:14.777248 3431 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\": not found" containerID="b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e"
Jan 14 13:10:14.777336 kubelet[3431]: I0114 13:10:14.777287 3431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e"} err="failed to get container status \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2112fba55c77127c956e6c502a857b3c3189363aafb4b26bf77044fde9d9b2e\": not found"
Jan 14 13:10:14.777336 kubelet[3431]: I0114 13:10:14.777316 3431 scope.go:117] "RemoveContainer" containerID="4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13"
Jan 14 13:10:14.777581 containerd[1723]: time="2025-01-14T13:10:14.777536707Z" level=error msg="ContainerStatus for \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\": not found"
Jan 14 13:10:14.777865 kubelet[3431]: I0114 13:10:14.777695 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cni-path\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.777865 kubelet[3431]: I0114 13:10:14.777726 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-hostproc\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.777865 kubelet[3431]: E0114 13:10:14.777743 3431 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\": not found" containerID="4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13"
Jan 14 13:10:14.777865 kubelet[3431]: I0114 13:10:14.777757 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpcj5\" (UniqueName: \"kubernetes.io/projected/e7261f0c-db2b-4f68-9839-43bd04863e06-kube-api-access-kpcj5\") pod \"e7261f0c-db2b-4f68-9839-43bd04863e06\" (UID: \"e7261f0c-db2b-4f68-9839-43bd04863e06\") "
Jan 14 13:10:14.777865 kubelet[3431]: I0114 13:10:14.777764 3431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13"} err="failed to get container status \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\": rpc error: code = NotFound desc = an error occurred when try to find container \"4219bdfc5ebbeb09c1482678b39bb4672b427d99821080cadf18e06693beeb13\": not found"
Jan 14 13:10:14.778089 kubelet[3431]: I0114 13:10:14.777779 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-cgroup\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778089 kubelet[3431]: I0114 13:10:14.777783 3431 scope.go:117] "RemoveContainer" containerID="25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19"
Jan 14 13:10:14.778089 kubelet[3431]: I0114 13:10:14.777800 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-lib-modules\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778089 kubelet[3431]: I0114 13:10:14.777824 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st77n\" (UniqueName: \"kubernetes.io/projected/a8547c91-ee21-4a86-a46f-841e1f31b465-kube-api-access-st77n\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778089 kubelet[3431]: I0114 13:10:14.777843 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-host-proc-sys-kernel\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778089 kubelet[3431]: I0114 13:10:14.777868 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-config-path\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778520 kubelet[3431]: I0114 13:10:14.777890 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8547c91-ee21-4a86-a46f-841e1f31b465-clustermesh-secrets\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778520 kubelet[3431]: I0114 13:10:14.777912 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-etc-cni-netd\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778520 kubelet[3431]: I0114 13:10:14.777935 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7261f0c-db2b-4f68-9839-43bd04863e06-cilium-config-path\") pod \"e7261f0c-db2b-4f68-9839-43bd04863e06\" (UID: \"e7261f0c-db2b-4f68-9839-43bd04863e06\") "
Jan 14 13:10:14.778520 kubelet[3431]: I0114 13:10:14.777956 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-run\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778520 kubelet[3431]: I0114 13:10:14.777978 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8547c91-ee21-4a86-a46f-841e1f31b465-hubble-tls\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.778520 kubelet[3431]: I0114 13:10:14.777999 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-host-proc-sys-net\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.779012 containerd[1723]: time="2025-01-14T13:10:14.778353622Z" level=error msg="ContainerStatus for \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\": not found"
Jan 14 13:10:14.779059 kubelet[3431]: I0114 13:10:14.778019 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-xtables-lock\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.779059 kubelet[3431]: I0114 13:10:14.778042 3431 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-bpf-maps\") pod \"a8547c91-ee21-4a86-a46f-841e1f31b465\" (UID: \"a8547c91-ee21-4a86-a46f-841e1f31b465\") "
Jan 14 13:10:14.779059 kubelet[3431]: I0114 13:10:14.778149 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-hostproc" (OuterVolumeSpecName: "hostproc") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.779059 kubelet[3431]: I0114 13:10:14.778218 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cni-path" (OuterVolumeSpecName: "cni-path") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.779059 kubelet[3431]: I0114 13:10:14.778614 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.779254 kubelet[3431]: I0114 13:10:14.778722 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.779254 kubelet[3431]: I0114 13:10:14.778754 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.782723 kubelet[3431]: I0114 13:10:14.781745 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.782723 kubelet[3431]: I0114 13:10:14.781787 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.782904 kubelet[3431]: I0114 13:10:14.782881 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.783005 kubelet[3431]: I0114 13:10:14.782988 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.783483 kubelet[3431]: I0114 13:10:14.783457 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8547c91-ee21-4a86-a46f-841e1f31b465-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 14 13:10:14.783670 kubelet[3431]: I0114 13:10:14.783645 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 13:10:14.783792 kubelet[3431]: E0114 13:10:14.783663 3431 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\": not found" containerID="25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19"
Jan 14 13:10:14.784919 kubelet[3431]: I0114 13:10:14.784887 3431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19"} err="failed to get container status \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\": rpc error: code = NotFound desc = an error occurred when try to find container \"25b808aa372a24091ae51278a2453e82e2521f0ce727c8d08eb361ac0f3c9b19\": not found"
Jan 14 13:10:14.785036 kubelet[3431]: I0114 13:10:14.785020 3431 scope.go:117] "RemoveContainer" containerID="daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0"
Jan 14 13:10:14.785357 containerd[1723]: time="2025-01-14T13:10:14.785325945Z" level=error msg="ContainerStatus for \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\": not found"
Jan 14 13:10:14.785811 kubelet[3431]: E0114 13:10:14.785786 3431 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\": not found" containerID="daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0"
Jan 14 13:10:14.785887 kubelet[3431]: I0114 13:10:14.785817 3431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0"} err="failed to get container status \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\": rpc error: code = NotFound desc = an error occurred when try to find container \"daa94399ca8cbc7656406ea269b526951805ba30ae8ccda38d0c896d7c4b1de0\": not found"
Jan 14 13:10:14.785887 kubelet[3431]: I0114 13:10:14.785839 3431 scope.go:117] "RemoveContainer" containerID="92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5"
Jan 14 13:10:14.788254 kubelet[3431]: I0114 13:10:14.788221 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7261f0c-db2b-4f68-9839-43bd04863e06-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e7261f0c-db2b-4f68-9839-43bd04863e06" (UID: "e7261f0c-db2b-4f68-9839-43bd04863e06"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 14 13:10:14.788352 kubelet[3431]: I0114 13:10:14.788331 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8547c91-ee21-4a86-a46f-841e1f31b465-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 13:10:14.788432 kubelet[3431]: I0114 13:10:14.788414 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7261f0c-db2b-4f68-9839-43bd04863e06-kube-api-access-kpcj5" (OuterVolumeSpecName: "kube-api-access-kpcj5") pod "e7261f0c-db2b-4f68-9839-43bd04863e06" (UID: "e7261f0c-db2b-4f68-9839-43bd04863e06"). InnerVolumeSpecName "kube-api-access-kpcj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 13:10:14.788688 containerd[1723]: time="2025-01-14T13:10:14.788580603Z" level=error msg="ContainerStatus for \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\": not found"
Jan 14 13:10:14.790039 kubelet[3431]: I0114 13:10:14.790012 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 14 13:10:14.790341 kubelet[3431]: E0114 13:10:14.790266 3431 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\": not found" containerID="92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5"
Jan 14 13:10:14.790465 kubelet[3431]: I0114 13:10:14.790441 3431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5"} err="failed to get container status \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"92022575136a80bfe03d540d6a73abcdc158b165636f921516a0021c77f3e3d5\": not found"
Jan 14 13:10:14.790670 kubelet[3431]: I0114 13:10:14.790645 3431 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8547c91-ee21-4a86-a46f-841e1f31b465-kube-api-access-st77n" (OuterVolumeSpecName: "kube-api-access-st77n") pod "a8547c91-ee21-4a86-a46f-841e1f31b465" (UID: "a8547c91-ee21-4a86-a46f-841e1f31b465"). InnerVolumeSpecName "kube-api-access-st77n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 13:10:14.878867 kubelet[3431]: I0114 13:10:14.878810 3431 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kpcj5\" (UniqueName: \"kubernetes.io/projected/e7261f0c-db2b-4f68-9839-43bd04863e06-kube-api-access-kpcj5\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\""
Jan 14 13:10:14.878867 kubelet[3431]: I0114 13:10:14.878863 3431 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-cgroup\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\""
Jan 14 13:10:14.878867 kubelet[3431]: I0114 13:10:14.878875 3431 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-lib-modules\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\""
Jan 14 13:10:14.879128 kubelet[3431]: I0114 13:10:14.878885 3431 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-st77n\" (UniqueName: \"kubernetes.io/projected/a8547c91-ee21-4a86-a46f-841e1f31b465-kube-api-access-st77n\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\""
Jan 14 13:10:14.879128 kubelet[3431]: I0114 13:10:14.878898 3431 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-config-path\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\""
Jan 14 13:10:14.879128 kubelet[3431]: I0114 13:10:14.878908 3431 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-host-proc-sys-kernel\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\""
Jan 14 13:10:14.879128
kubelet[3431]: I0114 13:10:14.878920 3431 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8547c91-ee21-4a86-a46f-841e1f31b465-clustermesh-secrets\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879128 kubelet[3431]: I0114 13:10:14.878930 3431 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-etc-cni-netd\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879128 kubelet[3431]: I0114 13:10:14.878940 3431 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7261f0c-db2b-4f68-9839-43bd04863e06-cilium-config-path\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879128 kubelet[3431]: I0114 13:10:14.878950 3431 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8547c91-ee21-4a86-a46f-841e1f31b465-hubble-tls\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879128 kubelet[3431]: I0114 13:10:14.878962 3431 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-host-proc-sys-net\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879352 kubelet[3431]: I0114 13:10:14.878973 3431 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-xtables-lock\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879352 kubelet[3431]: I0114 13:10:14.878984 3431 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-bpf-maps\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879352 
kubelet[3431]: I0114 13:10:14.878994 3431 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cilium-run\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879352 kubelet[3431]: I0114 13:10:14.879005 3431 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-hostproc\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.879352 kubelet[3431]: I0114 13:10:14.879015 3431 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8547c91-ee21-4a86-a46f-841e1f31b465-cni-path\") on node \"ci-4186.1.0-a-847249f34f\" DevicePath \"\"" Jan 14 13:10:14.993283 systemd[1]: Removed slice kubepods-besteffort-pode7261f0c_db2b_4f68_9839_43bd04863e06.slice - libcontainer container kubepods-besteffort-pode7261f0c_db2b_4f68_9839_43bd04863e06.slice. Jan 14 13:10:14.998217 systemd[1]: Removed slice kubepods-burstable-poda8547c91_ee21_4a86_a46f_841e1f31b465.slice - libcontainer container kubepods-burstable-poda8547c91_ee21_4a86_a46f_841e1f31b465.slice. Jan 14 13:10:14.998551 systemd[1]: kubepods-burstable-poda8547c91_ee21_4a86_a46f_841e1f31b465.slice: Consumed 7.391s CPU time. 
Jan 14 13:10:15.209085 kubelet[3431]: I0114 13:10:15.209039 3431 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8547c91-ee21-4a86-a46f-841e1f31b465" path="/var/lib/kubelet/pods/a8547c91-ee21-4a86-a46f-841e1f31b465/volumes" Jan 14 13:10:15.209797 kubelet[3431]: I0114 13:10:15.209765 3431 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7261f0c-db2b-4f68-9839-43bd04863e06" path="/var/lib/kubelet/pods/e7261f0c-db2b-4f68-9839-43bd04863e06/volumes" Jan 14 13:10:15.383748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c-rootfs.mount: Deactivated successfully. Jan 14 13:10:15.384280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bce1134f9a696f273e9699e46f23479c8ecf3ecbae7bb7fa5fa76776ecf2351c-shm.mount: Deactivated successfully. Jan 14 13:10:15.384697 systemd[1]: var-lib-kubelet-pods-e7261f0c\x2ddb2b\x2d4f68\x2d9839\x2d43bd04863e06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkpcj5.mount: Deactivated successfully. Jan 14 13:10:15.384856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b020d4263430b458a878d7c716f3115f32dc17467338cd90afd41560965d79b2-rootfs.mount: Deactivated successfully. Jan 14 13:10:15.384954 systemd[1]: var-lib-kubelet-pods-a8547c91\x2dee21\x2d4a86\x2da46f\x2d841e1f31b465-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dst77n.mount: Deactivated successfully. Jan 14 13:10:15.385047 systemd[1]: var-lib-kubelet-pods-a8547c91\x2dee21\x2d4a86\x2da46f\x2d841e1f31b465-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 14 13:10:15.385137 systemd[1]: var-lib-kubelet-pods-a8547c91\x2dee21\x2d4a86\x2da46f\x2d841e1f31b465-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 14 13:10:16.424538 sshd[5036]: Connection closed by 10.200.16.10 port 37500 Jan 14 13:10:16.428771 systemd[1]: sshd@23-10.200.8.4:22-10.200.16.10:37500.service: Deactivated successfully. Jan 14 13:10:16.425376 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Jan 14 13:10:16.431248 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 13:10:16.433217 systemd-logind[1701]: Session 26 logged out. Waiting for processes to exit. Jan 14 13:10:16.434439 systemd-logind[1701]: Removed session 26. Jan 14 13:10:16.541836 systemd[1]: Started sshd@24-10.200.8.4:22-10.200.16.10:51760.service - OpenSSH per-connection server daemon (10.200.16.10:51760). Jan 14 13:10:17.184885 sshd[5200]: Accepted publickey for core from 10.200.16.10 port 51760 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:10:17.186667 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:10:17.191264 systemd-logind[1701]: New session 27 of user core. Jan 14 13:10:17.198763 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 13:10:17.391679 kubelet[3431]: E0114 13:10:17.391640 3431 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 13:10:18.133640 kubelet[3431]: I0114 13:10:18.133583 3431 topology_manager.go:215] "Topology Admit Handler" podUID="05741fbd-8ef0-43fc-9570-0c5ebc3a8d34" podNamespace="kube-system" podName="cilium-mwrvr" Jan 14 13:10:18.133832 kubelet[3431]: E0114 13:10:18.133667 3431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8547c91-ee21-4a86-a46f-841e1f31b465" containerName="mount-bpf-fs" Jan 14 13:10:18.133832 kubelet[3431]: E0114 13:10:18.133679 3431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8547c91-ee21-4a86-a46f-841e1f31b465" containerName="mount-cgroup" Jan 14 13:10:18.133832 kubelet[3431]: E0114 13:10:18.133689 3431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8547c91-ee21-4a86-a46f-841e1f31b465" containerName="apply-sysctl-overwrites" Jan 14 13:10:18.133832 kubelet[3431]: E0114 13:10:18.133696 3431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8547c91-ee21-4a86-a46f-841e1f31b465" containerName="clean-cilium-state" Jan 14 13:10:18.133832 kubelet[3431]: E0114 13:10:18.133703 3431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8547c91-ee21-4a86-a46f-841e1f31b465" containerName="cilium-agent" Jan 14 13:10:18.133832 kubelet[3431]: E0114 13:10:18.133711 3431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7261f0c-db2b-4f68-9839-43bd04863e06" containerName="cilium-operator" Jan 14 13:10:18.133832 kubelet[3431]: I0114 13:10:18.133738 3431 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8547c91-ee21-4a86-a46f-841e1f31b465" containerName="cilium-agent" Jan 14 13:10:18.133832 kubelet[3431]: I0114 13:10:18.133745 3431 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e7261f0c-db2b-4f68-9839-43bd04863e06" containerName="cilium-operator" Jan 14 13:10:18.146441 systemd[1]: Created slice kubepods-burstable-pod05741fbd_8ef0_43fc_9570_0c5ebc3a8d34.slice - libcontainer container kubepods-burstable-pod05741fbd_8ef0_43fc_9570_0c5ebc3a8d34.slice. Jan 14 13:10:18.226868 sshd[5203]: Connection closed by 10.200.16.10 port 51760 Jan 14 13:10:18.227570 sshd-session[5200]: pam_unix(sshd:session): session closed for user core Jan 14 13:10:18.230545 systemd[1]: sshd@24-10.200.8.4:22-10.200.16.10:51760.service: Deactivated successfully. Jan 14 13:10:18.232803 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 13:10:18.234517 systemd-logind[1701]: Session 27 logged out. Waiting for processes to exit. Jan 14 13:10:18.235743 systemd-logind[1701]: Removed session 27. Jan 14 13:10:18.297112 kubelet[3431]: I0114 13:10:18.297074 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-host-proc-sys-kernel\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297112 kubelet[3431]: I0114 13:10:18.297114 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-etc-cni-netd\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297338 kubelet[3431]: I0114 13:10:18.297135 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-clustermesh-secrets\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297338 kubelet[3431]: I0114 
13:10:18.297155 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8cvd\" (UniqueName: \"kubernetes.io/projected/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-kube-api-access-q8cvd\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297338 kubelet[3431]: I0114 13:10:18.297176 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-bpf-maps\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297338 kubelet[3431]: I0114 13:10:18.297194 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-cni-path\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297338 kubelet[3431]: I0114 13:10:18.297212 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-cilium-ipsec-secrets\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297338 kubelet[3431]: I0114 13:10:18.297235 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-cilium-cgroup\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297568 kubelet[3431]: I0114 13:10:18.297253 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-lib-modules\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297568 kubelet[3431]: I0114 13:10:18.297272 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-host-proc-sys-net\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297568 kubelet[3431]: I0114 13:10:18.297294 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-hubble-tls\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297568 kubelet[3431]: I0114 13:10:18.297319 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-xtables-lock\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297568 kubelet[3431]: I0114 13:10:18.297342 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-cilium-run\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297568 kubelet[3431]: I0114 13:10:18.297366 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-hostproc\") pod \"cilium-mwrvr\" (UID: 
\"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.297750 kubelet[3431]: I0114 13:10:18.297389 3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05741fbd-8ef0-43fc-9570-0c5ebc3a8d34-cilium-config-path\") pod \"cilium-mwrvr\" (UID: \"05741fbd-8ef0-43fc-9570-0c5ebc3a8d34\") " pod="kube-system/cilium-mwrvr" Jan 14 13:10:18.353851 systemd[1]: Started sshd@25-10.200.8.4:22-10.200.16.10:51770.service - OpenSSH per-connection server daemon (10.200.16.10:51770). Jan 14 13:10:18.452986 containerd[1723]: time="2025-01-14T13:10:18.452944926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwrvr,Uid:05741fbd-8ef0-43fc-9570-0c5ebc3a8d34,Namespace:kube-system,Attempt:0,}" Jan 14 13:10:18.514487 containerd[1723]: time="2025-01-14T13:10:18.514302542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:10:18.514487 containerd[1723]: time="2025-01-14T13:10:18.514376443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:10:18.514487 containerd[1723]: time="2025-01-14T13:10:18.514399744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:10:18.515395 containerd[1723]: time="2025-01-14T13:10:18.515306561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:10:18.540776 systemd[1]: Started cri-containerd-90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671.scope - libcontainer container 90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671. 
Jan 14 13:10:18.564133 containerd[1723]: time="2025-01-14T13:10:18.564076391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwrvr,Uid:05741fbd-8ef0-43fc-9570-0c5ebc3a8d34,Namespace:kube-system,Attempt:0,} returns sandbox id \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\"" Jan 14 13:10:18.567715 containerd[1723]: time="2025-01-14T13:10:18.567585258Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 13:10:18.615270 containerd[1723]: time="2025-01-14T13:10:18.615216267Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d4cf155026be1c084d790f3f82b47382009197d261b5871856134da691550db\"" Jan 14 13:10:18.616030 containerd[1723]: time="2025-01-14T13:10:18.615999481Z" level=info msg="StartContainer for \"7d4cf155026be1c084d790f3f82b47382009197d261b5871856134da691550db\"" Jan 14 13:10:18.645792 systemd[1]: Started cri-containerd-7d4cf155026be1c084d790f3f82b47382009197d261b5871856134da691550db.scope - libcontainer container 7d4cf155026be1c084d790f3f82b47382009197d261b5871856134da691550db. Jan 14 13:10:18.676790 containerd[1723]: time="2025-01-14T13:10:18.676584937Z" level=info msg="StartContainer for \"7d4cf155026be1c084d790f3f82b47382009197d261b5871856134da691550db\" returns successfully" Jan 14 13:10:18.682179 systemd[1]: cri-containerd-7d4cf155026be1c084d790f3f82b47382009197d261b5871856134da691550db.scope: Deactivated successfully. 
Jan 14 13:10:18.751654 containerd[1723]: time="2025-01-14T13:10:18.751422864Z" level=info msg="shim disconnected" id=7d4cf155026be1c084d790f3f82b47382009197d261b5871856134da691550db namespace=k8s.io Jan 14 13:10:18.751654 containerd[1723]: time="2025-01-14T13:10:18.751492666Z" level=warning msg="cleaning up after shim disconnected" id=7d4cf155026be1c084d790f3f82b47382009197d261b5871856134da691550db namespace=k8s.io Jan 14 13:10:18.751654 containerd[1723]: time="2025-01-14T13:10:18.751501566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:10:19.000070 sshd[5213]: Accepted publickey for core from 10.200.16.10 port 51770 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:10:19.001541 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:10:19.006710 systemd-logind[1701]: New session 28 of user core. Jan 14 13:10:19.009767 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 14 13:10:19.448306 sshd[5321]: Connection closed by 10.200.16.10 port 51770 Jan 14 13:10:19.449057 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Jan 14 13:10:19.452093 systemd[1]: sshd@25-10.200.8.4:22-10.200.16.10:51770.service: Deactivated successfully. Jan 14 13:10:19.454428 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 13:10:19.456466 systemd-logind[1701]: Session 28 logged out. Waiting for processes to exit. Jan 14 13:10:19.457651 systemd-logind[1701]: Removed session 28. Jan 14 13:10:19.561792 systemd[1]: Started sshd@26-10.200.8.4:22-10.200.16.10:51780.service - OpenSSH per-connection server daemon (10.200.16.10:51780). 
Jan 14 13:10:19.713682 containerd[1723]: time="2025-01-14T13:10:19.713410511Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 13:10:19.769074 containerd[1723]: time="2025-01-14T13:10:19.769026172Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547\"" Jan 14 13:10:19.769763 containerd[1723]: time="2025-01-14T13:10:19.769586183Z" level=info msg="StartContainer for \"c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547\"" Jan 14 13:10:19.805766 systemd[1]: Started cri-containerd-c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547.scope - libcontainer container c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547. Jan 14 13:10:19.840159 containerd[1723]: time="2025-01-14T13:10:19.839433115Z" level=info msg="StartContainer for \"c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547\" returns successfully" Jan 14 13:10:19.843689 systemd[1]: cri-containerd-c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547.scope: Deactivated successfully. 
Jan 14 13:10:19.884361 containerd[1723]: time="2025-01-14T13:10:19.884269670Z" level=info msg="shim disconnected" id=c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547 namespace=k8s.io Jan 14 13:10:19.884361 containerd[1723]: time="2025-01-14T13:10:19.884354371Z" level=warning msg="cleaning up after shim disconnected" id=c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547 namespace=k8s.io Jan 14 13:10:19.884866 containerd[1723]: time="2025-01-14T13:10:19.884381572Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:10:20.212078 sshd[5327]: Accepted publickey for core from 10.200.16.10 port 51780 ssh2: RSA SHA256:M5nAcovbN21UJg+IuqsdYp1Y8uRpqNPaQvfcGTOPdoU Jan 14 13:10:20.213538 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:10:20.217657 systemd-logind[1701]: New session 29 of user core. Jan 14 13:10:20.221756 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 13:10:20.407894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1c4f9ae58b521ad70f1d8fe8c0739a54a03f4972794da61f3d8b53702474547-rootfs.mount: Deactivated successfully. 
Jan 14 13:10:20.718332 containerd[1723]: time="2025-01-14T13:10:20.718090472Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 13:10:20.769488 containerd[1723]: time="2025-01-14T13:10:20.769428951Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da\"" Jan 14 13:10:20.770329 containerd[1723]: time="2025-01-14T13:10:20.770151265Z" level=info msg="StartContainer for \"746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da\"" Jan 14 13:10:20.813087 systemd[1]: Started cri-containerd-746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da.scope - libcontainer container 746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da. Jan 14 13:10:20.847621 systemd[1]: cri-containerd-746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da.scope: Deactivated successfully. 
Jan 14 13:10:20.851117 containerd[1723]: time="2025-01-14T13:10:20.851069008Z" level=info msg="StartContainer for \"746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da\" returns successfully" Jan 14 13:10:20.888349 containerd[1723]: time="2025-01-14T13:10:20.888245417Z" level=info msg="shim disconnected" id=746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da namespace=k8s.io Jan 14 13:10:20.888349 containerd[1723]: time="2025-01-14T13:10:20.888329119Z" level=warning msg="cleaning up after shim disconnected" id=746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da namespace=k8s.io Jan 14 13:10:20.888349 containerd[1723]: time="2025-01-14T13:10:20.888342419Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:10:21.135133 kubelet[3431]: I0114 13:10:21.134641 3431 setters.go:580] "Node became not ready" node="ci-4186.1.0-a-847249f34f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-14T13:10:21Z","lastTransitionTime":"2025-01-14T13:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 14 13:10:21.408263 systemd[1]: run-containerd-runc-k8s.io-746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da-runc.Z5ewGd.mount: Deactivated successfully. Jan 14 13:10:21.408397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-746ec095f26693bafbef666f02f148baa1032c77005b869914854db364dc95da-rootfs.mount: Deactivated successfully. 
Jan 14 13:10:21.723623 containerd[1723]: time="2025-01-14T13:10:21.723503247Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 13:10:21.774906 containerd[1723]: time="2025-01-14T13:10:21.774864027Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09\"" Jan 14 13:10:21.775651 containerd[1723]: time="2025-01-14T13:10:21.775417037Z" level=info msg="StartContainer for \"ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09\"" Jan 14 13:10:21.806772 systemd[1]: Started cri-containerd-ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09.scope - libcontainer container ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09. Jan 14 13:10:21.831103 systemd[1]: cri-containerd-ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09.scope: Deactivated successfully. 
Jan 14 13:10:21.836755 containerd[1723]: time="2025-01-14T13:10:21.836715207Z" level=info msg="StartContainer for \"ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09\" returns successfully" Jan 14 13:10:21.869973 containerd[1723]: time="2025-01-14T13:10:21.869900839Z" level=info msg="shim disconnected" id=ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09 namespace=k8s.io Jan 14 13:10:21.869973 containerd[1723]: time="2025-01-14T13:10:21.869968741Z" level=warning msg="cleaning up after shim disconnected" id=ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09 namespace=k8s.io Jan 14 13:10:21.869973 containerd[1723]: time="2025-01-14T13:10:21.869979441Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:10:22.393012 kubelet[3431]: E0114 13:10:22.392948 3431 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 13:10:22.407596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee21f5b895afb1980bdc662afae6df070cd1d1d7ff47d866652699f612506b09-rootfs.mount: Deactivated successfully. 
Jan 14 13:10:22.728731 containerd[1723]: time="2025-01-14T13:10:22.728575816Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 14 13:10:22.779855 containerd[1723]: time="2025-01-14T13:10:22.779809993Z" level=info msg="CreateContainer within sandbox \"90fa6931a2437eabaaa140c62d90dd2a95da30c823d552cff11edeebc8674671\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d99923b56db2fde6023b85e49752a8fc9806b05fb4fad16528f13d21a85b847\"" Jan 14 13:10:22.780638 containerd[1723]: time="2025-01-14T13:10:22.780521207Z" level=info msg="StartContainer for \"4d99923b56db2fde6023b85e49752a8fc9806b05fb4fad16528f13d21a85b847\"" Jan 14 13:10:22.815766 systemd[1]: Started cri-containerd-4d99923b56db2fde6023b85e49752a8fc9806b05fb4fad16528f13d21a85b847.scope - libcontainer container 4d99923b56db2fde6023b85e49752a8fc9806b05fb4fad16528f13d21a85b847. Jan 14 13:10:22.858080 containerd[1723]: time="2025-01-14T13:10:22.858030285Z" level=info msg="StartContainer for \"4d99923b56db2fde6023b85e49752a8fc9806b05fb4fad16528f13d21a85b847\" returns successfully" Jan 14 13:10:23.266747 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 14 13:10:24.771749 systemd[1]: run-containerd-runc-k8s.io-4d99923b56db2fde6023b85e49752a8fc9806b05fb4fad16528f13d21a85b847-runc.bnFsNh.mount: Deactivated successfully. 
Jan 14 13:10:26.167160 systemd-networkd[1334]: lxc_health: Link UP Jan 14 13:10:26.171928 systemd-networkd[1334]: lxc_health: Gained carrier Jan 14 13:10:26.490114 kubelet[3431]: I0114 13:10:26.488869 3431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mwrvr" podStartSLOduration=8.488842031 podStartE2EDuration="8.488842031s" podCreationTimestamp="2025-01-14 13:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:10:23.749804193 +0000 UTC m=+196.664820794" watchObservedRunningTime="2025-01-14 13:10:26.488842031 +0000 UTC m=+199.403858632" Jan 14 13:10:27.207946 kubelet[3431]: E0114 13:10:27.207881 3431 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-cxnnc" podUID="f845eb37-db3a-4c62-b30f-e8f55b3e54c4" Jan 14 13:10:27.260808 systemd-networkd[1334]: lxc_health: Gained IPv6LL Jan 14 13:10:29.186904 systemd[1]: run-containerd-runc-k8s.io-4d99923b56db2fde6023b85e49752a8fc9806b05fb4fad16528f13d21a85b847-runc.FTvMUe.mount: Deactivated successfully. Jan 14 13:10:31.490488 sshd[5390]: Connection closed by 10.200.16.10 port 51780 Jan 14 13:10:31.491433 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Jan 14 13:10:31.495069 systemd[1]: sshd@26-10.200.8.4:22-10.200.16.10:51780.service: Deactivated successfully. Jan 14 13:10:31.497977 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 13:10:31.499653 systemd-logind[1701]: Session 29 logged out. Waiting for processes to exit. Jan 14 13:10:31.500723 systemd-logind[1701]: Removed session 29.