Feb 13 15:58:56.081473 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025
Feb 13 15:58:56.081501 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:58:56.081510 kernel: BIOS-provided physical RAM map:
Feb 13 15:58:56.081519 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:58:56.081525 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 13 15:58:56.081531 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 13 15:58:56.081540 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 13 15:58:56.081547 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 13 15:58:56.081556 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 13 15:58:56.081564 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 13 15:58:56.081607 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 13 15:58:56.081616 kernel: printk: bootconsole [earlyser0] enabled
Feb 13 15:58:56.081623 kernel: NX (Execute Disable) protection: active
Feb 13 15:58:56.081629 kernel: APIC: Static calls initialized
Feb 13 15:58:56.081642 kernel: efi: EFI v2.7 by Microsoft
Feb 13 15:58:56.081650 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee81a98 RNG=0x3ffd1018
Feb 13 15:58:56.081658 kernel: random: crng init done
Feb 13 15:58:56.081667 kernel: secureboot: Secure boot disabled
Feb 13 15:58:56.081674 kernel: SMBIOS 3.1.0 present.
Feb 13 15:58:56.081682 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Feb 13 15:58:56.081691 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 13 15:58:56.081698 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 13 15:58:56.081710 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0
Feb 13 15:58:56.081717 kernel: Hyper-V: Nested features: 0x1e0101
Feb 13 15:58:56.081729 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 13 15:58:56.081736 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 13 15:58:56.081746 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 15:58:56.081753 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 13 15:58:56.081764 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 13 15:58:56.081771 kernel: tsc: Detected 2593.906 MHz processor
Feb 13 15:58:56.081780 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:58:56.081790 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:58:56.081797 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 13 15:58:56.081809 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:58:56.081817 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:58:56.081824 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 13 15:58:56.081833 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 13 15:58:56.081841 kernel: Using GB pages for direct mapping
Feb 13 15:58:56.081849 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:58:56.081858 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 13 15:58:56.081870 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081882 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081890 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 13 15:58:56.081897 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 13 15:58:56.081908 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081916 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081923 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081936 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081943 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081953 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081961 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 15:58:56.081968 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 13 15:58:56.081979 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 13 15:58:56.081986 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 13 15:58:56.081995 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 13 15:58:56.082005 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 13 15:58:56.082014 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 13 15:58:56.082024 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 13 15:58:56.082032 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 13 15:58:56.082039 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 13 15:58:56.082050 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 13 15:58:56.082058 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:58:56.082067 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:58:56.082077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 13 15:58:56.082084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 13 15:58:56.082096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 13 15:58:56.082104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 13 15:58:56.082112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 13 15:58:56.082123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 13 15:58:56.082130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 13 15:58:56.082138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 13 15:58:56.082148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 13 15:58:56.082157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 13 15:58:56.082168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 13 15:58:56.082178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 13 15:58:56.082186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 13 15:58:56.082196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 13 15:58:56.082205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 13 15:58:56.082214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 13 15:58:56.082224 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 13 15:58:56.082231 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 13 15:58:56.082240 kernel: Zone ranges:
Feb 13 15:58:56.082252 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:58:56.082259 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 15:58:56.082269 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 15:58:56.082277 kernel: Movable zone start for each node
Feb 13 15:58:56.082285 kernel: Early memory node ranges
Feb 13 15:58:56.082295 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:58:56.082302 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 13 15:58:56.082310 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 13 15:58:56.082320 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 13 15:58:56.082330 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 13 15:58:56.082341 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:58:56.082348 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:58:56.082356 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 13 15:58:56.082366 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 13 15:58:56.082373 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 13 15:58:56.082381 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:58:56.082391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:58:56.082398 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:58:56.082411 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 13 15:58:56.082419 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:58:56.082427 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 13 15:58:56.082437 kernel: Booting paravirtualized kernel on Hyper-V
Feb 13 15:58:56.082445 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:58:56.082453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:58:56.082463 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:58:56.082471 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:58:56.082480 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:58:56.082490 kernel: Hyper-V: PV spinlocks enabled
Feb 13 15:58:56.082498 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:58:56.082509 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:58:56.082517 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:58:56.082525 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 15:58:56.082535 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:58:56.082542 kernel: Fallback order for Node 0: 0
Feb 13 15:58:56.082552 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 13 15:58:56.082563 kernel: Policy zone: Normal
Feb 13 15:58:56.082588 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:58:56.082599 kernel: software IO TLB: area num 2.
Feb 13 15:58:56.082613 kernel: Memory: 8074984K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312220K reserved, 0K cma-reserved)
Feb 13 15:58:56.082622 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:58:56.082632 kernel: ftrace: allocating 37890 entries in 149 pages
Feb 13 15:58:56.082642 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:58:56.082652 kernel: Dynamic Preempt: voluntary
Feb 13 15:58:56.082661 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:58:56.082670 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:58:56.082681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:58:56.082691 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:58:56.082701 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:58:56.082710 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:58:56.082718 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:58:56.082729 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:58:56.082737 kernel: Using NULL legacy PIC
Feb 13 15:58:56.082750 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 13 15:58:56.082758 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:58:56.082766 kernel: Console: colour dummy device 80x25
Feb 13 15:58:56.082777 kernel: printk: console [tty1] enabled
Feb 13 15:58:56.082785 kernel: printk: console [ttyS0] enabled
Feb 13 15:58:56.082795 kernel: printk: bootconsole [earlyser0] disabled
Feb 13 15:58:56.082804 kernel: ACPI: Core revision 20230628
Feb 13 15:58:56.082812 kernel: Failed to register legacy timer interrupt
Feb 13 15:58:56.082822 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:58:56.082833 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 15:58:56.082842 kernel: Hyper-V: Using IPI hypercalls
Feb 13 15:58:56.082851 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Feb 13 15:58:56.082859 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Feb 13 15:58:56.082870 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Feb 13 15:58:56.082878 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Feb 13 15:58:56.082887 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Feb 13 15:58:56.082897 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Feb 13 15:58:56.082905 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 13 15:58:56.082918 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:58:56.082926 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:58:56.082935 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:58:56.082945 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:58:56.082953 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:58:56.082962 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:58:56.082971 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:58:56.082979 kernel: RETBleed: Vulnerable
Feb 13 15:58:56.082990 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:58:56.082997 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:58:56.083009 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:58:56.083018 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:58:56.083028 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:58:56.083037 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:58:56.083048 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:58:56.083056 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:58:56.083066 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:58:56.083076 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:58:56.083085 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 13 15:58:56.083094 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 13 15:58:56.083102 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 13 15:58:56.083115 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 13 15:58:56.083123 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:58:56.083132 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:58:56.083141 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:58:56.083149 kernel: landlock: Up and running.
Feb 13 15:58:56.083159 kernel: SELinux: Initializing.
Feb 13 15:58:56.083168 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:58:56.083176 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:58:56.083187 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:58:56.083195 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:58:56.083204 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:58:56.083216 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:58:56.083224 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:58:56.083235 kernel: signal: max sigframe size: 3632
Feb 13 15:58:56.083243 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:58:56.083253 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:58:56.083262 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:58:56.083270 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:58:56.083281 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:58:56.083289 kernel: .... node #0, CPUs: #1
Feb 13 15:58:56.083302 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 13 15:58:56.083313 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:58:56.083324 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:58:56.083334 kernel: smpboot: Max logical packages: 1
Feb 13 15:58:56.083346 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 13 15:58:56.083364 kernel: devtmpfs: initialized
Feb 13 15:58:56.083377 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:58:56.083389 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 13 15:58:56.083405 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:58:56.083414 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:58:56.083426 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:58:56.083435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:58:56.083445 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:58:56.083453 kernel: audit: type=2000 audit(1739462334.028:1): state=initialized audit_enabled=0 res=1
Feb 13 15:58:56.083462 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:58:56.083472 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:58:56.083480 kernel: cpuidle: using governor menu
Feb 13 15:58:56.083490 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:58:56.083498 kernel: dca service started, version 1.12.1
Feb 13 15:58:56.083506 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Feb 13 15:58:56.083514 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:58:56.083522 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:58:56.083530 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:58:56.083538 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:58:56.083546 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:58:56.083553 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:58:56.083563 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:58:56.083585 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:58:56.083593 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:58:56.083601 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:58:56.083613 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:58:56.083622 kernel: ACPI: Interpreter enabled
Feb 13 15:58:56.083633 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:58:56.083644 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:58:56.083653 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:58:56.083669 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 15:58:56.083682 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 13 15:58:56.083696 kernel: iommu: Default domain type: Translated
Feb 13 15:58:56.083709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:58:56.083722 kernel: efivars: Registered efivars operations
Feb 13 15:58:56.083735 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:58:56.083749 kernel: PCI: System does not support PCI
Feb 13 15:58:56.083762 kernel: vgaarb: loaded
Feb 13 15:58:56.083776 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 13 15:58:56.083793 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:58:56.083807 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:58:56.083820 kernel: pnp: PnP ACPI init
Feb 13 15:58:56.083834 kernel: pnp: PnP ACPI: found 3 devices
Feb 13 15:58:56.083847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:58:56.083861 kernel: NET: Registered PF_INET protocol family
Feb 13 15:58:56.083873 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:58:56.083887 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 15:58:56.083900 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:58:56.083915 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:58:56.083931 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 15:58:56.083943 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 15:58:56.083957 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:58:56.083970 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:58:56.083983 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:58:56.083997 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:58:56.084009 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:58:56.084022 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 15:58:56.084042 kernel: software IO TLB: mapped [mem 0x000000003ad8c000-0x000000003ed8c000] (64MB)
Feb 13 15:58:56.084056 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:58:56.084070 kernel: Initialise system trusted keyrings
Feb 13 15:58:56.084081 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 15:58:56.084093 kernel: Key type asymmetric registered
Feb 13 15:58:56.084106 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:58:56.084119 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:58:56.084132 kernel: io scheduler mq-deadline registered
Feb 13 15:58:56.084145 kernel: io scheduler kyber registered
Feb 13 15:58:56.084163 kernel: io scheduler bfq registered
Feb 13 15:58:56.084176 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:58:56.084189 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:58:56.084203 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:58:56.084219 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 15:58:56.084233 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 15:58:56.084417 kernel: rtc_cmos 00:02: registered as rtc0
Feb 13 15:58:56.084555 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T15:58:55 UTC (1739462335)
Feb 13 15:58:56.084711 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 13 15:58:56.084730 kernel: intel_pstate: CPU model not supported
Feb 13 15:58:56.084744 kernel: efifb: probing for efifb
Feb 13 15:58:56.084756 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 15:58:56.084771 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 15:58:56.084783 kernel: efifb: scrolling: redraw
Feb 13 15:58:56.084795 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:58:56.084806 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 15:58:56.084820 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:58:56.084838 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:58:56.084853 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:58:56.084867 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:58:56.084881 kernel: Segment Routing with IPv6
Feb 13 15:58:56.084895 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:58:56.084908 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:58:56.084920 kernel: Key type dns_resolver registered
Feb 13 15:58:56.084933 kernel: IPI shorthand broadcast: enabled
Feb 13 15:58:56.084946 kernel: sched_clock: Marking stable (838002700, 49746600)->(1110677000, -222927700)
Feb 13 15:58:56.084963 kernel: registered taskstats version 1
Feb 13 15:58:56.084977 kernel: Loading compiled-in X.509 certificates
Feb 13 15:58:56.084990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 15:58:56.085005 kernel: Key type .fscrypt registered
Feb 13 15:58:56.085018 kernel: Key type fscrypt-provisioning registered
Feb 13 15:58:56.085032 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:58:56.085046 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:58:56.085060 kernel: ima: No architecture policies found
Feb 13 15:58:56.085078 kernel: clk: Disabling unused clocks
Feb 13 15:58:56.085092 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 15:58:56.085106 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:58:56.085120 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 15:58:56.085134 kernel: Run /init as init process
Feb 13 15:58:56.085148 kernel: with arguments:
Feb 13 15:58:56.085162 kernel: /init
Feb 13 15:58:56.085176 kernel: with environment:
Feb 13 15:58:56.085189 kernel: HOME=/
Feb 13 15:58:56.085203 kernel: TERM=linux
Feb 13 15:58:56.085218 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:58:56.085236 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:58:56.085254 systemd[1]: Detected virtualization microsoft.
Feb 13 15:58:56.085269 systemd[1]: Detected architecture x86-64.
Feb 13 15:58:56.085283 systemd[1]: Running in initrd.
Feb 13 15:58:56.085297 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:58:56.085312 systemd[1]: Hostname set to .
Feb 13 15:58:56.085330 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:58:56.085345 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:58:56.085360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:58:56.085375 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:58:56.085391 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:58:56.085406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:58:56.085421 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:58:56.085436 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:58:56.085456 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:58:56.085471 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:58:56.085486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:58:56.085501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:58:56.085516 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:58:56.085531 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:58:56.085545 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:58:56.085563 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:58:56.085591 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:58:56.085605 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:58:56.085621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:58:56.085636 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:58:56.085651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:58:56.085666 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:58:56.085682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:58:56.085696 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:58:56.085714 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:58:56.085730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:58:56.085745 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:58:56.085759 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:58:56.085774 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:58:56.085789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:58:56.085804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:58:56.085844 systemd-journald[177]: Collecting audit messages is disabled.
Feb 13 15:58:56.085882 systemd-journald[177]: Journal started
Feb 13 15:58:56.085914 systemd-journald[177]: Runtime Journal (/run/log/journal/acdef269e6604ead9f2214f519fcb5c4) is 8.0M, max 158.8M, 150.8M free.
Feb 13 15:58:56.089703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:58:56.096688 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:58:56.099857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:58:56.100993 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:58:56.110318 systemd-modules-load[178]: Inserted module 'overlay'
Feb 13 15:58:56.113876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:58:56.129767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:58:56.137723 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:58:56.146618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:58:56.158223 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:58:56.160587 kernel: Bridge firewalling registered
Feb 13 15:58:56.160581 systemd-modules-load[178]: Inserted module 'br_netfilter'
Feb 13 15:58:56.165781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:58:56.182766 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:58:56.188513 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:58:56.191274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:58:56.194736 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:58:56.204791 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:58:56.213777 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:58:56.222789 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:58:56.225498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:58:56.240149 dracut-cmdline[211]: dracut-dracut-053
Feb 13 15:58:56.240726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:58:56.247786 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:58:56.297852 systemd-resolved[217]: Positive Trust Anchors:
Feb 13 15:58:56.300465 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:58:56.300525 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:58:56.324131 systemd-resolved[217]: Defaulting to hostname 'linux'.
Feb 13 15:58:56.327697 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:58:56.330718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:58:56.344589 kernel: SCSI subsystem initialized
Feb 13 15:58:56.355588 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:58:56.366592 kernel: iscsi: registered transport (tcp)
Feb 13 15:58:56.387607 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:58:56.387679 kernel: QLogic iSCSI HBA Driver
Feb 13 15:58:56.423285 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:58:56.434773 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:58:56.463452 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:58:56.463542 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:58:56.466651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:58:56.506593 kernel: raid6: avx512x4 gen() 18303 MB/s
Feb 13 15:58:56.525583 kernel: raid6: avx512x2 gen() 18281 MB/s
Feb 13 15:58:56.544583 kernel: raid6: avx512x1 gen() 18322 MB/s
Feb 13 15:58:56.563585 kernel: raid6: avx2x4 gen() 18237 MB/s
Feb 13 15:58:56.582583 kernel: raid6: avx2x2 gen() 18284 MB/s
Feb 13 15:58:56.602405 kernel: raid6: avx2x1 gen() 13732 MB/s
Feb 13 15:58:56.602451 kernel: raid6: using algorithm avx512x1 gen() 18322 MB/s
Feb 13 15:58:56.624401 kernel: raid6: .... xor() 26448 MB/s, rmw enabled
Feb 13 15:58:56.624456 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:58:56.646596 kernel: xor: automatically using best checksumming function avx
Feb 13 15:58:56.786605 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:58:56.796451 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:58:56.806228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:58:56.819472 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Feb 13 15:58:56.823931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:58:56.839746 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:58:56.853507 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Feb 13 15:58:56.879553 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:58:56.890730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:58:56.932296 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:58:56.945819 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:58:56.974557 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:58:56.982236 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:58:56.987010 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:58:56.995354 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:58:57.006504 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:58:57.025588 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:58:57.030815 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:58:57.048186 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:58:57.055931 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:58:57.060023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:58:57.061236 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:58:57.069096 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:58:57.075566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:58:57.076746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:58:57.084312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:58:57.094375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:58:57.101776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:58:57.110132 kernel: hv_vmbus: Vmbus version:5.2
Feb 13 15:58:57.101883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:58:57.116914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:58:57.145280 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:58:57.159459 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 15:58:57.159512 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 15:58:57.161751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:58:57.171631 kernel: PTP clock support registered
Feb 13 15:58:57.178593 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:58:57.191191 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 15:58:57.195259 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 15:58:57.201589 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 15:58:57.209262 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 15:58:57.209296 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 15:58:57.209312 kernel: hv_vmbus: registering driver hv_utils
Feb 13 15:58:57.209688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:58:57.913085 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 15:58:57.913117 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 13 15:58:57.913141 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 15:58:57.913153 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 15:58:57.913165 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 13 15:58:57.913178 kernel: scsi host0: storvsc_host_t
Feb 13 15:58:57.913343 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 13 15:58:57.913368 kernel: scsi host1: storvsc_host_t
Feb 13 15:58:57.908036 systemd-resolved[217]: Clock change detected. Flushing caches.
Feb 13 15:58:57.921774 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 13 15:58:57.925135 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 13 15:58:57.947860 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 15:58:57.950932 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:58:57.950955 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 15:58:57.958567 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 15:58:57.971552 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 15:58:57.971680 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 15:58:57.971782 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 15:58:57.971882 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 15:58:57.971978 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:58:57.971990 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 15:58:58.073722 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: VF slot 1 added
Feb 13 15:58:58.083660 kernel: hv_vmbus: registering driver hv_pci
Feb 13 15:58:58.088337 kernel: hv_pci 458487d7-ae16-45e9-a4b1-e117919f9a03: PCI VMBus probing: Using version 0x10004
Feb 13 15:58:58.130517 kernel: hv_pci 458487d7-ae16-45e9-a4b1-e117919f9a03: PCI host bridge to bus ae16:00
Feb 13 15:58:58.130807 kernel: pci_bus ae16:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 13 15:58:58.130998 kernel: pci_bus ae16:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 15:58:58.131595 kernel: pci ae16:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 13 15:58:58.131796 kernel: pci ae16:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 13 15:58:58.131986 kernel: pci ae16:00:02.0: enabling Extended Tags
Feb 13 15:58:58.132185 kernel: pci ae16:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ae16:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 13 15:58:58.132353 kernel: pci_bus ae16:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 15:58:58.132502 kernel: pci ae16:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 13 15:58:58.294044 kernel: mlx5_core ae16:00:02.0: enabling device (0000 -> 0002)
Feb 13 15:58:58.532472 kernel: mlx5_core ae16:00:02.0: firmware version: 14.30.5000
Feb 13 15:58:58.532683 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: VF registering: eth1
Feb 13 15:58:58.532848 kernel: mlx5_core ae16:00:02.0 eth1: joined to eth0
Feb 13 15:58:58.533020 kernel: mlx5_core ae16:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Feb 13 15:58:58.533207 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (450)
Feb 13 15:58:58.439306 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 15:58:58.538230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 15:58:58.546668 kernel: mlx5_core ae16:00:02.0 enP44566s1: renamed from eth1
Feb 13 15:58:58.554678 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (445)
Feb 13 15:58:58.577555 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 15:58:58.580977 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 15:58:58.597622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 15:58:58.611226 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:58:58.623127 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:58:59.635827 disk-uuid[609]: The operation has completed successfully.
Feb 13 15:58:59.639245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:58:59.717555 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:58:59.717683 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:58:59.739275 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:58:59.744927 sh[695]: Success
Feb 13 15:58:59.775528 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:58:59.990968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:59:00.008241 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:59:00.012571 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:59:00.030125 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:59:00.030180 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:59:00.035947 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:59:00.038772 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:59:00.041356 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:59:00.356925 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:59:00.360502 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:59:00.375346 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:59:00.382292 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:59:00.405941 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:59:00.405999 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:59:00.406019 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:59:00.427130 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:59:00.438045 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:59:00.443681 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:59:00.452399 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:59:00.462307 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:59:00.477476 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:59:00.489342 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:59:00.511232 systemd-networkd[879]: lo: Link UP
Feb 13 15:59:00.511242 systemd-networkd[879]: lo: Gained carrier
Feb 13 15:59:00.513310 systemd-networkd[879]: Enumeration completed
Feb 13 15:59:00.513559 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:59:00.514725 systemd[1]: Reached target network.target - Network.
Feb 13 15:59:00.516742 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:59:00.516746 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:59:00.577125 kernel: mlx5_core ae16:00:02.0 enP44566s1: Link up
Feb 13 15:59:00.615775 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: Data path switched to VF: enP44566s1
Feb 13 15:59:00.615309 systemd-networkd[879]: enP44566s1: Link UP
Feb 13 15:59:00.615457 systemd-networkd[879]: eth0: Link UP
Feb 13 15:59:00.615657 systemd-networkd[879]: eth0: Gained carrier
Feb 13 15:59:00.615671 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:59:00.622825 systemd-networkd[879]: enP44566s1: Gained carrier
Feb 13 15:59:00.666158 systemd-networkd[879]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 13 15:59:01.419835 ignition[860]: Ignition 2.20.0
Feb 13 15:59:01.419847 ignition[860]: Stage: fetch-offline
Feb 13 15:59:01.419890 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:59:01.419901 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:59:01.420008 ignition[860]: parsed url from cmdline: ""
Feb 13 15:59:01.420013 ignition[860]: no config URL provided
Feb 13 15:59:01.420020 ignition[860]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:59:01.420031 ignition[860]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:59:01.420038 ignition[860]: failed to fetch config: resource requires networking
Feb 13 15:59:01.422249 ignition[860]: Ignition finished successfully
Feb 13 15:59:01.439865 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:59:01.450309 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:59:01.463180 ignition[887]: Ignition 2.20.0
Feb 13 15:59:01.463193 ignition[887]: Stage: fetch
Feb 13 15:59:01.463391 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:59:01.463405 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:59:01.463510 ignition[887]: parsed url from cmdline: ""
Feb 13 15:59:01.463514 ignition[887]: no config URL provided
Feb 13 15:59:01.463518 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:59:01.463526 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:59:01.463552 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 15:59:01.536494 ignition[887]: GET result: OK
Feb 13 15:59:01.536644 ignition[887]: config has been read from IMDS userdata
Feb 13 15:59:01.536682 ignition[887]: parsing config with SHA512: 2663c1513a0fb138b89d02c8fef7b5e681f9f1cf2c73c09699f671d528af0d4b9007393f2f31638bf0e56997f9052b78b8ce496c2099b88860194a7c19d0c0ff
Feb 13 15:59:01.542563 unknown[887]: fetched base config from "system"
Feb 13 15:59:01.542579 unknown[887]: fetched base config from "system"
Feb 13 15:59:01.542590 unknown[887]: fetched user config from "azure"
Feb 13 15:59:01.549467 ignition[887]: fetch: fetch complete
Feb 13 15:59:01.549477 ignition[887]: fetch: fetch passed
Feb 13 15:59:01.549537 ignition[887]: Ignition finished successfully
Feb 13 15:59:01.553627 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:59:01.563269 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:59:01.577769 ignition[893]: Ignition 2.20.0
Feb 13 15:59:01.577780 ignition[893]: Stage: kargs
Feb 13 15:59:01.577984 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:59:01.577998 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:59:01.578830 ignition[893]: kargs: kargs passed
Feb 13 15:59:01.583013 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:59:01.578874 ignition[893]: Ignition finished successfully
Feb 13 15:59:01.598280 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:59:01.610143 ignition[899]: Ignition 2.20.0
Feb 13 15:59:01.610154 ignition[899]: Stage: disks
Feb 13 15:59:01.612036 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:59:01.610362 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:59:01.616389 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:59:01.610375 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:59:01.620812 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:59:01.611244 ignition[899]: disks: disks passed
Feb 13 15:59:01.623596 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:59:01.611286 ignition[899]: Ignition finished successfully
Feb 13 15:59:01.628015 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:59:01.646100 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:59:01.657291 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:59:01.727908 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 15:59:01.734530 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:59:01.749619 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:59:01.839130 kernel: EXT4-fs (sda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:59:01.839320 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:59:01.841099 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:59:01.875298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:59:01.880039 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:59:01.888272 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:59:01.895674 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (918)
Feb 13 15:59:01.897190 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:59:01.897234 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:59:01.911695 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:59:01.919264 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:59:01.919300 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:59:01.919316 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:59:01.923121 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:59:01.927315 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:59:01.933920 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:59:02.062367 systemd-networkd[879]: enP44566s1: Gained IPv6LL
Feb 13 15:59:02.638397 systemd-networkd[879]: eth0: Gained IPv6LL
Feb 13 15:59:02.666826 coreos-metadata[920]: Feb 13 15:59:02.666 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 15:59:02.673655 coreos-metadata[920]: Feb 13 15:59:02.673 INFO Fetch successful
Feb 13 15:59:02.676345 coreos-metadata[920]: Feb 13 15:59:02.673 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 15:59:02.688240 coreos-metadata[920]: Feb 13 15:59:02.688 INFO Fetch successful
Feb 13 15:59:02.705096 coreos-metadata[920]: Feb 13 15:59:02.703 INFO wrote hostname ci-4186.1.1-a-254057132e to /sysroot/etc/hostname
Feb 13 15:59:02.707704 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:59:02.712533 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:59:02.769690 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:59:02.777762 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:59:02.782609 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:59:03.603944 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:59:03.614208 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:59:03.623273 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:59:03.632668 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:59:03.631523 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:59:03.656436 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:59:03.661633 ignition[1038]: INFO : Ignition 2.20.0
Feb 13 15:59:03.661633 ignition[1038]: INFO : Stage: mount
Feb 13 15:59:03.661633 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:59:03.661633 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 15:59:03.673620 ignition[1038]: INFO : mount: mount passed
Feb 13 15:59:03.673620 ignition[1038]: INFO : Ignition finished successfully
Feb 13 15:59:03.664561 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:59:03.684181 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:59:03.691909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:59:03.710903 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049) Feb 13 15:59:03.710955 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:59:03.712117 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:59:03.716761 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:59:03.722119 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:59:03.723627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:59:03.744994 ignition[1066]: INFO : Ignition 2.20.0 Feb 13 15:59:03.744994 ignition[1066]: INFO : Stage: files Feb 13 15:59:03.749627 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:03.749627 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:03.749627 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:59:03.749627 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:59:03.749627 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:59:03.813556 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:59:03.817981 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:59:03.817981 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:59:03.814188 unknown[1066]: wrote ssh authorized keys file for user: core Feb 13 15:59:03.840833 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:59:03.845697 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:59:04.093641 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:59:04.228524 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:59:04.234059 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:59:04.234059 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:59:04.747470 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:59:04.867112 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 15:59:05.322994 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:59:05.606649 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:59:05.606649 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:59:05.697472 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: files passed Feb 13 15:59:05.706396 ignition[1066]: INFO : Ignition finished successfully Feb 13 15:59:05.699830 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:59:05.719331 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
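Throughout the files stage above, remote artifacts (the helm tarball, the cilium-cli archive, the kubernetes sysext image) are fetched with numbered attempts and written under /sysroot before the result file is recorded. The sketch below imitates only that "GET ...: attempt #N" / "GET result: OK" retry pattern in Python for illustration; it is not Ignition's implementation, and the attempt count, backoff, and destination path are arbitrary choices for the example.

import time
import urllib.request

def fetch_with_retries(url, dest, attempts=5, backoff=2.0):
    """Download url to dest, logging attempts in the same style as the files stage above."""
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
                out.write(resp.read())
            print("GET result: OK")
            return
        except OSError as err:
            print(f"GET error: {err}")
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff between attempts

# Example (URL taken from the log above, destination path is illustrative):
# fetch_with_retries("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
#                    "/tmp/helm-v3.13.2-linux-amd64.tar.gz")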
Feb 13 15:59:05.730255 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:59:05.742431 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:59:05.742518 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:59:05.762149 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:59:05.762149 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:59:05.771093 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:59:05.775497 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:59:05.777302 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:59:05.787354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:59:05.812472 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:59:05.812592 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:59:05.816848 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:59:05.817079 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:59:05.817564 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:59:05.820254 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:59:05.835458 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:59:05.852308 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:59:05.862515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:59:05.868320 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:59:05.869294 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:59:05.869634 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:59:05.869741 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:59:05.870969 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:59:05.871374 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:59:05.871759 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:59:05.872171 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:59:05.872551 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:59:05.872947 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:59:05.873342 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:59:05.873734 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:59:05.874141 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:59:05.874608 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:59:05.874957 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:59:05.875086 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:59:05.875780 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 15:59:05.876205 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:59:05.876542 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:59:05.979264 ignition[1118]: INFO : Ignition 2.20.0 Feb 13 15:59:05.979264 ignition[1118]: INFO : Stage: umount Feb 13 15:59:05.979264 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:05.979264 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:05.979264 ignition[1118]: INFO : umount: umount passed Feb 13 15:59:05.979264 ignition[1118]: INFO : Ignition finished successfully Feb 13 15:59:05.912542 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:59:05.915901 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:59:05.916060 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:59:05.921091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:59:05.921253 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:59:05.928526 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:59:05.930947 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:59:05.935473 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:59:05.935610 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:59:05.949180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:59:05.960871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:59:05.965200 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:59:05.967206 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:59:05.979451 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:59:05.979830 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:59:05.989747 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:59:05.989830 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:59:05.997031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:59:06.001044 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:59:06.005696 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:59:06.005790 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:59:06.009341 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:59:06.009394 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:59:06.011900 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:59:06.011943 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:59:06.016249 systemd[1]: Stopped target network.target - Network. Feb 13 15:59:06.020570 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:59:06.020627 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:59:06.029090 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:59:06.035433 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 15:59:06.035505 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:59:06.040877 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:59:06.043039 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:59:06.047815 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:59:06.047864 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:59:06.052346 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:59:06.052383 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:59:06.054853 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:59:06.054907 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:59:06.059817 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:59:06.059869 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:59:06.065269 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:59:06.071796 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:59:06.076099 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:59:06.082207 systemd-networkd[879]: eth0: DHCPv6 lease lost Feb 13 15:59:06.084986 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:59:06.085093 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:59:06.089724 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:59:06.089759 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:59:06.115376 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:59:06.123947 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:59:06.124020 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:59:06.135294 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:59:06.179548 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:59:06.179672 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:59:06.192549 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:59:06.193684 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:59:06.200145 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:59:06.200221 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:59:06.208778 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:59:06.208823 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:59:06.213498 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:59:06.213553 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:59:06.216637 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:59:06.216679 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:59:06.221072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:59:06.221136 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:59:06.239294 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Feb 13 15:59:06.244622 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:59:06.244684 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:59:06.247395 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:59:06.247456 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:59:06.252551 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:59:06.257683 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:59:06.270122 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: Data path switched from VF: enP44566s1 Feb 13 15:59:06.271743 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:59:06.271808 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:59:06.281409 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:59:06.281471 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:59:06.289613 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:59:06.289674 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:59:06.295223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:59:06.295277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:59:06.306032 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:59:06.308310 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:59:06.313088 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:59:06.315475 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:59:06.397752 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:59:06.397923 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:59:06.402906 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:59:06.409942 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:59:06.410015 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:59:06.421357 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:59:06.431096 systemd[1]: Switching root. 
Feb 13 15:59:06.515387 systemd-journald[177]: Journal stopped Feb 13 15:58:56.081473 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025 Feb 13 15:58:56.081501 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:58:56.081510 kernel: BIOS-provided physical RAM map: Feb 13 15:58:56.081519 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:58:56.081525 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 13 15:58:56.081531 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 13 15:58:56.081540 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 13 15:58:56.081547 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 13 15:58:56.081556 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 13 15:58:56.081564 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 13 15:58:56.081607 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 13 15:58:56.081616 kernel: printk: bootconsole [earlyser0] enabled Feb 13 15:58:56.081623 kernel: NX (Execute Disable) protection: active Feb 13 15:58:56.081629 kernel: APIC: Static calls initialized Feb 13 15:58:56.081642 kernel: efi: EFI v2.7 by Microsoft Feb 13 15:58:56.081650 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee81a98 RNG=0x3ffd1018 Feb 13 15:58:56.081658 kernel: random: crng init done Feb 13 15:58:56.081667 kernel: secureboot: Secure boot disabled Feb 13 15:58:56.081674 kernel: SMBIOS 3.1.0 present. 
Feb 13 15:58:56.081682 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Feb 13 15:58:56.081691 kernel: Hypervisor detected: Microsoft Hyper-V Feb 13 15:58:56.081698 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 13 15:58:56.081710 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Feb 13 15:58:56.081717 kernel: Hyper-V: Nested features: 0x1e0101 Feb 13 15:58:56.081729 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 13 15:58:56.081736 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 13 15:58:56.081746 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 15:58:56.081753 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 13 15:58:56.081764 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 13 15:58:56.081771 kernel: tsc: Detected 2593.906 MHz processor Feb 13 15:58:56.081780 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:58:56.081790 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:58:56.081797 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 13 15:58:56.081809 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 15:58:56.081817 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:58:56.081824 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 13 15:58:56.081833 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 13 15:58:56.081841 kernel: Using GB pages for direct mapping Feb 13 15:58:56.081849 kernel: ACPI: Early table checksum verification disabled Feb 13 15:58:56.081858 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 13 15:58:56.081870 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081882 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081890 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 13 15:58:56.081897 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 13 15:58:56.081908 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081916 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081923 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081936 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081943 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081953 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081961 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 13 15:58:56.081968 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 13 15:58:56.081979 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 13 15:58:56.081986 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 13 15:58:56.081995 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 13 15:58:56.082005 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Feb 13 15:58:56.082014 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 13 15:58:56.082024 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 13 15:58:56.082032 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 13 15:58:56.082039 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 13 15:58:56.082050 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 13 15:58:56.082058 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 15:58:56.082067 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 15:58:56.082077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 13 15:58:56.082084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 13 15:58:56.082096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 13 15:58:56.082104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 13 15:58:56.082112 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 13 15:58:56.082123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 13 15:58:56.082130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 13 15:58:56.082138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 13 15:58:56.082148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 13 15:58:56.082157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 13 15:58:56.082168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 13 15:58:56.082178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 13 15:58:56.082186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 13 15:58:56.082196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 13 15:58:56.082205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 13 15:58:56.082214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 13 15:58:56.082224 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 13 15:58:56.082231 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 13 15:58:56.082240 kernel: Zone ranges: Feb 13 15:58:56.082252 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:58:56.082259 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 15:58:56.082269 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 15:58:56.082277 kernel: Movable zone start for each node Feb 13 15:58:56.082285 kernel: Early memory node ranges Feb 13 15:58:56.082295 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 15:58:56.082302 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 13 15:58:56.082310 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 13 15:58:56.082320 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 13 15:58:56.082330 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 13 15:58:56.082341 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:58:56.082348 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 15:58:56.082356 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 13 15:58:56.082366 kernel: ACPI: 
PM-Timer IO Port: 0x408 Feb 13 15:58:56.082373 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 13 15:58:56.082381 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:58:56.082391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:58:56.082398 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:58:56.082411 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 13 15:58:56.082419 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 15:58:56.082427 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 13 15:58:56.082437 kernel: Booting paravirtualized kernel on Hyper-V Feb 13 15:58:56.082445 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:58:56.082453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 15:58:56.082463 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 15:58:56.082471 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 15:58:56.082480 kernel: pcpu-alloc: [0] 0 1 Feb 13 15:58:56.082490 kernel: Hyper-V: PV spinlocks enabled Feb 13 15:58:56.082498 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:58:56.082509 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:58:56.082517 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:58:56.082525 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 15:58:56.082535 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:58:56.082542 kernel: Fallback order for Node 0: 0 Feb 13 15:58:56.082552 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 13 15:58:56.082563 kernel: Policy zone: Normal Feb 13 15:58:56.082588 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:58:56.082599 kernel: software IO TLB: area num 2. Feb 13 15:58:56.082613 kernel: Memory: 8074984K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 312220K reserved, 0K cma-reserved) Feb 13 15:58:56.082622 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:58:56.082632 kernel: ftrace: allocating 37890 entries in 149 pages Feb 13 15:58:56.082642 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:58:56.082652 kernel: Dynamic Preempt: voluntary Feb 13 15:58:56.082661 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:58:56.082670 kernel: rcu: RCU event tracing is enabled. Feb 13 15:58:56.082681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:58:56.082691 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:58:56.082701 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:58:56.082710 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:58:56.082718 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:58:56.082729 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:58:56.082737 kernel: Using NULL legacy PIC Feb 13 15:58:56.082750 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 13 15:58:56.082758 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:58:56.082766 kernel: Console: colour dummy device 80x25 Feb 13 15:58:56.082777 kernel: printk: console [tty1] enabled Feb 13 15:58:56.082785 kernel: printk: console [ttyS0] enabled Feb 13 15:58:56.082795 kernel: printk: bootconsole [earlyser0] disabled Feb 13 15:58:56.082804 kernel: ACPI: Core revision 20230628 Feb 13 15:58:56.082812 kernel: Failed to register legacy timer interrupt Feb 13 15:58:56.082822 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:58:56.082833 kernel: Hyper-V: enabling crash_kexec_post_notifiers Feb 13 15:58:56.082842 kernel: Hyper-V: Using IPI hypercalls Feb 13 15:58:56.082851 kernel: APIC: send_IPI() replaced with hv_send_ipi() Feb 13 15:58:56.082859 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Feb 13 15:58:56.082870 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Feb 13 15:58:56.082878 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Feb 13 15:58:56.082887 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Feb 13 15:58:56.082897 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Feb 13 15:58:56.082905 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Feb 13 15:58:56.082918 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 15:58:56.082926 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 15:58:56.082935 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:58:56.082945 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:58:56.082953 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:58:56.082962 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:58:56.082971 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 15:58:56.082979 kernel: RETBleed: Vulnerable Feb 13 15:58:56.082990 kernel: Speculative Store Bypass: Vulnerable Feb 13 15:58:56.082997 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:58:56.083009 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:58:56.083018 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:58:56.083028 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:58:56.083037 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:58:56.083048 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 15:58:56.083056 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 15:58:56.083066 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 15:58:56.083076 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:58:56.083085 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 15:58:56.083094 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 13 15:58:56.083102 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 15:58:56.083115 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 13 15:58:56.083123 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:58:56.083132 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:58:56.083141 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:58:56.083149 kernel: landlock: Up and running. Feb 13 15:58:56.083159 kernel: SELinux: Initializing. Feb 13 15:58:56.083168 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:58:56.083176 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:58:56.083187 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 15:58:56.083195 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:58:56.083204 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:58:56.083216 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:58:56.083224 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 15:58:56.083235 kernel: signal: max sigframe size: 3632 Feb 13 15:58:56.083243 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:58:56.083253 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:58:56.083262 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 15:58:56.083270 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:58:56.083281 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:58:56.083289 kernel: .... node #0, CPUs: #1 Feb 13 15:58:56.083302 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 13 15:58:56.083313 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 13 15:58:56.083324 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:58:56.083334 kernel: smpboot: Max logical packages: 1 Feb 13 15:58:56.083346 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 13 15:58:56.083364 kernel: devtmpfs: initialized Feb 13 15:58:56.083377 kernel: x86/mm: Memory block size: 128MB Feb 13 15:58:56.083389 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 13 15:58:56.083405 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:58:56.083414 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:58:56.083426 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:58:56.083435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:58:56.083445 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:58:56.083453 kernel: audit: type=2000 audit(1739462334.028:1): state=initialized audit_enabled=0 res=1 Feb 13 15:58:56.083462 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:58:56.083472 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:58:56.083480 kernel: cpuidle: using governor menu Feb 13 15:58:56.083490 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:58:56.083498 kernel: dca service started, version 1.12.1 Feb 13 15:58:56.083506 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Feb 13 15:58:56.083514 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 15:58:56.083522 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:58:56.083530 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:58:56.083538 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:58:56.083546 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:58:56.083553 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:58:56.083563 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:58:56.083585 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:58:56.083593 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:58:56.083601 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:58:56.083613 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:58:56.083622 kernel: ACPI: Interpreter enabled Feb 13 15:58:56.083633 kernel: ACPI: PM: (supports S0 S5) Feb 13 15:58:56.083644 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:58:56.083653 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:58:56.083669 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 15:58:56.083682 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 13 15:58:56.083696 kernel: iommu: Default domain type: Translated Feb 13 15:58:56.083709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:58:56.083722 kernel: efivars: Registered efivars operations Feb 13 15:58:56.083735 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:58:56.083749 kernel: PCI: System does not support PCI Feb 13 15:58:56.083762 kernel: vgaarb: loaded Feb 13 15:58:56.083776 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 13 15:58:56.083793 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:58:56.083807 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:58:56.083820 kernel: 
pnp: PnP ACPI init Feb 13 15:58:56.083834 kernel: pnp: PnP ACPI: found 3 devices Feb 13 15:58:56.083847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:58:56.083861 kernel: NET: Registered PF_INET protocol family Feb 13 15:58:56.083873 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 15:58:56.083887 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 15:58:56.083900 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:58:56.083915 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:58:56.083931 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 15:58:56.083943 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 15:58:56.083957 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:58:56.083970 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:58:56.083983 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:58:56.083997 kernel: NET: Registered PF_XDP protocol family Feb 13 15:58:56.084009 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:58:56.084022 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 15:58:56.084042 kernel: software IO TLB: mapped [mem 0x000000003ad8c000-0x000000003ed8c000] (64MB) Feb 13 15:58:56.084056 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 15:58:56.084070 kernel: Initialise system trusted keyrings Feb 13 15:58:56.084081 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 15:58:56.084093 kernel: Key type asymmetric registered Feb 13 15:58:56.084106 kernel: Asymmetric key parser 'x509' registered Feb 13 15:58:56.084119 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:58:56.084132 kernel: io scheduler mq-deadline registered Feb 13 15:58:56.084145 kernel: io scheduler kyber registered Feb 13 15:58:56.084163 kernel: io scheduler bfq registered Feb 13 15:58:56.084176 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:58:56.084189 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:58:56.084203 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:58:56.084219 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 15:58:56.084233 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 15:58:56.084417 kernel: rtc_cmos 00:02: registered as rtc0 Feb 13 15:58:56.084555 kernel: rtc_cmos 00:02: setting system clock to 2025-02-13T15:58:55 UTC (1739462335) Feb 13 15:58:56.084711 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 13 15:58:56.084730 kernel: intel_pstate: CPU model not supported Feb 13 15:58:56.084744 kernel: efifb: probing for efifb Feb 13 15:58:56.084756 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 13 15:58:56.084771 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 13 15:58:56.084783 kernel: efifb: scrolling: redraw Feb 13 15:58:56.084795 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:58:56.084806 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:58:56.084820 kernel: fb0: EFI VGA frame buffer device Feb 13 15:58:56.084838 kernel: pstore: Using crash dump compression: deflate Feb 13 15:58:56.084853 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:58:56.084867 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:58:56.084881 kernel: Segment Routing with IPv6 Feb 13 15:58:56.084895 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:58:56.084908 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:58:56.084920 kernel: Key type dns_resolver registered Feb 13 15:58:56.084933 kernel: IPI shorthand broadcast: enabled Feb 13 15:58:56.084946 kernel: sched_clock: Marking stable (838002700, 49746600)->(1110677000, -222927700) Feb 13 15:58:56.084963 kernel: registered taskstats version 1 Feb 13 15:58:56.084977 kernel: Loading compiled-in X.509 certificates Feb 13 15:58:56.084990 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb' Feb 13 15:58:56.085005 kernel: Key type .fscrypt registered Feb 13 15:58:56.085018 kernel: Key type fscrypt-provisioning registered Feb 13 15:58:56.085032 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:58:56.085046 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:58:56.085060 kernel: ima: No architecture policies found Feb 13 15:58:56.085078 kernel: clk: Disabling unused clocks Feb 13 15:58:56.085092 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 15:58:56.085106 kernel: Write protecting the kernel read-only data: 38912k Feb 13 15:58:56.085120 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 15:58:56.085134 kernel: Run /init as init process Feb 13 15:58:56.085148 kernel: with arguments: Feb 13 15:58:56.085162 kernel: /init Feb 13 15:58:56.085176 kernel: with environment: Feb 13 15:58:56.085189 kernel: HOME=/ Feb 13 15:58:56.085203 kernel: TERM=linux Feb 13 15:58:56.085218 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:58:56.085236 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:58:56.085254 systemd[1]: Detected virtualization microsoft. Feb 13 15:58:56.085269 systemd[1]: Detected architecture x86-64. Feb 13 15:58:56.085283 systemd[1]: Running in initrd. Feb 13 15:58:56.085297 systemd[1]: No hostname configured, using default hostname. Feb 13 15:58:56.085312 systemd[1]: Hostname set to . Feb 13 15:58:56.085330 systemd[1]: Initializing machine ID from random generator. 
Feb 13 15:58:56.085345 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:58:56.085360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:58:56.085375 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:58:56.085391 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:58:56.085406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:58:56.085421 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:58:56.085436 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:58:56.085456 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:58:56.085471 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:58:56.085486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:58:56.085501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:58:56.085516 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:58:56.085531 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:58:56.085545 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:58:56.085563 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:58:56.085591 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:58:56.085605 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:58:56.085621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:58:56.085636 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:58:56.085651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:58:56.085666 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:58:56.085682 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:58:56.085696 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:58:56.085714 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:58:56.085730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:58:56.085745 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:58:56.085759 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:58:56.085774 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:58:56.085789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:58:56.085804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:58:56.085844 systemd-journald[177]: Collecting audit messages is disabled. Feb 13 15:58:56.085882 systemd-journald[177]: Journal started Feb 13 15:58:56.085914 systemd-journald[177]: Runtime Journal (/run/log/journal/acdef269e6604ead9f2214f519fcb5c4) is 8.0M, max 158.8M, 150.8M free. Feb 13 15:58:56.089703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:58:56.096688 systemd[1]: Started systemd-journald.service - Journal Service. 
Feb 13 15:58:56.099857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:58:56.100993 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:58:56.110318 systemd-modules-load[178]: Inserted module 'overlay' Feb 13 15:58:56.113876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:58:56.129767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:58:56.137723 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:58:56.146618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:58:56.158223 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:58:56.160587 kernel: Bridge firewalling registered Feb 13 15:58:56.160581 systemd-modules-load[178]: Inserted module 'br_netfilter' Feb 13 15:58:56.165781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:58:56.182766 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:58:56.188513 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:58:56.191274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:58:56.194736 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:58:56.204791 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:58:56.213777 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:58:56.222789 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:58:56.225498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:58:56.240149 dracut-cmdline[211]: dracut-dracut-053 Feb 13 15:58:56.240726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:58:56.247786 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:58:56.297852 systemd-resolved[217]: Positive Trust Anchors: Feb 13 15:58:56.300465 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:58:56.300525 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:58:56.324131 systemd-resolved[217]: Defaulting to hostname 'linux'. 
Feb 13 15:58:56.327697 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:58:56.330718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:58:56.344589 kernel: SCSI subsystem initialized Feb 13 15:58:56.355588 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:58:56.366592 kernel: iscsi: registered transport (tcp) Feb 13 15:58:56.387607 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:58:56.387679 kernel: QLogic iSCSI HBA Driver Feb 13 15:58:56.423285 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:58:56.434773 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:58:56.463452 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:58:56.463542 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:58:56.466651 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:58:56.506593 kernel: raid6: avx512x4 gen() 18303 MB/s Feb 13 15:58:56.525583 kernel: raid6: avx512x2 gen() 18281 MB/s Feb 13 15:58:56.544583 kernel: raid6: avx512x1 gen() 18322 MB/s Feb 13 15:58:56.563585 kernel: raid6: avx2x4 gen() 18237 MB/s Feb 13 15:58:56.582583 kernel: raid6: avx2x2 gen() 18284 MB/s Feb 13 15:58:56.602405 kernel: raid6: avx2x1 gen() 13732 MB/s Feb 13 15:58:56.602451 kernel: raid6: using algorithm avx512x1 gen() 18322 MB/s Feb 13 15:58:56.624401 kernel: raid6: .... xor() 26448 MB/s, rmw enabled Feb 13 15:58:56.624456 kernel: raid6: using avx512x2 recovery algorithm Feb 13 15:58:56.646596 kernel: xor: automatically using best checksumming function avx Feb 13 15:58:56.786605 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:58:56.796451 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:58:56.806228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:58:56.819472 systemd-udevd[396]: Using default interface naming scheme 'v255'. Feb 13 15:58:56.823931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:58:56.839746 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:58:56.853507 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Feb 13 15:58:56.879553 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:58:56.890730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:58:56.932296 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:58:56.945819 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:58:56.974557 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:58:56.982236 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:58:56.987010 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:58:56.995354 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:58:57.006504 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:58:57.025588 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:58:57.030815 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:58:57.048186 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 15:58:57.055931 kernel: AES CTR mode by8 optimization enabled Feb 13 15:58:57.060023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:58:57.061236 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:58:57.069096 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:58:57.075566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:58:57.076746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:58:57.084312 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:58:57.094375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:58:57.101776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:58:57.110132 kernel: hv_vmbus: Vmbus version:5.2 Feb 13 15:58:57.101883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:58:57.116914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:58:57.145280 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:58:57.159459 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 15:58:57.159512 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 15:58:57.161751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:58:57.171631 kernel: PTP clock support registered Feb 13 15:58:57.178593 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:58:57.191191 kernel: hv_vmbus: registering driver hv_netvsc Feb 13 15:58:57.195259 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 13 15:58:57.201589 kernel: hv_vmbus: registering driver hid_hyperv Feb 13 15:58:57.209262 kernel: hv_vmbus: registering driver hv_storvsc Feb 13 15:58:57.209296 kernel: hv_utils: Registering HyperV Utility Driver Feb 13 15:58:57.209312 kernel: hv_vmbus: registering driver hv_utils Feb 13 15:58:57.209688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:58:57.913085 kernel: hv_utils: TimeSync IC version 4.0 Feb 13 15:58:57.913117 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 13 15:58:57.913141 kernel: hv_utils: Heartbeat IC version 3.0 Feb 13 15:58:57.913153 kernel: hv_utils: Shutdown IC version 3.2 Feb 13 15:58:57.913165 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 13 15:58:57.913178 kernel: scsi host0: storvsc_host_t Feb 13 15:58:57.913343 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 13 15:58:57.913368 kernel: scsi host1: storvsc_host_t Feb 13 15:58:57.908036 systemd-resolved[217]: Clock change detected. Flushing caches. 
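The hv_utils TimeSync entries and the "Clock change detected. Flushing caches." message above reflect the host stepping the guest's wall clock early in boot. A small sketch that reads the standard sysfs clocksource files (the Hyper-V TSC page is expected here) and measures a realtime-vs-monotonic step; Linux-only paths, illustrative rather than anything systemd-resolved actually runs:

```python
# Inspect the kernel's chosen clocksource and look for a wall-clock step of
# the kind that made systemd-resolved flush its caches above.
import os
import time

base = "/sys/devices/system/clocksource/clocksource0"
for name in ("current_clocksource", "available_clocksource"):
    path = os.path.join(base, name)
    if os.path.exists(path):
        with open(path) as f:
            print(f"{name}: {f.read().strip()}")

real0, mono0 = time.clock_gettime(time.CLOCK_REALTIME), time.monotonic()
time.sleep(1.0)
real1, mono1 = time.clock_gettime(time.CLOCK_REALTIME), time.monotonic()
print(f"wall-clock step over 1s: {(real1 - real0) - (mono1 - mono0):+.6f}s")
```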
Feb 13 15:58:57.921774 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 13 15:58:57.925135 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 13 15:58:57.947860 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 13 15:58:57.950932 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:58:57.950955 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 13 15:58:57.958567 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 13 15:58:57.971552 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 15:58:57.971680 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 15:58:57.971782 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 13 15:58:57.971882 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 13 15:58:57.971978 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:58:57.971990 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 15:58:58.073722 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: VF slot 1 added Feb 13 15:58:58.083660 kernel: hv_vmbus: registering driver hv_pci Feb 13 15:58:58.088337 kernel: hv_pci 458487d7-ae16-45e9-a4b1-e117919f9a03: PCI VMBus probing: Using version 0x10004 Feb 13 15:58:58.130517 kernel: hv_pci 458487d7-ae16-45e9-a4b1-e117919f9a03: PCI host bridge to bus ae16:00 Feb 13 15:58:58.130807 kernel: pci_bus ae16:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 13 15:58:58.130998 kernel: pci_bus ae16:00: No busn resource found for root bus, will use [bus 00-ff] Feb 13 15:58:58.131595 kernel: pci ae16:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 13 15:58:58.131796 kernel: pci ae16:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 15:58:58.131986 kernel: pci ae16:00:02.0: enabling Extended Tags Feb 13 15:58:58.132185 kernel: pci ae16:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ae16:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 13 15:58:58.132353 kernel: pci_bus ae16:00: busn_res: [bus 00-ff] end is updated to 00 Feb 13 15:58:58.132502 kernel: pci ae16:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 13 15:58:58.294044 kernel: mlx5_core ae16:00:02.0: enabling device (0000 -> 0002) Feb 13 15:58:58.532472 kernel: mlx5_core ae16:00:02.0: firmware version: 14.30.5000 Feb 13 15:58:58.532683 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: VF registering: eth1 Feb 13 15:58:58.532848 kernel: mlx5_core ae16:00:02.0 eth1: joined to eth0 Feb 13 15:58:58.533020 kernel: mlx5_core ae16:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 13 15:58:58.533207 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (450) Feb 13 15:58:58.439306 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Feb 13 15:58:58.538230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Feb 13 15:58:58.546668 kernel: mlx5_core ae16:00:02.0 enP44566s1: renamed from eth1 Feb 13 15:58:58.554678 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (445) Feb 13 15:58:58.577555 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Feb 13 15:58:58.580977 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. 
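The dev-disk-by\x2dlabel-*.device units found above correspond to udev-maintained symlinks under /dev/disk. A short sketch that resolves those links the same way, listing labels such as ROOT and EFI-SYSTEM from the Flatcar disk layout; works on any Linux system with labeled partitions:

```python
# Resolve the udev symlink directories behind the "Found device
# dev-disk-by\x2dlabel-..." units logged above.
import os

for subdir in ("by-label", "by-partlabel", "by-partuuid"):
    path = os.path.join("/dev/disk", subdir)
    if not os.path.isdir(path):
        continue
    for name in sorted(os.listdir(path)):
        target = os.path.realpath(os.path.join(path, name))
        print(f"{subdir}/{name} -> {target}")
```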
Feb 13 15:58:58.597622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:58:58.611226 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:58:58.623127 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:58:59.635827 disk-uuid[609]: The operation has completed successfully. Feb 13 15:58:59.639245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:58:59.717555 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:58:59.717683 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:58:59.739275 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:58:59.744927 sh[695]: Success Feb 13 15:58:59.775528 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 15:58:59.990968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:59:00.008241 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:59:00.012571 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:59:00.030125 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a Feb 13 15:59:00.030180 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:59:00.035947 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:59:00.038772 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:59:00.041356 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:59:00.356925 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:59:00.360502 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:59:00.375346 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:59:00.382292 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:59:00.405941 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:59:00.405999 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:59:00.406019 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:59:00.427130 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:59:00.438045 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:59:00.443681 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:59:00.452399 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:59:00.462307 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:59:00.477476 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:59:00.489342 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:59:00.511232 systemd-networkd[879]: lo: Link UP Feb 13 15:59:00.511242 systemd-networkd[879]: lo: Gained carrier Feb 13 15:59:00.513310 systemd-networkd[879]: Enumeration completed Feb 13 15:59:00.513559 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:59:00.514725 systemd[1]: Reached target network.target - Network. 
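verity-setup above authenticates /dev/mapper/usr against the verity.usrhash= value from the kernel command line: data blocks are hashed into a tree whose root must match that hash. A deliberately simplified conceptual sketch of such a tree with sha256 over 4 KiB blocks; the real dm-verity on-disk format (salt, superblock, hash block layout) differs:

```python
# Conceptual only: build a hash tree over 4 KiB blocks until a single root
# remains, the idea behind the verity root hash checked at boot.
import hashlib

BLOCK = 4096

def block_hashes(data: bytes):
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def root_hash(data: bytes) -> str:
    level = block_hashes(data)
    while len(level) > 1:
        level = block_hashes(b"".join(level))
    return level[0].hex() if level else hashlib.sha256(b"").hexdigest()

print(root_hash(b"example usr partition contents" * 1000))
```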
Feb 13 15:59:00.516742 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:59:00.516746 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:59:00.577125 kernel: mlx5_core ae16:00:02.0 enP44566s1: Link up Feb 13 15:59:00.615775 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: Data path switched to VF: enP44566s1 Feb 13 15:59:00.615309 systemd-networkd[879]: enP44566s1: Link UP Feb 13 15:59:00.615457 systemd-networkd[879]: eth0: Link UP Feb 13 15:59:00.615657 systemd-networkd[879]: eth0: Gained carrier Feb 13 15:59:00.615671 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:59:00.622825 systemd-networkd[879]: enP44566s1: Gained carrier Feb 13 15:59:00.666158 systemd-networkd[879]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:59:01.419835 ignition[860]: Ignition 2.20.0 Feb 13 15:59:01.419847 ignition[860]: Stage: fetch-offline Feb 13 15:59:01.419890 ignition[860]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:01.419901 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:01.420008 ignition[860]: parsed url from cmdline: "" Feb 13 15:59:01.420013 ignition[860]: no config URL provided Feb 13 15:59:01.420020 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:59:01.420031 ignition[860]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:59:01.420038 ignition[860]: failed to fetch config: resource requires networking Feb 13 15:59:01.422249 ignition[860]: Ignition finished successfully Feb 13 15:59:01.439865 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:59:01.450309 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:59:01.463180 ignition[887]: Ignition 2.20.0 Feb 13 15:59:01.463193 ignition[887]: Stage: fetch Feb 13 15:59:01.463391 ignition[887]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:01.463405 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:01.463510 ignition[887]: parsed url from cmdline: "" Feb 13 15:59:01.463514 ignition[887]: no config URL provided Feb 13 15:59:01.463518 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:59:01.463526 ignition[887]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:59:01.463552 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 13 15:59:01.536494 ignition[887]: GET result: OK Feb 13 15:59:01.536644 ignition[887]: config has been read from IMDS userdata Feb 13 15:59:01.536682 ignition[887]: parsing config with SHA512: 2663c1513a0fb138b89d02c8fef7b5e681f9f1cf2c73c09699f671d528af0d4b9007393f2f31638bf0e56997f9052b78b8ce496c2099b88860194a7c19d0c0ff Feb 13 15:59:01.542563 unknown[887]: fetched base config from "system" Feb 13 15:59:01.542579 unknown[887]: fetched base config from "system" Feb 13 15:59:01.542590 unknown[887]: fetched user config from "azure" Feb 13 15:59:01.549467 ignition[887]: fetch: fetch complete Feb 13 15:59:01.549477 ignition[887]: fetch: fetch passed Feb 13 15:59:01.549537 ignition[887]: Ignition finished successfully Feb 13 15:59:01.553627 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
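The Ignition fetch stage above GETs the userData document from the Azure IMDS URL shown in the log and prints its SHA512 before parsing. A hedged sketch of the same request: it assumes the usual IMDS requirement of a "Metadata: true" header and that the platform delivers userData base64-encoded, as Ignition's decode step implies:

```python
# Fetch the same IMDS endpoint the Ignition "fetch" stage logged above and
# hash the decoded config. Header and base64 handling are assumptions noted
# in the lead-in, not taken from the log itself.
import base64
import hashlib
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    encoded = resp.read()

config = base64.b64decode(encoded)
print("sha512:", hashlib.sha512(config).hexdigest())
print(config.decode(errors="replace")[:200])
```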
Feb 13 15:59:01.563269 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:59:01.577769 ignition[893]: Ignition 2.20.0 Feb 13 15:59:01.577780 ignition[893]: Stage: kargs Feb 13 15:59:01.577984 ignition[893]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:01.577998 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:01.578830 ignition[893]: kargs: kargs passed Feb 13 15:59:01.583013 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:59:01.578874 ignition[893]: Ignition finished successfully Feb 13 15:59:01.598280 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:59:01.610143 ignition[899]: Ignition 2.20.0 Feb 13 15:59:01.610154 ignition[899]: Stage: disks Feb 13 15:59:01.612036 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:59:01.610362 ignition[899]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:01.616389 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:59:01.610375 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:01.620812 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:59:01.611244 ignition[899]: disks: disks passed Feb 13 15:59:01.623596 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:59:01.611286 ignition[899]: Ignition finished successfully Feb 13 15:59:01.628015 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:59:01.646100 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:59:01.657291 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:59:01.727908 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Feb 13 15:59:01.734530 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:59:01.749619 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:59:01.839130 kernel: EXT4-fs (sda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none. Feb 13 15:59:01.839320 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:59:01.841099 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:59:01.875298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:59:01.880039 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:59:01.888272 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:59:01.895674 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (918) Feb 13 15:59:01.897190 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:59:01.897234 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:59:01.911695 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Feb 13 15:59:01.919264 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:59:01.919300 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:59:01.919316 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:59:01.923121 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:59:01.927315 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:59:01.933920 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:59:02.062367 systemd-networkd[879]: enP44566s1: Gained IPv6LL Feb 13 15:59:02.638397 systemd-networkd[879]: eth0: Gained IPv6LL Feb 13 15:59:02.666826 coreos-metadata[920]: Feb 13 15:59:02.666 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:59:02.673655 coreos-metadata[920]: Feb 13 15:59:02.673 INFO Fetch successful Feb 13 15:59:02.676345 coreos-metadata[920]: Feb 13 15:59:02.673 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:59:02.688240 coreos-metadata[920]: Feb 13 15:59:02.688 INFO Fetch successful Feb 13 15:59:02.705096 coreos-metadata[920]: Feb 13 15:59:02.703 INFO wrote hostname ci-4186.1.1-a-254057132e to /sysroot/etc/hostname Feb 13 15:59:02.707704 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:59:02.712533 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:59:02.769690 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:59:02.777762 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:59:02.782609 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:59:03.603944 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:59:03.614208 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:59:03.623273 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:59:03.632668 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:59:03.631523 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:59:03.656436 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:59:03.661633 ignition[1038]: INFO : Ignition 2.20.0 Feb 13 15:59:03.661633 ignition[1038]: INFO : Stage: mount Feb 13 15:59:03.661633 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:03.661633 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:03.673620 ignition[1038]: INFO : mount: mount passed Feb 13 15:59:03.673620 ignition[1038]: INFO : Ignition finished successfully Feb 13 15:59:03.664561 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:59:03.684181 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:59:03.691909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
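flatcar-metadata-hostname above fetches the instance name from the IMDS endpoint in the log and writes it to /sysroot/etc/hostname. A minimal sketch of those two steps; the Metadata header is the standard Azure IMDS convention, and the output path here is a scratch file rather than the real target:

```python
# Fetch the compute name (endpoint taken from the log) and write it out the
# way the metadata-hostname unit did; adjust the path for real use.
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/name"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    name = resp.read().decode().strip()

with open("/tmp/example-hostname", "w") as f:   # real target was /sysroot/etc/hostname
    f.write(name + "\n")
print("wrote hostname", name)
```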
Feb 13 15:59:03.710903 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049) Feb 13 15:59:03.710955 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475 Feb 13 15:59:03.712117 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:59:03.716761 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:59:03.722119 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:59:03.723627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:59:03.744994 ignition[1066]: INFO : Ignition 2.20.0 Feb 13 15:59:03.744994 ignition[1066]: INFO : Stage: files Feb 13 15:59:03.749627 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:03.749627 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:03.749627 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:59:03.749627 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:59:03.749627 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:59:03.813556 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:59:03.817981 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:59:03.817981 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:59:03.814188 unknown[1066]: wrote ssh authorized keys file for user: core Feb 13 15:59:03.840833 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:59:03.845697 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:59:04.093641 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:59:04.228524 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:59:04.234059 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:59:04.234059 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:59:04.747470 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:59:04.867112 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:59:04.872563 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 15:59:05.322994 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:59:05.606649 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:59:05.606649 ignition[1066]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:59:05.697472 ignition[1066]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:59:05.706396 ignition[1066]: INFO : files: files passed Feb 13 15:59:05.706396 ignition[1066]: INFO : Ignition finished successfully Feb 13 15:59:05.699830 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:59:05.719331 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
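The files stage above is executing an Ignition config: fetched files under storage.files, plus a prepare-helm.service unit marked enabled. A hedged sketch of roughly what such a config looks like, built as a Python dict and dumped to JSON; field names follow the Ignition v3 spec as recalled here, so verify against the spec version your Ignition binary expects:

```python
# Roughly the shape of the config driving the "files" stage logged above:
# one downloaded file and one enabled unit. Schema details are assumptions.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "mode": 0o644,   # serialized as decimal 420
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                },
            }
        ]
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm\n"}
        ]
    },
}

print(json.dumps(config, indent=2))
```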
Feb 13 15:59:05.730255 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:59:05.742431 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:59:05.742518 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:59:05.762149 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:59:05.762149 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:59:05.771093 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:59:05.775497 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:59:05.777302 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:59:05.787354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:59:05.812472 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:59:05.812592 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:59:05.816848 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:59:05.817079 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:59:05.817564 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:59:05.820254 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:59:05.835458 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:59:05.852308 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:59:05.862515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:59:05.868320 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:59:05.869294 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:59:05.869634 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:59:05.869741 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:59:05.870969 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:59:05.871374 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:59:05.871759 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:59:05.872171 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:59:05.872551 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:59:05.872947 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:59:05.873342 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:59:05.873734 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:59:05.874141 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:59:05.874608 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:59:05.874957 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:59:05.875086 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:59:05.875780 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 15:59:05.876205 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:59:05.876542 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:59:05.979264 ignition[1118]: INFO : Ignition 2.20.0 Feb 13 15:59:05.979264 ignition[1118]: INFO : Stage: umount Feb 13 15:59:05.979264 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:59:05.979264 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 13 15:59:05.979264 ignition[1118]: INFO : umount: umount passed Feb 13 15:59:05.979264 ignition[1118]: INFO : Ignition finished successfully Feb 13 15:59:05.912542 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:59:05.915901 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:59:05.916060 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:59:05.921091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:59:05.921253 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:59:05.928526 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:59:05.930947 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:59:05.935473 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:59:05.935610 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:59:05.949180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:59:05.960871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:59:05.965200 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:59:05.967206 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:59:05.979451 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:59:05.979830 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:59:05.989747 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:59:05.989830 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:59:05.997031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:59:06.001044 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:59:06.005696 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:59:06.005790 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:59:06.009341 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:59:06.009394 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:59:06.011900 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:59:06.011943 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:59:06.016249 systemd[1]: Stopped target network.target - Network. Feb 13 15:59:06.020570 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:59:06.020627 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:59:06.029090 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:59:06.035433 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 15:59:06.035505 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:59:06.040877 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:59:06.043039 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:59:06.047815 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:59:06.047864 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:59:06.052346 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:59:06.052383 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:59:06.054853 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:59:06.054907 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:59:06.059817 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:59:06.059869 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:59:06.065269 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:59:06.071796 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:59:06.076099 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:59:06.082207 systemd-networkd[879]: eth0: DHCPv6 lease lost Feb 13 15:59:06.084986 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:59:06.085093 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:59:06.089724 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:59:06.089759 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:59:06.115376 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:59:06.123947 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:59:06.124020 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:59:06.135294 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:59:06.179548 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:59:06.179672 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:59:06.192549 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:59:06.193684 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:59:06.200145 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:59:06.200221 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:59:06.208778 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:59:06.208823 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:59:06.213498 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:59:06.213553 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:59:06.216637 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:59:06.216679 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:59:06.221072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:59:06.221136 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:59:06.239294 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Feb 13 15:59:06.244622 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:59:06.244684 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:59:06.247395 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:59:06.247456 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:59:06.252551 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:59:06.257683 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:59:06.270122 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: Data path switched from VF: enP44566s1 Feb 13 15:59:06.271743 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:59:06.271808 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:59:06.281409 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:59:06.281471 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:59:06.289613 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:59:06.289674 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:59:06.295223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:59:06.295277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:59:06.306032 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:59:06.308310 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:59:06.313088 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:59:06.315475 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:59:06.397752 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:59:06.397923 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:59:06.402906 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:59:06.409942 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:59:06.410015 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:59:06.421357 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:59:06.431096 systemd[1]: Switching root. Feb 13 15:59:06.515387 systemd-journald[177]: Journal stopped Feb 13 15:59:11.274758 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Feb 13 15:59:11.274804 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:59:11.274826 kernel: SELinux: policy capability open_perms=1 Feb 13 15:59:11.274843 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:59:11.274862 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:59:11.274881 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:59:11.274899 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:59:11.274924 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:59:11.274943 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:59:11.274961 kernel: audit: type=1403 audit(1739462348.235:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:59:11.274980 systemd[1]: Successfully loaded SELinux policy in 114.687ms. 
Feb 13 15:59:11.274999 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.179ms. Feb 13 15:59:11.275021 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:59:11.275044 systemd[1]: Detected virtualization microsoft. Feb 13 15:59:11.275070 systemd[1]: Detected architecture x86-64. Feb 13 15:59:11.275091 systemd[1]: Detected first boot. Feb 13 15:59:11.275127 systemd[1]: Hostname set to . Feb 13 15:59:11.275144 systemd[1]: Initializing machine ID from random generator. Feb 13 15:59:11.275158 zram_generator::config[1161]: No configuration found. Feb 13 15:59:11.275177 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:59:11.275192 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:59:11.275208 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:59:11.275223 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:59:11.275236 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:59:11.275248 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:59:11.275266 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:59:11.275288 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:59:11.275303 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:59:11.275321 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:59:11.275338 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:59:11.275355 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:59:11.275372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:59:11.275390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:59:11.275407 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:59:11.275428 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:59:11.275446 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:59:11.275464 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:59:11.275482 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:59:11.275500 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:59:11.275517 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:59:11.275539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:59:11.275557 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:59:11.275578 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:59:11.275596 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
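"Initializing machine ID from random generator" above means systemd created /etc/machine-id on this first boot: a 128-bit value stored as 32 lowercase hex characters. A sketch that produces a value of the same shape; it does not reproduce systemd's exact policy of setting UUID variant/version bits:

```python
# Generate a machine-id-shaped value (32 lowercase hex chars from 16 random
# bytes), written to a scratch file rather than /etc/machine-id.
import secrets

machine_id = secrets.token_hex(16)
print(machine_id)
with open("/tmp/example-machine-id", "w") as f:
    f.write(machine_id + "\n")
```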
Feb 13 15:59:11.275614 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:59:11.275631 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:59:11.275649 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:59:11.275665 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:59:11.275688 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:59:11.275709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:59:11.275730 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:59:11.275752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:59:11.275769 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:59:11.275785 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:59:11.275802 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:59:11.275814 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:59:11.275827 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:59:11.275838 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:59:11.275849 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:59:11.275862 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:59:11.275873 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:59:11.275885 systemd[1]: Reached target machines.target - Containers. Feb 13 15:59:11.275899 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:59:11.275910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:59:11.275920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:59:11.275933 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:59:11.275943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:59:11.275954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:59:11.275966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:59:11.275978 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:59:11.275990 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:59:11.276005 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:59:11.276017 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:59:11.276030 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:59:11.276040 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:59:11.276053 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:59:11.276063 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:59:11.276076 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Feb 13 15:59:11.276086 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:59:11.276156 kernel: loop: module loaded Feb 13 15:59:11.276169 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:59:11.276182 kernel: fuse: init (API version 7.39) Feb 13 15:59:11.276192 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:59:11.276205 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:59:11.276236 systemd-journald[1257]: Collecting audit messages is disabled. Feb 13 15:59:11.276266 systemd[1]: Stopped verity-setup.service. Feb 13 15:59:11.276280 systemd-journald[1257]: Journal started Feb 13 15:59:11.276304 systemd-journald[1257]: Runtime Journal (/run/log/journal/f4cd069085c348af90b77f97907cde1d) is 8.0M, max 158.8M, 150.8M free. Feb 13 15:59:11.286167 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:59:10.504874 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:59:10.740082 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:59:10.740479 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:59:11.292256 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:59:11.296120 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:59:11.298345 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:59:11.300988 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:59:11.303855 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:59:11.306676 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:59:11.310366 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:59:11.315463 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:59:11.318924 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:59:11.322568 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:59:11.322727 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:59:11.326536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:59:11.326933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:59:11.333672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:59:11.333925 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:59:11.340657 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:59:11.341423 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:59:11.346915 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:59:11.347117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:59:11.350060 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:59:11.354561 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:59:11.359702 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Feb 13 15:59:11.365355 kernel: ACPI: bus type drm_connector registered Feb 13 15:59:11.366417 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:59:11.366593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:59:11.380457 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:59:11.389286 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:59:11.400555 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:59:11.403711 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:59:11.403752 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:59:11.408711 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:59:11.418260 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:59:11.427599 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:59:11.430539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:59:11.432214 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:59:11.451226 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:59:11.454368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:59:11.462207 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:59:11.464994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:59:11.466338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:59:11.473877 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:59:11.480233 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:59:11.486306 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:59:11.491845 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:59:11.500071 systemd-journald[1257]: Time spent on flushing to /var/log/journal/f4cd069085c348af90b77f97907cde1d is 53.919ms for 961 entries. Feb 13 15:59:11.500071 systemd-journald[1257]: System Journal (/var/log/journal/f4cd069085c348af90b77f97907cde1d) is 8.0M, max 2.6G, 2.6G free. Feb 13 15:59:11.578388 systemd-journald[1257]: Received client request to flush runtime journal. Feb 13 15:59:11.578440 kernel: loop0: detected capacity change from 0 to 211296 Feb 13 15:59:11.497651 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:59:11.503072 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:59:11.528269 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:59:11.535890 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:59:11.540782 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Feb 13 15:59:11.552275 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:59:11.561452 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:59:11.580478 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:59:11.648136 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:59:11.652658 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:59:11.653388 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:59:11.667793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:59:11.687131 kernel: loop1: detected capacity change from 0 to 138184 Feb 13 15:59:11.693127 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Feb 13 15:59:11.693149 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Feb 13 15:59:11.698538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:59:11.709071 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:59:11.930067 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:59:11.941354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:59:11.958382 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Feb 13 15:59:11.958964 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Feb 13 15:59:11.965400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:59:12.368137 kernel: loop2: detected capacity change from 0 to 28304 Feb 13 15:59:12.697482 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:59:12.709316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:59:12.734915 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Feb 13 15:59:12.775169 kernel: loop3: detected capacity change from 0 to 141000 Feb 13 15:59:13.181130 kernel: loop4: detected capacity change from 0 to 211296 Feb 13 15:59:13.192128 kernel: loop5: detected capacity change from 0 to 138184 Feb 13 15:59:13.207125 kernel: loop6: detected capacity change from 0 to 28304 Feb 13 15:59:13.216128 kernel: loop7: detected capacity change from 0 to 141000 Feb 13 15:59:13.226079 (sd-merge)[1327]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Feb 13 15:59:13.226679 (sd-merge)[1327]: Merged extensions into '/usr'. Feb 13 15:59:13.230826 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:59:13.230843 systemd[1]: Reloading... Feb 13 15:59:13.278136 zram_generator::config[1349]: No configuration found. 
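The sd-merge lines below this point ("Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure' ... Merged extensions into '/usr'") come from systemd-sysext, which only merges an extension whose extension-release file is compatible with the host os-release. A hedged sketch of that check, simplified to the ID field; the extension-release path and name are examples, and the full matching rules (SYSEXT_LEVEL, VERSION_ID) are not reproduced:

```python
# Simplified sysext compatibility check: compare the extension's declared ID
# against the host's /etc/os-release ID, as systemd-sysext does before merging.
def parse_release(path: str) -> dict:
    fields = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                fields[key] = value.strip('"')
    return fields

host = parse_release("/etc/os-release")
ext = parse_release(
    "/usr/lib/extension-release.d/extension-release.kubernetes")  # example name
if ext.get("ID") in ("_any", host.get("ID")):
    print("extension is compatible with host", host.get("ID"))
else:
    print("extension would be rejected")
```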
Feb 13 15:59:13.518142 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:59:13.578243 kernel: hv_vmbus: registering driver hv_balloon Feb 13 15:59:13.588136 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 13 15:59:13.588238 kernel: hv_vmbus: registering driver hyperv_fb Feb 13 15:59:13.603630 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 13 15:59:13.603692 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 13 15:59:13.612290 kernel: Console: switching to colour dummy device 80x25 Feb 13 15:59:13.614126 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:59:13.629428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:59:13.765506 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:59:13.766432 systemd[1]: Reloading finished in 535 ms. Feb 13 15:59:13.799801 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:59:13.814585 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:59:13.872506 systemd[1]: Starting ensure-sysext.service... Feb 13 15:59:13.881287 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:59:13.899290 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:59:13.922287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:59:13.938190 systemd[1]: Reloading requested from client PID 1454 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:59:13.938210 systemd[1]: Reloading... Feb 13 15:59:13.981072 systemd-tmpfiles[1457]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:59:13.982529 systemd-tmpfiles[1457]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:59:13.987482 systemd-tmpfiles[1457]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:59:13.988251 systemd-tmpfiles[1457]: ACLs are not supported, ignoring. Feb 13 15:59:13.988343 systemd-tmpfiles[1457]: ACLs are not supported, ignoring. Feb 13 15:59:14.008162 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1391) Feb 13 15:59:14.014387 systemd-tmpfiles[1457]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:59:14.014399 systemd-tmpfiles[1457]: Skipping /boot Feb 13 15:59:14.070866 systemd-tmpfiles[1457]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:59:14.076144 systemd-tmpfiles[1457]: Skipping /boot Feb 13 15:59:14.087153 zram_generator::config[1516]: No configuration found. Feb 13 15:59:14.107440 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Feb 13 15:59:14.304930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:59:14.394684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Feb 13 15:59:14.398719 systemd[1]: Reloading finished in 460 ms. 
Feb 13 15:59:14.420570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:59:14.439063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:59:14.456121 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:59:14.469136 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:59:14.473373 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:59:14.477086 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:59:14.480479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:59:14.482384 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:59:14.492460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:59:14.501048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:59:14.510146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:59:14.515093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:59:14.516601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:59:14.523399 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:59:14.534204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:59:14.543408 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:59:14.557881 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:59:14.564289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:59:14.564489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:59:14.569071 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:59:14.585677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:59:14.589965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:59:14.592557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:59:14.593344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:59:14.597988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:59:14.598220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:59:14.602754 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:59:14.605157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:59:14.617129 lvm[1608]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:59:14.627638 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:59:14.628066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 15:59:14.637530 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:59:14.653090 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:59:14.663517 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:59:14.675492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:59:14.678465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:59:14.679290 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:59:14.686667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:59:14.690346 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:59:14.701811 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:59:14.713128 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:59:14.721131 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:59:14.730733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:59:14.735736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:59:14.736201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:59:14.740876 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:59:14.741510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:59:14.744823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:59:14.744948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:59:14.749458 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:59:14.749620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:59:14.762505 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:59:14.772197 systemd[1]: Finished ensure-sysext.service. Feb 13 15:59:14.784610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:59:14.793315 augenrules[1665]: No rules Feb 13 15:59:14.795245 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:59:14.798482 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:59:14.798557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:59:14.798912 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:59:14.800174 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:59:14.813252 lvm[1669]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:59:14.843903 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:59:14.860929 systemd-resolved[1615]: Positive Trust Anchors: Feb 13 15:59:14.860952 systemd-resolved[1615]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:59:14.860999 systemd-resolved[1615]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:59:14.877050 systemd-resolved[1615]: Using system hostname 'ci-4186.1.1-a-254057132e'. Feb 13 15:59:14.879239 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:59:14.882559 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:59:14.888359 systemd-networkd[1456]: lo: Link UP Feb 13 15:59:14.888368 systemd-networkd[1456]: lo: Gained carrier Feb 13 15:59:14.890911 systemd-networkd[1456]: Enumeration completed Feb 13 15:59:14.891499 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:59:14.891510 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:59:14.891741 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:59:14.894724 systemd[1]: Reached target network.target - Network. Feb 13 15:59:14.907309 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:59:14.912016 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:59:14.916324 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:59:14.952122 kernel: mlx5_core ae16:00:02.0 enP44566s1: Link up Feb 13 15:59:14.979520 kernel: hv_netvsc 6045bddd-4d8f-6045-bddd-4d8f6045bddd eth0: Data path switched to VF: enP44566s1 Feb 13 15:59:14.978838 systemd-networkd[1456]: enP44566s1: Link UP Feb 13 15:59:14.979254 systemd-networkd[1456]: eth0: Link UP Feb 13 15:59:14.979261 systemd-networkd[1456]: eth0: Gained carrier Feb 13 15:59:14.979289 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:59:14.984999 systemd-networkd[1456]: enP44566s1: Gained carrier Feb 13 15:59:15.020168 systemd-networkd[1456]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:59:16.142463 systemd-networkd[1456]: enP44566s1: Gained IPv6LL Feb 13 15:59:16.462378 systemd-networkd[1456]: eth0: Gained IPv6LL Feb 13 15:59:16.465269 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:59:16.469346 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:59:17.650742 ldconfig[1292]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:59:17.662822 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:59:17.677280 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Feb 13 15:59:17.704682 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:59:17.708226 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:59:17.711416 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:59:17.714850 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:59:17.718445 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:59:17.721340 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:59:17.725555 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:59:17.729439 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:59:17.729491 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:59:17.731602 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:59:17.735551 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:59:17.739843 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:59:17.749980 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:59:17.753699 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:59:17.756307 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:59:17.758560 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:59:17.761078 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:59:17.761124 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:59:17.768256 systemd[1]: Starting chronyd.service - NTP client/server... Feb 13 15:59:17.774228 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:59:17.780574 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:59:17.788285 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:59:17.792713 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:59:17.802297 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:59:17.805007 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:59:17.805058 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Feb 13 15:59:17.808287 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Feb 13 15:59:17.811178 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Feb 13 15:59:17.821039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:59:17.830825 KVP[1691]: KVP starting; pid is:1691 Feb 13 15:59:17.837400 jq[1686]: false Feb 13 15:59:17.832871 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:59:17.839339 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Feb 13 15:59:17.842974 (chronyd)[1682]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Feb 13 15:59:17.849313 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:59:17.856318 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:59:17.860864 chronyd[1699]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Feb 13 15:59:17.862096 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:59:17.865642 KVP[1691]: KVP LIC Version: 3.1 Feb 13 15:59:17.866141 kernel: hv_utils: KVP IC version 4.0 Feb 13 15:59:17.877444 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:59:17.883426 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:59:17.884082 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:59:17.884881 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:59:17.892232 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:59:17.899326 chronyd[1699]: Timezone right/UTC failed leap second check, ignoring Feb 13 15:59:17.901289 chronyd[1699]: Loaded seccomp filter (level 2) Feb 13 15:59:17.907559 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:59:17.908185 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:59:17.908601 systemd[1]: Started chronyd.service - NTP client/server. Feb 13 15:59:17.929613 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:59:17.929830 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:59:17.930265 jq[1705]: true Feb 13 15:59:17.942523 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:59:17.942748 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:59:17.965700 extend-filesystems[1690]: Found loop4 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found loop5 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found loop6 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found loop7 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found sda Feb 13 15:59:17.969599 extend-filesystems[1690]: Found sda1 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found sda2 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found sda3 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found usr Feb 13 15:59:17.969599 extend-filesystems[1690]: Found sda4 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found sda6 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found sda7 Feb 13 15:59:17.969599 extend-filesystems[1690]: Found sda9 Feb 13 15:59:17.969599 extend-filesystems[1690]: Checking size of /dev/sda9 Feb 13 15:59:18.055137 update_engine[1704]: I20250213 15:59:18.004745 1704 main.cc:92] Flatcar Update Engine starting Feb 13 15:59:18.055137 update_engine[1704]: I20250213 15:59:18.033814 1704 update_check_scheduler.cc:74] Next update check in 2m40s Feb 13 15:59:18.008639 dbus-daemon[1685]: [system] SELinux support is enabled Feb 13 15:59:18.008822 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 15:59:18.055770 jq[1718]: true Feb 13 15:59:18.022903 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:59:18.022937 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:59:18.030284 (ntainerd)[1719]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:59:18.039007 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:59:18.039031 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:59:18.043149 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:59:18.059750 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:59:18.070325 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:59:18.100216 extend-filesystems[1690]: Old size kept for /dev/sda9 Feb 13 15:59:18.100216 extend-filesystems[1690]: Found sr0 Feb 13 15:59:18.097074 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:59:18.111705 tar[1716]: linux-amd64/helm Feb 13 15:59:18.097364 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:59:18.164275 systemd-logind[1702]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:59:18.166233 systemd-logind[1702]: New seat seat0. Feb 13 15:59:18.170959 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:59:18.199522 coreos-metadata[1684]: Feb 13 15:59:18.199 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 13 15:59:18.209219 coreos-metadata[1684]: Feb 13 15:59:18.205 INFO Fetch successful Feb 13 15:59:18.209219 coreos-metadata[1684]: Feb 13 15:59:18.205 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 13 15:59:18.215201 coreos-metadata[1684]: Feb 13 15:59:18.214 INFO Fetch successful Feb 13 15:59:18.216555 coreos-metadata[1684]: Feb 13 15:59:18.216 INFO Fetching http://168.63.129.16/machine/24da0b2d-3f41-456f-b3ea-218d6628a2b8/f93d8635%2D9e48%2D47ed%2D8d90%2Dc3064daee684.%5Fci%2D4186.1.1%2Da%2D254057132e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 13 15:59:18.219036 coreos-metadata[1684]: Feb 13 15:59:18.219 INFO Fetch successful Feb 13 15:59:18.221986 coreos-metadata[1684]: Feb 13 15:59:18.221 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 13 15:59:18.234779 coreos-metadata[1684]: Feb 13 15:59:18.234 INFO Fetch successful Feb 13 15:59:18.294910 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1774) Feb 13 15:59:18.295874 bash[1770]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:59:18.297593 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:59:18.304889 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:59:18.363774 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:59:18.375900 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 15:59:18.475059 locksmithd[1742]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:59:18.810852 sshd_keygen[1717]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:59:18.879079 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:59:18.894555 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:59:18.906387 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Feb 13 15:59:18.931628 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:59:18.931873 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:59:18.950304 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:59:18.958770 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Feb 13 15:59:18.973620 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:59:18.984578 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:59:18.994445 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:59:18.997847 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:59:19.081121 tar[1716]: linux-amd64/LICENSE Feb 13 15:59:19.081280 tar[1716]: linux-amd64/README.md Feb 13 15:59:19.092477 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:59:19.341658 containerd[1719]: time="2025-02-13T15:59:19.341444400Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:59:19.375380 containerd[1719]: time="2025-02-13T15:59:19.375312900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:59:19.376951 containerd[1719]: time="2025-02-13T15:59:19.376905500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:59:19.376951 containerd[1719]: time="2025-02-13T15:59:19.376939600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:59:19.377101 containerd[1719]: time="2025-02-13T15:59:19.376961500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:59:19.377187 containerd[1719]: time="2025-02-13T15:59:19.377161500Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:59:19.377237 containerd[1719]: time="2025-02-13T15:59:19.377192600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:59:19.377304 containerd[1719]: time="2025-02-13T15:59:19.377282800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:59:19.377356 containerd[1719]: time="2025-02-13T15:59:19.377301200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:59:19.377524 containerd[1719]: time="2025-02-13T15:59:19.377500200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:59:19.377524 containerd[1719]: time="2025-02-13T15:59:19.377520100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:59:19.377598 containerd[1719]: time="2025-02-13T15:59:19.377539000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:59:19.377598 containerd[1719]: time="2025-02-13T15:59:19.377557700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:59:19.377694 containerd[1719]: time="2025-02-13T15:59:19.377670600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:59:19.377916 containerd[1719]: time="2025-02-13T15:59:19.377890300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:59:19.378052 containerd[1719]: time="2025-02-13T15:59:19.378029000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:59:19.378052 containerd[1719]: time="2025-02-13T15:59:19.378047900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:59:19.378198 containerd[1719]: time="2025-02-13T15:59:19.378178000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:59:19.378260 containerd[1719]: time="2025-02-13T15:59:19.378241400Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:59:19.402526 containerd[1719]: time="2025-02-13T15:59:19.401738500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:59:19.402526 containerd[1719]: time="2025-02-13T15:59:19.401828800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:59:19.402526 containerd[1719]: time="2025-02-13T15:59:19.401856700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:59:19.402526 containerd[1719]: time="2025-02-13T15:59:19.401932300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:59:19.402526 containerd[1719]: time="2025-02-13T15:59:19.401958800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:59:19.402526 containerd[1719]: time="2025-02-13T15:59:19.402209600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:59:19.402833 containerd[1719]: time="2025-02-13T15:59:19.402659500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:59:19.402833 containerd[1719]: time="2025-02-13T15:59:19.402814200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 15:59:19.402904 containerd[1719]: time="2025-02-13T15:59:19.402836400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:59:19.402904 containerd[1719]: time="2025-02-13T15:59:19.402862700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:59:19.402904 containerd[1719]: time="2025-02-13T15:59:19.402882900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:59:19.403021 containerd[1719]: time="2025-02-13T15:59:19.402904000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:59:19.403021 containerd[1719]: time="2025-02-13T15:59:19.402922100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:59:19.403021 containerd[1719]: time="2025-02-13T15:59:19.402942400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:59:19.403021 containerd[1719]: time="2025-02-13T15:59:19.402962300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:59:19.403021 containerd[1719]: time="2025-02-13T15:59:19.402979200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:59:19.403021 containerd[1719]: time="2025-02-13T15:59:19.402996700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:59:19.403021 containerd[1719]: time="2025-02-13T15:59:19.403016400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403045700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403066300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403083500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403119600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403140000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403158500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403175800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403205500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403227200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:59:19.403249 containerd[1719]: time="2025-02-13T15:59:19.403246700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403274200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403294100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403313500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403334900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403363000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403382600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403398200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403483900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:59:19.403577 containerd[1719]: time="2025-02-13T15:59:19.403510500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:59:19.403931 containerd[1719]: time="2025-02-13T15:59:19.403589000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:59:19.403931 containerd[1719]: time="2025-02-13T15:59:19.403610800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:59:19.403931 containerd[1719]: time="2025-02-13T15:59:19.403626500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:59:19.403931 containerd[1719]: time="2025-02-13T15:59:19.403646200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:59:19.403931 containerd[1719]: time="2025-02-13T15:59:19.403659900Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:59:19.403931 containerd[1719]: time="2025-02-13T15:59:19.403673900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:59:19.407184 containerd[1719]: time="2025-02-13T15:59:19.405505900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:59:19.407184 containerd[1719]: time="2025-02-13T15:59:19.405595700Z" level=info msg="Connect containerd service" Feb 13 15:59:19.407184 containerd[1719]: time="2025-02-13T15:59:19.405663400Z" level=info msg="using legacy CRI server" Feb 13 15:59:19.407184 containerd[1719]: time="2025-02-13T15:59:19.405676100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:59:19.407184 containerd[1719]: time="2025-02-13T15:59:19.405848300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:59:19.408352 containerd[1719]: time="2025-02-13T15:59:19.407923500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:59:19.409731 
containerd[1719]: time="2025-02-13T15:59:19.408243600Z" level=info msg="Start subscribing containerd event" Feb 13 15:59:19.409731 containerd[1719]: time="2025-02-13T15:59:19.409138700Z" level=info msg="Start recovering state" Feb 13 15:59:19.409731 containerd[1719]: time="2025-02-13T15:59:19.409221500Z" level=info msg="Start event monitor" Feb 13 15:59:19.409731 containerd[1719]: time="2025-02-13T15:59:19.409241800Z" level=info msg="Start snapshots syncer" Feb 13 15:59:19.409731 containerd[1719]: time="2025-02-13T15:59:19.409256200Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:59:19.409731 containerd[1719]: time="2025-02-13T15:59:19.409265800Z" level=info msg="Start streaming server" Feb 13 15:59:19.409731 containerd[1719]: time="2025-02-13T15:59:19.408817000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:59:19.409731 containerd[1719]: time="2025-02-13T15:59:19.409443400Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:59:19.410287 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:59:19.419511 containerd[1719]: time="2025-02-13T15:59:19.417628400Z" level=info msg="containerd successfully booted in 0.077389s" Feb 13 15:59:19.478988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:59:19.482958 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:59:19.484965 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:59:19.489486 systemd[1]: Startup finished in 869ms (firmware) + 27.462s (loader) + 980ms (kernel) + 11.740s (initrd) + 11.366s (userspace) = 52.419s. Feb 13 15:59:19.514868 agetty[1856]: failed to open credentials directory Feb 13 15:59:19.514868 agetty[1857]: failed to open credentials directory Feb 13 15:59:19.760442 login[1856]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 15:59:19.764210 login[1857]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 15:59:19.778538 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:59:19.785014 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:59:19.789420 systemd-logind[1702]: New session 2 of user core. Feb 13 15:59:19.795700 systemd-logind[1702]: New session 1 of user core. Feb 13 15:59:19.806601 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:59:19.815409 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:59:19.834187 (systemd)[1880]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:59:20.010458 systemd[1880]: Queued start job for default target default.target. Feb 13 15:59:20.015895 systemd[1880]: Created slice app.slice - User Application Slice. Feb 13 15:59:20.016098 systemd[1880]: Reached target paths.target - Paths. Feb 13 15:59:20.016143 systemd[1880]: Reached target timers.target - Timers. Feb 13 15:59:20.018542 systemd[1880]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:59:20.038359 systemd[1880]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:59:20.038631 systemd[1880]: Reached target sockets.target - Sockets. Feb 13 15:59:20.038660 systemd[1880]: Reached target basic.target - Basic System. 
Feb 13 15:59:20.038708 systemd[1880]: Reached target default.target - Main User Target. Feb 13 15:59:20.038743 systemd[1880]: Startup finished in 197ms. Feb 13 15:59:20.038833 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:59:20.044277 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:59:20.046140 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:59:20.304547 kubelet[1869]: E0213 15:59:20.304320 1869 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:59:20.307211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:59:20.307410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:59:20.682015 waagent[1853]: 2025-02-13T15:59:20.681839Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Feb 13 15:59:20.685449 waagent[1853]: 2025-02-13T15:59:20.685372Z INFO Daemon Daemon OS: flatcar 4186.1.1 Feb 13 15:59:20.687762 waagent[1853]: 2025-02-13T15:59:20.687704Z INFO Daemon Daemon Python: 3.11.10 Feb 13 15:59:20.690274 waagent[1853]: 2025-02-13T15:59:20.690214Z INFO Daemon Daemon Run daemon Feb 13 15:59:20.692334 waagent[1853]: 2025-02-13T15:59:20.692283Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.1' Feb 13 15:59:20.696325 waagent[1853]: 2025-02-13T15:59:20.696274Z INFO Daemon Daemon Using waagent for provisioning Feb 13 15:59:20.698793 waagent[1853]: 2025-02-13T15:59:20.698744Z INFO Daemon Daemon Activate resource disk Feb 13 15:59:20.700988 waagent[1853]: 2025-02-13T15:59:20.700935Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 13 15:59:20.708890 waagent[1853]: 2025-02-13T15:59:20.708829Z INFO Daemon Daemon Found device: None Feb 13 15:59:20.711140 waagent[1853]: 2025-02-13T15:59:20.711080Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 13 15:59:20.715154 waagent[1853]: 2025-02-13T15:59:20.715091Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 13 15:59:20.720593 waagent[1853]: 2025-02-13T15:59:20.720532Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:59:20.723452 waagent[1853]: 2025-02-13T15:59:20.723400Z INFO Daemon Daemon Running default provisioning handler Feb 13 15:59:20.733539 waagent[1853]: 2025-02-13T15:59:20.733479Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Feb 13 15:59:20.740066 waagent[1853]: 2025-02-13T15:59:20.740016Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 13 15:59:20.747786 waagent[1853]: 2025-02-13T15:59:20.740958Z INFO Daemon Daemon cloud-init is enabled: False Feb 13 15:59:20.747786 waagent[1853]: 2025-02-13T15:59:20.741712Z INFO Daemon Daemon Copying ovf-env.xml Feb 13 15:59:20.817609 waagent[1853]: 2025-02-13T15:59:20.815089Z INFO Daemon Daemon Successfully mounted dvd Feb 13 15:59:20.843451 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Feb 13 15:59:20.845329 waagent[1853]: 2025-02-13T15:59:20.845255Z INFO Daemon Daemon Detect protocol endpoint Feb 13 15:59:20.848272 waagent[1853]: 2025-02-13T15:59:20.848210Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 13 15:59:20.860458 waagent[1853]: 2025-02-13T15:59:20.849328Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 13 15:59:20.860458 waagent[1853]: 2025-02-13T15:59:20.849749Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 13 15:59:20.860458 waagent[1853]: 2025-02-13T15:59:20.850797Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 13 15:59:20.860458 waagent[1853]: 2025-02-13T15:59:20.851608Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 13 15:59:20.883039 waagent[1853]: 2025-02-13T15:59:20.882976Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 13 15:59:20.891924 waagent[1853]: 2025-02-13T15:59:20.886445Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 13 15:59:20.891924 waagent[1853]: 2025-02-13T15:59:20.890795Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 13 15:59:21.058024 waagent[1853]: 2025-02-13T15:59:21.057852Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 13 15:59:21.062098 waagent[1853]: 2025-02-13T15:59:21.062020Z INFO Daemon Daemon Forcing an update of the goal state. Feb 13 15:59:21.069014 waagent[1853]: 2025-02-13T15:59:21.068952Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:59:21.085048 waagent[1853]: 2025-02-13T15:59:21.084985Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Feb 13 15:59:21.099926 waagent[1853]: 2025-02-13T15:59:21.086479Z INFO Daemon Feb 13 15:59:21.099926 waagent[1853]: 2025-02-13T15:59:21.088134Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3deffcda-a161-413c-9485-6b922218abca eTag: 3830387527962899310 source: Fabric] Feb 13 15:59:21.099926 waagent[1853]: 2025-02-13T15:59:21.089496Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Feb 13 15:59:21.099926 waagent[1853]: 2025-02-13T15:59:21.090218Z INFO Daemon Feb 13 15:59:21.099926 waagent[1853]: 2025-02-13T15:59:21.090966Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:59:21.102560 waagent[1853]: 2025-02-13T15:59:21.102518Z INFO Daemon Daemon Downloading artifacts profile blob Feb 13 15:59:21.262463 waagent[1853]: 2025-02-13T15:59:21.262379Z INFO Daemon Downloaded certificate {'thumbprint': 'F5B0A52CC0B4C262369EA9AE9B602556B3033782', 'hasPrivateKey': True} Feb 13 15:59:21.274487 waagent[1853]: 2025-02-13T15:59:21.263896Z INFO Daemon Downloaded certificate {'thumbprint': '1360AEA44BF577B479CDD88DDE86824A17AF1625', 'hasPrivateKey': False} Feb 13 15:59:21.274487 waagent[1853]: 2025-02-13T15:59:21.264851Z INFO Daemon Fetch goal state completed Feb 13 15:59:21.315838 waagent[1853]: 2025-02-13T15:59:21.315665Z INFO Daemon Daemon Starting provisioning Feb 13 15:59:21.320974 waagent[1853]: 2025-02-13T15:59:21.317525Z INFO Daemon Daemon Handle ovf-env.xml. 
Feb 13 15:59:21.320974 waagent[1853]: 2025-02-13T15:59:21.318874Z INFO Daemon Daemon Set hostname [ci-4186.1.1-a-254057132e] Feb 13 15:59:21.352517 waagent[1853]: 2025-02-13T15:59:21.352431Z INFO Daemon Daemon Publish hostname [ci-4186.1.1-a-254057132e] Feb 13 15:59:21.359925 waagent[1853]: 2025-02-13T15:59:21.353933Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 13 15:59:21.359925 waagent[1853]: 2025-02-13T15:59:21.354634Z INFO Daemon Daemon Primary interface is [eth0] Feb 13 15:59:21.378182 systemd-networkd[1456]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:59:21.378192 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:59:21.378241 systemd-networkd[1456]: eth0: DHCP lease lost Feb 13 15:59:21.379550 waagent[1853]: 2025-02-13T15:59:21.379464Z INFO Daemon Daemon Create user account if not exists Feb 13 15:59:21.382280 systemd-networkd[1456]: eth0: DHCPv6 lease lost Feb 13 15:59:21.393990 waagent[1853]: 2025-02-13T15:59:21.382271Z INFO Daemon Daemon User core already exists, skip useradd Feb 13 15:59:21.393990 waagent[1853]: 2025-02-13T15:59:21.383203Z INFO Daemon Daemon Configure sudoer Feb 13 15:59:21.393990 waagent[1853]: 2025-02-13T15:59:21.384397Z INFO Daemon Daemon Configure sshd Feb 13 15:59:21.393990 waagent[1853]: 2025-02-13T15:59:21.385135Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Feb 13 15:59:21.393990 waagent[1853]: 2025-02-13T15:59:21.385729Z INFO Daemon Daemon Deploy ssh public key. Feb 13 15:59:21.432238 systemd-networkd[1456]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 13 15:59:30.557910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:59:30.564386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:59:30.658833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:59:30.663463 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:59:31.219680 kubelet[1948]: E0213 15:59:31.219612 1948 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:59:31.223908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:59:31.224136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:59:41.474915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:59:41.480366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:59:41.573711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:59:41.578193 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:59:41.695589 chronyd[1699]: Selected source PHC0 Feb 13 15:59:42.170091 kubelet[1965]: E0213 15:59:42.170021 1965 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:59:42.172798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:59:42.173003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:59:51.464616 waagent[1853]: 2025-02-13T15:59:51.464546Z INFO Daemon Daemon Provisioning complete Feb 13 15:59:51.478867 waagent[1853]: 2025-02-13T15:59:51.478806Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 13 15:59:51.485498 waagent[1853]: 2025-02-13T15:59:51.479939Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 13 15:59:51.485498 waagent[1853]: 2025-02-13T15:59:51.480880Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Feb 13 15:59:51.606255 waagent[1974]: 2025-02-13T15:59:51.606151Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 13 15:59:51.606709 waagent[1974]: 2025-02-13T15:59:51.606332Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.1 Feb 13 15:59:51.606709 waagent[1974]: 2025-02-13T15:59:51.606413Z INFO ExtHandler ExtHandler Python: 3.11.10 Feb 13 15:59:51.664763 waagent[1974]: 2025-02-13T15:59:51.664652Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 13 15:59:51.666457 waagent[1974]: 2025-02-13T15:59:51.666374Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:59:51.666615 waagent[1974]: 2025-02-13T15:59:51.666554Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:59:51.674902 waagent[1974]: 2025-02-13T15:59:51.674827Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 13 15:59:51.686082 waagent[1974]: 2025-02-13T15:59:51.686027Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Feb 13 15:59:51.686583 waagent[1974]: 2025-02-13T15:59:51.686524Z INFO ExtHandler Feb 13 15:59:51.686670 waagent[1974]: 2025-02-13T15:59:51.686621Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0a6800cb-bb86-4577-b44d-fce955b1380b eTag: 3830387527962899310 source: Fabric] Feb 13 15:59:51.686995 waagent[1974]: 2025-02-13T15:59:51.686943Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 13 15:59:51.687595 waagent[1974]: 2025-02-13T15:59:51.687537Z INFO ExtHandler Feb 13 15:59:51.687668 waagent[1974]: 2025-02-13T15:59:51.687626Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 13 15:59:51.691809 waagent[1974]: 2025-02-13T15:59:51.691766Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 13 15:59:51.771031 waagent[1974]: 2025-02-13T15:59:51.770896Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F5B0A52CC0B4C262369EA9AE9B602556B3033782', 'hasPrivateKey': True} Feb 13 15:59:51.771468 waagent[1974]: 2025-02-13T15:59:51.771411Z INFO ExtHandler Downloaded certificate {'thumbprint': '1360AEA44BF577B479CDD88DDE86824A17AF1625', 'hasPrivateKey': False} Feb 13 15:59:51.771925 waagent[1974]: 2025-02-13T15:59:51.771859Z INFO ExtHandler Fetch goal state completed Feb 13 15:59:51.789675 waagent[1974]: 2025-02-13T15:59:51.789605Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1974 Feb 13 15:59:51.789843 waagent[1974]: 2025-02-13T15:59:51.789793Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Feb 13 15:59:51.791467 waagent[1974]: 2025-02-13T15:59:51.791406Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.1', '', 'Flatcar Container Linux by Kinvolk'] Feb 13 15:59:51.791846 waagent[1974]: 2025-02-13T15:59:51.791799Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 13 15:59:51.824956 waagent[1974]: 2025-02-13T15:59:51.824903Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 13 15:59:51.825220 waagent[1974]: 2025-02-13T15:59:51.825163Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 13 15:59:51.833798 waagent[1974]: 2025-02-13T15:59:51.833607Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 13 15:59:51.841167 systemd[1]: Reloading requested from client PID 1989 ('systemctl') (unit waagent.service)... Feb 13 15:59:51.841184 systemd[1]: Reloading... Feb 13 15:59:51.918174 zram_generator::config[2022]: No configuration found. Feb 13 15:59:52.045615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:59:52.133452 systemd[1]: Reloading finished in 291 ms. Feb 13 15:59:52.165135 waagent[1974]: 2025-02-13T15:59:52.160548Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Feb 13 15:59:52.170005 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit waagent.service)... Feb 13 15:59:52.170019 systemd[1]: Reloading... Feb 13 15:59:52.248156 zram_generator::config[2114]: No configuration found. Feb 13 15:59:52.369118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:59:52.457622 systemd[1]: Reloading finished in 287 ms. 
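The "AutoUpdate.Enabled is set to False" message reflects the agent's configuration file; a quick way to see those settings and the unit waagent is adding, assuming the stock /etc/waagent.conf path used by WALinuxAgent:

  # inspect the setting behind the AutoUpdate message above (config path assumed)
  grep -E '^AutoUpdate\.' /etc/waagent.conf
  systemctl status waagent-network-setup.service --no-pager   # unit being set up for persistent firewall rules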
Feb 13 15:59:52.489596 waagent[1974]: 2025-02-13T15:59:52.487345Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Feb 13 15:59:52.489596 waagent[1974]: 2025-02-13T15:59:52.487574Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Feb 13 15:59:52.488695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:59:52.497444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:59:52.659525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:59:52.670462 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:59:53.157539 kubelet[2185]: E0213 15:59:53.157472 2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:59:53.160208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:59:53.160406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:59:53.396247 waagent[1974]: 2025-02-13T15:59:53.396139Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 13 15:59:53.397079 waagent[1974]: 2025-02-13T15:59:53.397001Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 13 15:59:53.398028 waagent[1974]: 2025-02-13T15:59:53.397961Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 13 15:59:53.398550 waagent[1974]: 2025-02-13T15:59:53.398472Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 13 15:59:53.398711 waagent[1974]: 2025-02-13T15:59:53.398656Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:59:53.398857 waagent[1974]: 2025-02-13T15:59:53.398799Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:59:53.399358 waagent[1974]: 2025-02-13T15:59:53.399284Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 13 15:59:53.399484 waagent[1974]: 2025-02-13T15:59:53.399398Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 13 15:59:53.399818 waagent[1974]: 2025-02-13T15:59:53.399732Z INFO EnvHandler ExtHandler Configure routes Feb 13 15:59:53.400207 waagent[1974]: 2025-02-13T15:59:53.400154Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 13 15:59:53.400569 waagent[1974]: 2025-02-13T15:59:53.400493Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 13 15:59:53.400724 waagent[1974]: 2025-02-13T15:59:53.400646Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 13 15:59:53.400906 waagent[1974]: 2025-02-13T15:59:53.400854Z INFO EnvHandler ExtHandler Gateway:None Feb 13 15:59:53.401012 waagent[1974]: 2025-02-13T15:59:53.400964Z INFO EnvHandler ExtHandler Routes:None Feb 13 15:59:53.401934 waagent[1974]: 2025-02-13T15:59:53.401881Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 13 15:59:53.402552 waagent[1974]: 2025-02-13T15:59:53.402476Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 13 15:59:53.409173 waagent[1974]: 2025-02-13T15:59:53.408309Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 13 15:59:53.409173 waagent[1974]: 2025-02-13T15:59:53.408562Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 13 15:59:53.409173 waagent[1974]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 13 15:59:53.409173 waagent[1974]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 13 15:59:53.409173 waagent[1974]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 13 15:59:53.409173 waagent[1974]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:59:53.409173 waagent[1974]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:59:53.409173 waagent[1974]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 13 15:59:53.411523 waagent[1974]: 2025-02-13T15:59:53.411482Z INFO ExtHandler ExtHandler Feb 13 15:59:53.411627 waagent[1974]: 2025-02-13T15:59:53.411581Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a25e3044-04a4-45c1-9597-9ed759bb3564 correlation 254ec5b0-0b67-495a-90d7-f54609003822 created: 2025-02-13T15:58:16.558975Z] Feb 13 15:59:53.412088 waagent[1974]: 2025-02-13T15:59:53.412035Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 13 15:59:53.412894 waagent[1974]: 2025-02-13T15:59:53.412841Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Feb 13 15:59:53.451158 waagent[1974]: 2025-02-13T15:59:53.451085Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A88CBC14-8783-4AD2-9DD9-862D9CAE5E8E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Feb 13 15:59:53.506005 waagent[1974]: 2025-02-13T15:59:53.505937Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
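The routing table dumped from /proc/net/route encodes addresses as little-endian hex, so gateway 0108C80A is 10.200.8.1, 0008C80A/00FFFFFF is 10.200.8.0/24, 10813FA8 is the Azure wire server 168.63.129.16, and FEA9FEA9 is 169.254.169.254. A small bash sketch for decoding one entry by hand:

  # decode a little-endian hex address from /proc/net/route (e.g. 0108C80A -> 10.200.8.1)
  h=0108C80A
  printf '%d.%d.%d.%d\n' "0x${h:6:2}" "0x${h:4:2}" "0x${h:2:2}" "0x${h:0:2}"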
Current Firewall rules: Feb 13 15:59:53.506005 waagent[1974]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:59:53.506005 waagent[1974]: pkts bytes target prot opt in out source destination Feb 13 15:59:53.506005 waagent[1974]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:59:53.506005 waagent[1974]: pkts bytes target prot opt in out source destination Feb 13 15:59:53.506005 waagent[1974]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:59:53.506005 waagent[1974]: pkts bytes target prot opt in out source destination Feb 13 15:59:53.506005 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:59:53.506005 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:59:53.506005 waagent[1974]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:59:53.509254 waagent[1974]: 2025-02-13T15:59:53.509195Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 13 15:59:53.509254 waagent[1974]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:59:53.509254 waagent[1974]: pkts bytes target prot opt in out source destination Feb 13 15:59:53.509254 waagent[1974]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:59:53.509254 waagent[1974]: pkts bytes target prot opt in out source destination Feb 13 15:59:53.509254 waagent[1974]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 13 15:59:53.509254 waagent[1974]: pkts bytes target prot opt in out source destination Feb 13 15:59:53.509254 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 13 15:59:53.509254 waagent[1974]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 13 15:59:53.509254 waagent[1974]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 13 15:59:53.509605 waagent[1974]: 2025-02-13T15:59:53.509497Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 13 15:59:53.515836 waagent[1974]: 2025-02-13T15:59:53.515777Z INFO MonitorHandler ExtHandler Network interfaces: Feb 13 15:59:53.515836 waagent[1974]: Executing ['ip', '-a', '-o', 'link']: Feb 13 15:59:53.515836 waagent[1974]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 13 15:59:53.515836 waagent[1974]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:4d:8f brd ff:ff:ff:ff:ff:ff Feb 13 15:59:53.515836 waagent[1974]: 3: enP44566s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:4d:8f brd ff:ff:ff:ff:ff:ff\ altname enP44566p0s2 Feb 13 15:59:53.515836 waagent[1974]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 13 15:59:53.515836 waagent[1974]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 13 15:59:53.515836 waagent[1974]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 13 15:59:53.515836 waagent[1974]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 13 15:59:53.515836 waagent[1974]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Feb 13 15:59:53.515836 waagent[1974]: 2: eth0 inet6 fe80::6245:bdff:fedd:4d8f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 15:59:53.515836 waagent[1974]: 3: enP44566s1 inet6 fe80::6245:bdff:fedd:4d8f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Feb 13 
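The OUTPUT-chain rules listed above allow DNS (port 53) and root-owned traffic to the wire server 168.63.129.16 and drop new or invalid connections from other users. A roughly equivalent set of iptables commands, as a sketch rather than waagent's exact invocation, plus the sysfs knob behind the "Set block dev timeout" message (disk path assumed for a SCSI device):

  iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
  iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
  iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
  echo 300 > /sys/block/sda/device/timeout   # "Set block dev timeout: sda with timeout: 300"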
16:00:01.693925 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 13 16:00:03.162430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 16:00:03.175362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:03.267841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:00:03.277416 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:00:03.438355 update_engine[1704]: I20250213 16:00:03.438176 1704 update_attempter.cc:509] Updating boot flags... Feb 13 16:00:03.879242 kubelet[2229]: E0213 16:00:03.878996 2229 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:00:03.881373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:00:03.881540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:00:03.911126 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2252) Feb 13 16:00:13.912327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 16:00:13.919395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:14.010955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:00:14.022566 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:00:14.067639 kubelet[2308]: E0213 16:00:14.067522 2308 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:00:14.070344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:00:14.070557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:00:23.927656 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:00:23.933418 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:38784.service - OpenSSH per-connection server daemon (10.200.16.10:38784). Feb 13 16:00:24.162362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 16:00:24.170392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:24.530179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:00:24.540427 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:00:24.788792 kubelet[2328]: E0213 16:00:24.788637 2328 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:00:24.791557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:00:24.791755 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:00:24.907033 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 38784 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:00:24.908750 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:00:24.913829 systemd-logind[1702]: New session 3 of user core. Feb 13 16:00:24.924413 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:00:25.457246 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:38786.service - OpenSSH per-connection server daemon (10.200.16.10:38786). Feb 13 16:00:26.088882 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 38786 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:00:26.090466 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:00:26.095359 systemd-logind[1702]: New session 4 of user core. Feb 13 16:00:26.104278 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:00:26.531325 sshd[2342]: Connection closed by 10.200.16.10 port 38786 Feb 13 16:00:26.532228 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Feb 13 16:00:26.535659 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:38786.service: Deactivated successfully. Feb 13 16:00:26.538019 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:00:26.539815 systemd-logind[1702]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:00:26.540871 systemd-logind[1702]: Removed session 4. Feb 13 16:00:26.642919 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:38802.service - OpenSSH per-connection server daemon (10.200.16.10:38802). Feb 13 16:00:27.270816 sshd[2347]: Accepted publickey for core from 10.200.16.10 port 38802 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:00:27.272568 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:00:27.277624 systemd-logind[1702]: New session 5 of user core. Feb 13 16:00:27.287261 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:00:27.709647 sshd[2349]: Connection closed by 10.200.16.10 port 38802 Feb 13 16:00:27.710526 sshd-session[2347]: pam_unix(sshd:session): session closed for user core Feb 13 16:00:27.713893 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:38802.service: Deactivated successfully. Feb 13 16:00:27.716095 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:00:27.717577 systemd-logind[1702]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:00:27.718707 systemd-logind[1702]: Removed session 5. Feb 13 16:00:27.821195 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:38806.service - OpenSSH per-connection server daemon (10.200.16.10:38806). 
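sshd logs the SHA256 fingerprint of the key it accepted for user core; assuming the key waagent deployed earlier landed in the usual per-user file, the fingerprint can be compared like this (path assumed):

  # print fingerprints of the deployed keys and compare with the "Accepted publickey" line above
  ssh-keygen -lf /home/core/.ssh/authorized_keys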
Feb 13 16:00:28.449302 sshd[2354]: Accepted publickey for core from 10.200.16.10 port 38806 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:00:28.451026 sshd-session[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:00:28.455349 systemd-logind[1702]: New session 6 of user core. Feb 13 16:00:28.462487 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:00:28.908259 sshd[2356]: Connection closed by 10.200.16.10 port 38806 Feb 13 16:00:28.909425 sshd-session[2354]: pam_unix(sshd:session): session closed for user core Feb 13 16:00:28.912358 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:38806.service: Deactivated successfully. Feb 13 16:00:28.914325 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:00:28.915870 systemd-logind[1702]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:00:28.916911 systemd-logind[1702]: Removed session 6. Feb 13 16:00:29.019225 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:50574.service - OpenSSH per-connection server daemon (10.200.16.10:50574). Feb 13 16:00:29.646262 sshd[2361]: Accepted publickey for core from 10.200.16.10 port 50574 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:00:29.647959 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:00:29.653617 systemd-logind[1702]: New session 7 of user core. Feb 13 16:00:29.660255 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:00:30.131483 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:00:30.131850 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:00:30.160865 sudo[2364]: pam_unix(sudo:session): session closed for user root Feb 13 16:00:30.260893 sshd[2363]: Connection closed by 10.200.16.10 port 50574 Feb 13 16:00:30.261967 sshd-session[2361]: pam_unix(sshd:session): session closed for user core Feb 13 16:00:30.265169 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:50574.service: Deactivated successfully. Feb 13 16:00:30.267360 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:00:30.268766 systemd-logind[1702]: Session 7 logged out. Waiting for processes to exit. Feb 13 16:00:30.269919 systemd-logind[1702]: Removed session 7. Feb 13 16:00:30.377415 systemd[1]: Started sshd@5-10.200.8.12:22-10.200.16.10:50590.service - OpenSSH per-connection server daemon (10.200.16.10:50590). Feb 13 16:00:31.002633 sshd[2369]: Accepted publickey for core from 10.200.16.10 port 50590 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:00:31.004318 sshd-session[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:00:31.008490 systemd-logind[1702]: New session 8 of user core. Feb 13 16:00:31.015259 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 16:00:31.347261 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:00:31.347718 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:00:31.351935 sudo[2373]: pam_unix(sudo:session): session closed for user root Feb 13 16:00:31.357056 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 16:00:31.357408 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:00:31.370502 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 16:00:31.397178 augenrules[2395]: No rules Feb 13 16:00:31.398601 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:00:31.398835 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:00:31.400088 sudo[2372]: pam_unix(sudo:session): session closed for user root Feb 13 16:00:31.526029 sshd[2371]: Connection closed by 10.200.16.10 port 50590 Feb 13 16:00:31.526894 sshd-session[2369]: pam_unix(sshd:session): session closed for user core Feb 13 16:00:31.531637 systemd[1]: sshd@5-10.200.8.12:22-10.200.16.10:50590.service: Deactivated successfully. Feb 13 16:00:31.533882 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:00:31.534948 systemd-logind[1702]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:00:31.536051 systemd-logind[1702]: Removed session 8. Feb 13 16:00:31.636143 systemd[1]: Started sshd@6-10.200.8.12:22-10.200.16.10:50596.service - OpenSSH per-connection server daemon (10.200.16.10:50596). Feb 13 16:00:32.263173 sshd[2403]: Accepted publickey for core from 10.200.16.10 port 50596 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:00:32.264608 sshd-session[2403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:00:32.269444 systemd-logind[1702]: New session 9 of user core. Feb 13 16:00:32.276269 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:00:32.605607 sudo[2406]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:00:32.605982 sudo[2406]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:00:34.245461 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 16:00:34.245541 (dockerd)[2424]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 16:00:34.912307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 16:00:34.917337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:35.518274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:00:35.523220 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:00:35.571672 kubelet[2432]: E0213 16:00:35.571560 2432 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:00:35.574333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:00:35.574534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:00:36.231025 dockerd[2424]: time="2025-02-13T16:00:36.230096735Z" level=info msg="Starting up" Feb 13 16:00:36.779947 dockerd[2424]: time="2025-02-13T16:00:36.779898915Z" level=info msg="Loading containers: start." Feb 13 16:00:37.140135 kernel: Initializing XFRM netlink socket Feb 13 16:00:37.280904 systemd-networkd[1456]: docker0: Link UP Feb 13 16:00:37.325483 dockerd[2424]: time="2025-02-13T16:00:37.325430205Z" level=info msg="Loading containers: done." Feb 13 16:00:37.388707 dockerd[2424]: time="2025-02-13T16:00:37.388639636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 16:00:37.388924 dockerd[2424]: time="2025-02-13T16:00:37.388777839Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 16:00:37.388983 dockerd[2424]: time="2025-02-13T16:00:37.388945043Z" level=info msg="Daemon has completed initialization" Feb 13 16:00:37.442581 dockerd[2424]: time="2025-02-13T16:00:37.442364868Z" level=info msg="API listen on /run/docker.sock" Feb 13 16:00:37.442967 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 16:00:39.514292 containerd[1719]: time="2025-02-13T16:00:39.514242091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 16:00:40.216835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933009419.mount: Deactivated successfully. 
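dockerd comes up with the overlay2 storage driver (with the native-diff warning) and listens on /run/docker.sock. A quick check, assuming the docker CLI is installed on the node:

  # confirm the storage driver and daemon version reported in the log above
  docker info --format '{{.Driver}} {{.ServerVersion}}'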
Feb 13 16:00:42.134669 containerd[1719]: time="2025-02-13T16:00:42.134519261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:42.140302 containerd[1719]: time="2025-02-13T16:00:42.140051878Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142291" Feb 13 16:00:42.144813 containerd[1719]: time="2025-02-13T16:00:42.144750277Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:42.150228 containerd[1719]: time="2025-02-13T16:00:42.150178292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:42.151446 containerd[1719]: time="2025-02-13T16:00:42.151245315Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 2.636956822s" Feb 13 16:00:42.151446 containerd[1719]: time="2025-02-13T16:00:42.151285916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 16:00:42.174193 containerd[1719]: time="2025-02-13T16:00:42.174161200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 16:00:44.030708 containerd[1719]: time="2025-02-13T16:00:44.030628500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:44.036993 containerd[1719]: time="2025-02-13T16:00:44.036915633Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213172" Feb 13 16:00:44.039541 containerd[1719]: time="2025-02-13T16:00:44.039485387Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:44.045979 containerd[1719]: time="2025-02-13T16:00:44.045948624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:44.047124 containerd[1719]: time="2025-02-13T16:00:44.046933745Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 1.872699944s" Feb 13 16:00:44.047124 containerd[1719]: time="2025-02-13T16:00:44.046976246Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 
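The image pulls above are performed by containerd; assuming the ctr and crictl clients are available and the images land in containerd's k8s.io namespace (typical for CRI pulls), they can be listed like this:

  # list the control-plane images containerd just pulled (namespace assumed)
  ctr -n k8s.io images ls | grep kube-apiserver
  crictl images | grep registry.k8s.io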
16:00:44.071302 containerd[1719]: time="2025-02-13T16:00:44.071192459Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 16:00:45.402364 containerd[1719]: time="2025-02-13T16:00:45.402297537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:45.404254 containerd[1719]: time="2025-02-13T16:00:45.404193177Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334064" Feb 13 16:00:45.408372 containerd[1719]: time="2025-02-13T16:00:45.408316965Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:45.413752 containerd[1719]: time="2025-02-13T16:00:45.413690178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:45.418025 containerd[1719]: time="2025-02-13T16:00:45.416807944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 1.345570485s" Feb 13 16:00:45.418025 containerd[1719]: time="2025-02-13T16:00:45.416849245Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 16:00:45.440178 containerd[1719]: time="2025-02-13T16:00:45.440143638Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 16:00:45.662275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 16:00:45.667695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:45.762339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:00:45.769404 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:00:45.813667 kubelet[2711]: E0213 16:00:45.813606 2711 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:00:45.816369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:00:45.816583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:00:47.122800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706245865.mount: Deactivated successfully. 
Feb 13 16:00:47.597890 containerd[1719]: time="2025-02-13T16:00:47.597823899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:47.601975 containerd[1719]: time="2025-02-13T16:00:47.601899086Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620600" Feb 13 16:00:47.605695 containerd[1719]: time="2025-02-13T16:00:47.605641265Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:47.610830 containerd[1719]: time="2025-02-13T16:00:47.610777475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:47.611544 containerd[1719]: time="2025-02-13T16:00:47.611390688Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 2.171210748s" Feb 13 16:00:47.611544 containerd[1719]: time="2025-02-13T16:00:47.611426988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 16:00:47.632038 containerd[1719]: time="2025-02-13T16:00:47.632005626Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 16:00:48.187271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587049438.mount: Deactivated successfully. 
Feb 13 16:00:49.456591 containerd[1719]: time="2025-02-13T16:00:49.456532637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:49.460783 containerd[1719]: time="2025-02-13T16:00:49.460723826Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 13 16:00:49.469469 containerd[1719]: time="2025-02-13T16:00:49.469411911Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:49.474901 containerd[1719]: time="2025-02-13T16:00:49.474817226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:49.476299 containerd[1719]: time="2025-02-13T16:00:49.476129154Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.844069726s" Feb 13 16:00:49.476299 containerd[1719]: time="2025-02-13T16:00:49.476172355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 16:00:49.499944 containerd[1719]: time="2025-02-13T16:00:49.499766957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 16:00:50.061758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293970531.mount: Deactivated successfully. 
Feb 13 16:00:50.084704 containerd[1719]: time="2025-02-13T16:00:50.084644698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:50.086984 containerd[1719]: time="2025-02-13T16:00:50.086920046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Feb 13 16:00:50.091238 containerd[1719]: time="2025-02-13T16:00:50.091190637Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:50.096410 containerd[1719]: time="2025-02-13T16:00:50.096359847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:50.097261 containerd[1719]: time="2025-02-13T16:00:50.097088263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 597.275806ms" Feb 13 16:00:50.097261 containerd[1719]: time="2025-02-13T16:00:50.097142464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 16:00:50.121174 containerd[1719]: time="2025-02-13T16:00:50.121132174Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 16:00:50.724242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916931634.mount: Deactivated successfully. Feb 13 16:00:53.048741 containerd[1719]: time="2025-02-13T16:00:53.048677148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:53.051821 containerd[1719]: time="2025-02-13T16:00:53.051761714Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Feb 13 16:00:53.056798 containerd[1719]: time="2025-02-13T16:00:53.056739319Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:53.062193 containerd[1719]: time="2025-02-13T16:00:53.062138834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:00:53.063204 containerd[1719]: time="2025-02-13T16:00:53.063166856Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.941987381s" Feb 13 16:00:53.063289 containerd[1719]: time="2025-02-13T16:00:53.063210257Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 16:00:55.912276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Feb 13 16:00:55.922393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:56.056282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:00:56.067692 (kubelet)[2897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:00:56.177381 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:56.542970 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:00:56.543318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:00:56.553471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:56.574917 systemd[1]: Reloading requested from client PID 2910 ('systemctl') (unit session-9.scope)... Feb 13 16:00:56.574933 systemd[1]: Reloading... Feb 13 16:00:56.683136 zram_generator::config[2951]: No configuration found. Feb 13 16:00:56.813717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:00:56.903989 systemd[1]: Reloading finished in 328 ms. Feb 13 16:00:56.961557 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:00:56.961665 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:00:56.961928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:00:56.968547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:00:57.259797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:00:57.275465 (kubelet)[3017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:00:57.321746 kubelet[3017]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:00:57.321746 kubelet[3017]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:00:57.321746 kubelet[3017]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
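The deprecation warnings note that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir should move into the kubelet config file; the KUBELET_KUBEADM_ARGS variable referenced in the earlier start attempts is typically populated by a systemd drop-in. A hypothetical sketch of such a drop-in (file name, socket path and image tag are assumptions, not taken from this host):

  # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative only)
  [Service]
  Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  Environment="KUBELET_KUBEADM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9"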
Feb 13 16:00:57.322270 kubelet[3017]: I0213 16:00:57.321796 3017 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:00:57.677609 kubelet[3017]: I0213 16:00:57.677566 3017 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:00:57.677609 kubelet[3017]: I0213 16:00:57.677598 3017 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:00:57.677905 kubelet[3017]: I0213 16:00:57.677882 3017 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:00:57.713528 kubelet[3017]: E0213 16:00:57.713478 3017 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.714021 kubelet[3017]: I0213 16:00:57.713988 3017 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:00:57.725136 kubelet[3017]: I0213 16:00:57.725092 3017 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 16:00:57.726081 kubelet[3017]: I0213 16:00:57.726050 3017 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:00:57.726300 kubelet[3017]: I0213 16:00:57.726276 3017 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:00:57.726465 kubelet[3017]: I0213 16:00:57.726308 3017 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:00:57.726465 kubelet[3017]: I0213 16:00:57.726341 3017 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:00:57.726543 kubelet[3017]: I0213 16:00:57.726491 3017 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:00:57.726621 kubelet[3017]: I0213 16:00:57.726601 3017 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:00:57.726674 kubelet[3017]: 
I0213 16:00:57.726625 3017 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:00:57.726674 kubelet[3017]: I0213 16:00:57.726663 3017 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:00:57.726871 kubelet[3017]: I0213 16:00:57.726684 3017 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:00:57.729337 kubelet[3017]: W0213 16:00:57.727985 3017 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.729337 kubelet[3017]: E0213 16:00:57.728041 3017 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.729337 kubelet[3017]: W0213 16:00:57.728337 3017 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-254057132e&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.729337 kubelet[3017]: E0213 16:00:57.728379 3017 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-254057132e&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.729953 kubelet[3017]: I0213 16:00:57.729671 3017 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 16:00:57.733007 kubelet[3017]: I0213 16:00:57.732984 3017 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:00:57.733742 kubelet[3017]: W0213 16:00:57.733719 3017 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 16:00:57.734888 kubelet[3017]: I0213 16:00:57.734866 3017 server.go:1256] "Started kubelet" Feb 13 16:00:57.737866 kubelet[3017]: I0213 16:00:57.737817 3017 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:00:57.742275 kubelet[3017]: E0213 16:00:57.742071 3017 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.1-a-254057132e.1823cfe3a5698c40 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-a-254057132e,UID:ci-4186.1.1-a-254057132e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-a-254057132e,},FirstTimestamp:2025-02-13 16:00:57.734827072 +0000 UTC m=+0.454600861,LastTimestamp:2025-02-13 16:00:57.734827072 +0000 UTC m=+0.454600861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-a-254057132e,}" Feb 13 16:00:57.745941 kubelet[3017]: I0213 16:00:57.744374 3017 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:00:57.745941 kubelet[3017]: I0213 16:00:57.744543 3017 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:00:57.745941 kubelet[3017]: I0213 16:00:57.744843 3017 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:00:57.745941 kubelet[3017]: I0213 16:00:57.745463 3017 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:00:57.749327 kubelet[3017]: I0213 16:00:57.749066 3017 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:00:57.749327 kubelet[3017]: I0213 16:00:57.749252 3017 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:00:57.750748 kubelet[3017]: I0213 16:00:57.750725 3017 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:00:57.751505 kubelet[3017]: E0213 16:00:57.751148 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-254057132e?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="200ms" Feb 13 16:00:57.751505 kubelet[3017]: W0213 16:00:57.751243 3017 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.751505 kubelet[3017]: E0213 16:00:57.751294 3017 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.752796 kubelet[3017]: E0213 16:00:57.752220 3017 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:00:57.752796 kubelet[3017]: I0213 16:00:57.752465 3017 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:00:57.752796 kubelet[3017]: I0213 16:00:57.752554 3017 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:00:57.756202 kubelet[3017]: I0213 16:00:57.754271 3017 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:00:57.797529 kubelet[3017]: I0213 16:00:57.797502 3017 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:00:57.797690 kubelet[3017]: I0213 16:00:57.797536 3017 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:00:57.797690 kubelet[3017]: I0213 16:00:57.797565 3017 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:00:57.804168 kubelet[3017]: I0213 16:00:57.804030 3017 policy_none.go:49] "None policy: Start" Feb 13 16:00:57.804832 kubelet[3017]: I0213 16:00:57.804681 3017 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:00:57.804832 kubelet[3017]: I0213 16:00:57.804791 3017 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:00:57.808995 kubelet[3017]: I0213 16:00:57.808841 3017 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:00:57.810625 kubelet[3017]: I0213 16:00:57.810569 3017 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:00:57.810625 kubelet[3017]: I0213 16:00:57.810600 3017 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:00:57.810751 kubelet[3017]: I0213 16:00:57.810669 3017 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:00:57.810751 kubelet[3017]: E0213 16:00:57.810732 3017 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:00:57.815339 kubelet[3017]: W0213 16:00:57.815313 3017 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.815560 kubelet[3017]: E0213 16:00:57.815452 3017 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:57.819346 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 16:00:57.829870 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 16:00:57.842635 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
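All of the "dial tcp 10.200.8.12:6443: connect: connection refused" errors (reflectors, lease controller, event posting, node registration) are expected while kubelet is still bringing up the control-plane static pods; they stop once kube-apiserver is serving. A quick probe of the endpoint kubelet is retrying, assuming curl on the node (-k skips TLS verification):

  curl -sk https://10.200.8.12:6443/healthz; echo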
Feb 13 16:00:57.844049 kubelet[3017]: I0213 16:00:57.844024 3017 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:00:57.844754 kubelet[3017]: I0213 16:00:57.844482 3017 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:00:57.846236 kubelet[3017]: E0213 16:00:57.846213 3017 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.1-a-254057132e\" not found" Feb 13 16:00:57.850950 kubelet[3017]: I0213 16:00:57.850907 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-254057132e" Feb 13 16:00:57.851286 kubelet[3017]: E0213 16:00:57.851260 3017 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.1-a-254057132e" Feb 13 16:00:57.911170 kubelet[3017]: I0213 16:00:57.911100 3017 topology_manager.go:215] "Topology Admit Handler" podUID="be172158f3f95cb2ae212b2afc80b1ba" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:00:57.913391 kubelet[3017]: I0213 16:00:57.913353 3017 topology_manager.go:215] "Topology Admit Handler" podUID="6572e495cff4d5096bef49be53c6e917" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:00:57.915442 kubelet[3017]: I0213 16:00:57.915133 3017 topology_manager.go:215] "Topology Admit Handler" podUID="91e31d0271884f74fb669c54bc5fe84b" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.1-a-254057132e" Feb 13 16:00:57.922871 systemd[1]: Created slice kubepods-burstable-podbe172158f3f95cb2ae212b2afc80b1ba.slice - libcontainer container kubepods-burstable-podbe172158f3f95cb2ae212b2afc80b1ba.slice. Feb 13 16:00:57.937255 systemd[1]: Created slice kubepods-burstable-pod91e31d0271884f74fb669c54bc5fe84b.slice - libcontainer container kubepods-burstable-pod91e31d0271884f74fb669c54bc5fe84b.slice. Feb 13 16:00:57.942745 systemd[1]: Created slice kubepods-burstable-pod6572e495cff4d5096bef49be53c6e917.slice - libcontainer container kubepods-burstable-pod6572e495cff4d5096bef49be53c6e917.slice. 
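The three pods admitted above come from the static pod path registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests"); each admitted pod corresponds to a manifest there, and kubelet creates a kubepods-burstable slice per pod. A hedged check of that directory:

  # static pod manifests kubelet is admitting (path taken from the log; file names are typical, not confirmed here)
  ls -l /etc/kubernetes/manifests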
Feb 13 16:00:57.951922 kubelet[3017]: E0213 16:00:57.951892 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-254057132e?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="400ms" Feb 13 16:00:58.053170 kubelet[3017]: I0213 16:00:58.052646 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.053170 kubelet[3017]: I0213 16:00:58.052721 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.053170 kubelet[3017]: I0213 16:00:58.052765 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.053170 kubelet[3017]: I0213 16:00:58.052844 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91e31d0271884f74fb669c54bc5fe84b-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-a-254057132e\" (UID: \"91e31d0271884f74fb669c54bc5fe84b\") " pod="kube-system/kube-scheduler-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.053170 kubelet[3017]: I0213 16:00:58.052886 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.053637 kubelet[3017]: I0213 16:00:58.052937 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be172158f3f95cb2ae212b2afc80b1ba-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-a-254057132e\" (UID: \"be172158f3f95cb2ae212b2afc80b1ba\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.053637 kubelet[3017]: I0213 16:00:58.053031 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be172158f3f95cb2ae212b2afc80b1ba-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-a-254057132e\" (UID: \"be172158f3f95cb2ae212b2afc80b1ba\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.053637 kubelet[3017]: I0213 16:00:58.053090 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/be172158f3f95cb2ae212b2afc80b1ba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-a-254057132e\" (UID: \"be172158f3f95cb2ae212b2afc80b1ba\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.053637 kubelet[3017]: I0213 16:00:58.053627 3017 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:00:58.054149 kubelet[3017]: I0213 16:00:58.053964 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-254057132e" Feb 13 16:00:58.054499 kubelet[3017]: E0213 16:00:58.054472 3017 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.1-a-254057132e" Feb 13 16:00:58.236484 containerd[1719]: time="2025-02-13T16:00:58.236334950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-a-254057132e,Uid:be172158f3f95cb2ae212b2afc80b1ba,Namespace:kube-system,Attempt:0,}" Feb 13 16:00:58.242344 containerd[1719]: time="2025-02-13T16:00:58.242042473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-a-254057132e,Uid:91e31d0271884f74fb669c54bc5fe84b,Namespace:kube-system,Attempt:0,}" Feb 13 16:00:58.245943 containerd[1719]: time="2025-02-13T16:00:58.245911057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-a-254057132e,Uid:6572e495cff4d5096bef49be53c6e917,Namespace:kube-system,Attempt:0,}" Feb 13 16:00:58.352548 kubelet[3017]: E0213 16:00:58.352509 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-254057132e?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="800ms" Feb 13 16:00:58.457351 kubelet[3017]: I0213 16:00:58.457308 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-254057132e" Feb 13 16:00:58.460033 kubelet[3017]: E0213 16:00:58.457725 3017 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.1-a-254057132e" Feb 13 16:00:58.788896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017833538.mount: Deactivated successfully. 
Feb 13 16:00:58.826565 containerd[1719]: time="2025-02-13T16:00:58.826510151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:00:58.835117 containerd[1719]: time="2025-02-13T16:00:58.835064336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 16:00:58.853629 containerd[1719]: time="2025-02-13T16:00:58.853575238Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:00:58.858635 containerd[1719]: time="2025-02-13T16:00:58.858585746Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:00:58.864484 containerd[1719]: time="2025-02-13T16:00:58.864443973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:00:58.867653 containerd[1719]: time="2025-02-13T16:00:58.867602642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:00:58.867775 kubelet[3017]: W0213 16:00:58.867710 3017 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-254057132e&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:58.867834 kubelet[3017]: E0213 16:00:58.867788 3017 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-254057132e&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:58.871444 containerd[1719]: time="2025-02-13T16:00:58.871401424Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:00:58.876512 containerd[1719]: time="2025-02-13T16:00:58.876454734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:00:58.877962 containerd[1719]: time="2025-02-13T16:00:58.877364554Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 631.368694ms" Feb 13 16:00:58.891302 containerd[1719]: time="2025-02-13T16:00:58.891008149Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.803472ms" Feb 13 16:00:58.896144 containerd[1719]: 
time="2025-02-13T16:00:58.896095860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 659.630408ms" Feb 13 16:00:59.136708 kubelet[3017]: W0213 16:00:59.136575 3017 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:59.136708 kubelet[3017]: E0213 16:00:59.136623 3017 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:59.153770 kubelet[3017]: E0213 16:00:59.153729 3017 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-254057132e?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="1.6s" Feb 13 16:00:59.260479 kubelet[3017]: I0213 16:00:59.260443 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-254057132e" Feb 13 16:00:59.260849 kubelet[3017]: E0213 16:00:59.260822 3017 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-4186.1.1-a-254057132e" Feb 13 16:00:59.285583 kubelet[3017]: W0213 16:00:59.285521 3017 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:59.285710 kubelet[3017]: E0213 16:00:59.285589 3017 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:59.357486 kubelet[3017]: W0213 16:00:59.357440 3017 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:59.357486 kubelet[3017]: E0213 16:00:59.357494 3017 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:59.512403 containerd[1719]: time="2025-02-13T16:00:59.510279281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:00:59.512403 containerd[1719]: time="2025-02-13T16:00:59.510362183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:00:59.512403 containerd[1719]: time="2025-02-13T16:00:59.510382284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:00:59.513585 containerd[1719]: time="2025-02-13T16:00:59.508533444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:00:59.513585 containerd[1719]: time="2025-02-13T16:00:59.512584231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:00:59.513585 containerd[1719]: time="2025-02-13T16:00:59.512652133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:00:59.513817 containerd[1719]: time="2025-02-13T16:00:59.513387249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:00:59.513817 containerd[1719]: time="2025-02-13T16:00:59.513422050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:00:59.518438 containerd[1719]: time="2025-02-13T16:00:59.518192953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:00:59.518438 containerd[1719]: time="2025-02-13T16:00:59.518251854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:00:59.518438 containerd[1719]: time="2025-02-13T16:00:59.518269955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:00:59.518438 containerd[1719]: time="2025-02-13T16:00:59.518366757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:00:59.549294 systemd[1]: Started cri-containerd-f9a05f41940bae03d517477cd3bc5bb7529aefe7ccda2c77973d6dab5d0a06f0.scope - libcontainer container f9a05f41940bae03d517477cd3bc5bb7529aefe7ccda2c77973d6dab5d0a06f0. Feb 13 16:00:59.556531 systemd[1]: Started cri-containerd-b7f985afe6ca5a77e76cb0cf238e5d5d3ae8176554d20ed27174e7807814883a.scope - libcontainer container b7f985afe6ca5a77e76cb0cf238e5d5d3ae8176554d20ed27174e7807814883a. Feb 13 16:00:59.559784 systemd[1]: Started cri-containerd-ce19c61c9c31be86be46f4ec0645e3be1aa8d975d4197476ffd8fed54938b870.scope - libcontainer container ce19c61c9c31be86be46f4ec0645e3be1aa8d975d4197476ffd8fed54938b870. 
Feb 13 16:00:59.637294 containerd[1719]: time="2025-02-13T16:00:59.637239935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-a-254057132e,Uid:91e31d0271884f74fb669c54bc5fe84b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9a05f41940bae03d517477cd3bc5bb7529aefe7ccda2c77973d6dab5d0a06f0\"" Feb 13 16:00:59.644600 containerd[1719]: time="2025-02-13T16:00:59.643420469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-a-254057132e,Uid:be172158f3f95cb2ae212b2afc80b1ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7f985afe6ca5a77e76cb0cf238e5d5d3ae8176554d20ed27174e7807814883a\"" Feb 13 16:00:59.650391 containerd[1719]: time="2025-02-13T16:00:59.650354320Z" level=info msg="CreateContainer within sandbox \"f9a05f41940bae03d517477cd3bc5bb7529aefe7ccda2c77973d6dab5d0a06f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 16:00:59.651078 containerd[1719]: time="2025-02-13T16:00:59.651050135Z" level=info msg="CreateContainer within sandbox \"b7f985afe6ca5a77e76cb0cf238e5d5d3ae8176554d20ed27174e7807814883a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 16:00:59.653217 containerd[1719]: time="2025-02-13T16:00:59.653189681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-a-254057132e,Uid:6572e495cff4d5096bef49be53c6e917,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce19c61c9c31be86be46f4ec0645e3be1aa8d975d4197476ffd8fed54938b870\"" Feb 13 16:00:59.655495 containerd[1719]: time="2025-02-13T16:00:59.655474831Z" level=info msg="CreateContainer within sandbox \"ce19c61c9c31be86be46f4ec0645e3be1aa8d975d4197476ffd8fed54938b870\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 16:00:59.727670 containerd[1719]: time="2025-02-13T16:00:59.727609095Z" level=info msg="CreateContainer within sandbox \"f9a05f41940bae03d517477cd3bc5bb7529aefe7ccda2c77973d6dab5d0a06f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c2efed57c5a374991cf538c9f99a86d8422e3da8f5684f1c4bcd32bd3d33d6fd\"" Feb 13 16:00:59.728402 containerd[1719]: time="2025-02-13T16:00:59.728361712Z" level=info msg="StartContainer for \"c2efed57c5a374991cf538c9f99a86d8422e3da8f5684f1c4bcd32bd3d33d6fd\"" Feb 13 16:00:59.746194 containerd[1719]: time="2025-02-13T16:00:59.745295379Z" level=info msg="CreateContainer within sandbox \"b7f985afe6ca5a77e76cb0cf238e5d5d3ae8176554d20ed27174e7807814883a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"04fae79955094b4a78757ee775c9cf87b8b12d477714a9c4b96ef62249dbcab5\"" Feb 13 16:00:59.746942 containerd[1719]: time="2025-02-13T16:00:59.746912614Z" level=info msg="StartContainer for \"04fae79955094b4a78757ee775c9cf87b8b12d477714a9c4b96ef62249dbcab5\"" Feb 13 16:00:59.758319 systemd[1]: Started cri-containerd-c2efed57c5a374991cf538c9f99a86d8422e3da8f5684f1c4bcd32bd3d33d6fd.scope - libcontainer container c2efed57c5a374991cf538c9f99a86d8422e3da8f5684f1c4bcd32bd3d33d6fd. 
Feb 13 16:00:59.764847 containerd[1719]: time="2025-02-13T16:00:59.763031964Z" level=info msg="CreateContainer within sandbox \"ce19c61c9c31be86be46f4ec0645e3be1aa8d975d4197476ffd8fed54938b870\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"58cee1c8611ce9cba1f397e7096f4bd504b1bfec5bcba2ba7f1adc8b9c068cc8\"" Feb 13 16:00:59.764847 containerd[1719]: time="2025-02-13T16:00:59.763633477Z" level=info msg="StartContainer for \"58cee1c8611ce9cba1f397e7096f4bd504b1bfec5bcba2ba7f1adc8b9c068cc8\"" Feb 13 16:00:59.822260 systemd[1]: Started cri-containerd-04fae79955094b4a78757ee775c9cf87b8b12d477714a9c4b96ef62249dbcab5.scope - libcontainer container 04fae79955094b4a78757ee775c9cf87b8b12d477714a9c4b96ef62249dbcab5. Feb 13 16:00:59.834287 systemd[1]: Started cri-containerd-58cee1c8611ce9cba1f397e7096f4bd504b1bfec5bcba2ba7f1adc8b9c068cc8.scope - libcontainer container 58cee1c8611ce9cba1f397e7096f4bd504b1bfec5bcba2ba7f1adc8b9c068cc8. Feb 13 16:00:59.882391 kubelet[3017]: E0213 16:00:59.882332 3017 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.12:6443: connect: connection refused Feb 13 16:00:59.893124 containerd[1719]: time="2025-02-13T16:00:59.892976882Z" level=info msg="StartContainer for \"c2efed57c5a374991cf538c9f99a86d8422e3da8f5684f1c4bcd32bd3d33d6fd\" returns successfully" Feb 13 16:00:59.921207 containerd[1719]: time="2025-02-13T16:00:59.921158093Z" level=info msg="StartContainer for \"04fae79955094b4a78757ee775c9cf87b8b12d477714a9c4b96ef62249dbcab5\" returns successfully" Feb 13 16:00:59.927476 containerd[1719]: time="2025-02-13T16:00:59.927438930Z" level=info msg="StartContainer for \"58cee1c8611ce9cba1f397e7096f4bd504b1bfec5bcba2ba7f1adc8b9c068cc8\" returns successfully" Feb 13 16:01:00.863574 kubelet[3017]: I0213 16:01:00.863541 3017 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-254057132e" Feb 13 16:01:02.111600 kubelet[3017]: E0213 16:01:02.111553 3017 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.1-a-254057132e\" not found" node="ci-4186.1.1-a-254057132e" Feb 13 16:01:03.349820 kubelet[3017]: I0213 16:01:03.348582 3017 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.1-a-254057132e" Feb 13 16:01:03.367668 kubelet[3017]: W0213 16:01:03.367632 3017 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:01:04.347253 kubelet[3017]: I0213 16:01:04.347191 3017 apiserver.go:52] "Watching apiserver" Feb 13 16:01:04.350924 kubelet[3017]: I0213 16:01:04.350877 3017 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:01:05.355244 systemd[1]: Reloading requested from client PID 3294 ('systemctl') (unit session-9.scope)... Feb 13 16:01:05.355262 systemd[1]: Reloading... Feb 13 16:01:05.453146 zram_generator::config[3335]: No configuration found. Feb 13 16:01:05.585601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:01:05.692224 systemd[1]: Reloading finished in 336 ms. 
Feb 13 16:01:05.735395 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:01:05.748428 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:01:05.748713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:01:05.755384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:01:05.886519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:01:05.893301 (kubelet)[3401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:01:05.955508 kubelet[3401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:01:05.955508 kubelet[3401]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:01:05.955508 kubelet[3401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:01:05.955508 kubelet[3401]: I0213 16:01:05.955373 3401 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:01:05.963224 kubelet[3401]: I0213 16:01:05.962237 3401 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 16:01:05.963224 kubelet[3401]: I0213 16:01:05.962259 3401 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:01:05.963224 kubelet[3401]: I0213 16:01:05.962439 3401 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 16:01:05.964074 kubelet[3401]: I0213 16:01:05.964050 3401 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 16:01:05.968382 kubelet[3401]: I0213 16:01:05.968010 3401 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:01:05.977570 kubelet[3401]: I0213 16:01:05.977545 3401 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:01:05.978288 kubelet[3401]: I0213 16:01:05.978263 3401 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:01:05.978538 kubelet[3401]: I0213 16:01:05.978515 3401 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 16:01:05.978671 kubelet[3401]: I0213 16:01:05.978548 3401 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:01:05.978671 kubelet[3401]: I0213 16:01:05.978587 3401 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 16:01:05.979798 kubelet[3401]: I0213 16:01:05.978750 3401 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:01:05.979798 kubelet[3401]: I0213 16:01:05.978900 3401 kubelet.go:396] "Attempting to sync node with API server" Feb 13 16:01:05.979798 kubelet[3401]: I0213 16:01:05.978930 3401 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:01:05.979798 kubelet[3401]: I0213 16:01:05.978979 3401 kubelet.go:312] "Adding apiserver pod source" Feb 13 16:01:05.979798 kubelet[3401]: I0213 16:01:05.978995 3401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:01:05.998687 kubelet[3401]: I0213 16:01:05.998663 3401 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 16:01:05.999003 kubelet[3401]: I0213 16:01:05.998986 3401 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:01:05.999615 kubelet[3401]: I0213 16:01:05.999595 3401 server.go:1256] "Started kubelet" Feb 13 16:01:06.004086 kubelet[3401]: I0213 16:01:06.004035 3401 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:01:06.005221 kubelet[3401]: I0213 16:01:06.005201 3401 server.go:461] "Adding debug handlers to kubelet server" Feb 13 16:01:06.006517 kubelet[3401]: I0213 16:01:06.006495 3401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:01:06.007703 kubelet[3401]: I0213 16:01:06.007682 3401 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Feb 13 16:01:06.008022 kubelet[3401]: I0213 16:01:06.008001 3401 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:01:06.015929 kubelet[3401]: I0213 16:01:06.015740 3401 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 16:01:06.016207 kubelet[3401]: E0213 16:01:06.016090 3401 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:01:06.016868 kubelet[3401]: I0213 16:01:06.016766 3401 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 16:01:06.017431 kubelet[3401]: I0213 16:01:06.017215 3401 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 16:01:06.018335 kubelet[3401]: I0213 16:01:06.018058 3401 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:01:06.020130 kubelet[3401]: I0213 16:01:06.018738 3401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:01:06.021146 kubelet[3401]: I0213 16:01:06.020747 3401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:01:06.022303 kubelet[3401]: I0213 16:01:06.022278 3401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 16:01:06.022383 kubelet[3401]: I0213 16:01:06.022311 3401 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:01:06.022383 kubelet[3401]: I0213 16:01:06.022350 3401 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 16:01:06.022461 kubelet[3401]: E0213 16:01:06.022412 3401 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:01:06.028452 kubelet[3401]: I0213 16:01:06.028426 3401 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:01:06.085646 kubelet[3401]: I0213 16:01:06.085380 3401 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:01:06.085646 kubelet[3401]: I0213 16:01:06.085400 3401 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:01:06.085646 kubelet[3401]: I0213 16:01:06.085417 3401 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:01:06.085646 kubelet[3401]: I0213 16:01:06.085556 3401 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 16:01:06.085646 kubelet[3401]: I0213 16:01:06.085574 3401 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 16:01:06.085646 kubelet[3401]: I0213 16:01:06.085580 3401 policy_none.go:49] "None policy: Start" Feb 13 16:01:06.086494 kubelet[3401]: I0213 16:01:06.086412 3401 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:01:06.086494 kubelet[3401]: I0213 16:01:06.086451 3401 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:01:06.086734 kubelet[3401]: I0213 16:01:06.086711 3401 state_mem.go:75] "Updated machine memory state" Feb 13 16:01:06.090933 kubelet[3401]: I0213 16:01:06.090907 3401 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:01:06.091426 kubelet[3401]: I0213 16:01:06.091210 3401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:01:06.119722 kubelet[3401]: 
I0213 16:01:06.119680 3401 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.1-a-254057132e" Feb 13 16:01:06.123382 kubelet[3401]: I0213 16:01:06.123352 3401 topology_manager.go:215] "Topology Admit Handler" podUID="be172158f3f95cb2ae212b2afc80b1ba" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.123510 kubelet[3401]: I0213 16:01:06.123461 3401 topology_manager.go:215] "Topology Admit Handler" podUID="6572e495cff4d5096bef49be53c6e917" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.123559 kubelet[3401]: I0213 16:01:06.123515 3401 topology_manager.go:215] "Topology Admit Handler" podUID="91e31d0271884f74fb669c54bc5fe84b" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.141575 kubelet[3401]: W0213 16:01:06.141376 3401 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:01:06.141575 kubelet[3401]: E0213 16:01:06.141475 3401 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.1-a-254057132e\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.142812 kubelet[3401]: W0213 16:01:06.142568 3401 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:01:06.142812 kubelet[3401]: W0213 16:01:06.142693 3401 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:01:06.145969 kubelet[3401]: I0213 16:01:06.145942 3401 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.1-a-254057132e" Feb 13 16:01:06.146071 kubelet[3401]: I0213 16:01:06.146042 3401 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.1-a-254057132e" Feb 13 16:01:06.319755 kubelet[3401]: I0213 16:01:06.318704 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be172158f3f95cb2ae212b2afc80b1ba-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-a-254057132e\" (UID: \"be172158f3f95cb2ae212b2afc80b1ba\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.319755 kubelet[3401]: I0213 16:01:06.318790 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be172158f3f95cb2ae212b2afc80b1ba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-a-254057132e\" (UID: \"be172158f3f95cb2ae212b2afc80b1ba\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.319755 kubelet[3401]: I0213 16:01:06.318848 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91e31d0271884f74fb669c54bc5fe84b-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-a-254057132e\" (UID: \"91e31d0271884f74fb669c54bc5fe84b\") " pod="kube-system/kube-scheduler-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.319755 kubelet[3401]: I0213 16:01:06.318894 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.319755 kubelet[3401]: I0213 16:01:06.318939 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be172158f3f95cb2ae212b2afc80b1ba-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-a-254057132e\" (UID: \"be172158f3f95cb2ae212b2afc80b1ba\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.320146 kubelet[3401]: I0213 16:01:06.318976 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.320146 kubelet[3401]: I0213 16:01:06.319009 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.320146 kubelet[3401]: I0213 16:01:06.319045 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.320146 kubelet[3401]: I0213 16:01:06.319081 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6572e495cff4d5096bef49be53c6e917-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-a-254057132e\" (UID: \"6572e495cff4d5096bef49be53c6e917\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" Feb 13 16:01:06.362944 sudo[3433]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 16:01:06.363378 sudo[3433]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 16:01:06.887296 sudo[3433]: pam_unix(sudo:session): session closed for user root Feb 13 16:01:06.983314 kubelet[3401]: I0213 16:01:06.983255 3401 apiserver.go:52] "Watching apiserver" Feb 13 16:01:07.017994 kubelet[3401]: I0213 16:01:07.017928 3401 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 16:01:07.071570 kubelet[3401]: W0213 16:01:07.071334 3401 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 16:01:07.071570 kubelet[3401]: E0213 16:01:07.071428 3401 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.1-a-254057132e\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" Feb 13 16:01:07.095667 kubelet[3401]: I0213 16:01:07.094013 3401 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.1-a-254057132e" podStartSLOduration=1.093962178 podStartE2EDuration="1.093962178s" podCreationTimestamp="2025-02-13 16:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:01:07.09314156 +0000 UTC m=+1.194460171" watchObservedRunningTime="2025-02-13 16:01:07.093962178 +0000 UTC m=+1.195280789" Feb 13 16:01:07.122404 kubelet[3401]: I0213 16:01:07.122374 3401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.1-a-254057132e" podStartSLOduration=4.122334314 podStartE2EDuration="4.122334314s" podCreationTimestamp="2025-02-13 16:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:01:07.111037161 +0000 UTC m=+1.212355672" watchObservedRunningTime="2025-02-13 16:01:07.122334314 +0000 UTC m=+1.223652925" Feb 13 16:01:08.086634 sudo[2406]: pam_unix(sudo:session): session closed for user root Feb 13 16:01:08.186370 sshd[2405]: Connection closed by 10.200.16.10 port 50596 Feb 13 16:01:08.187239 sshd-session[2403]: pam_unix(sshd:session): session closed for user core Feb 13 16:01:08.192349 systemd[1]: sshd@6-10.200.8.12:22-10.200.16.10:50596.service: Deactivated successfully. Feb 13 16:01:08.194799 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 16:01:08.195026 systemd[1]: session-9.scope: Consumed 4.600s CPU time, 187.5M memory peak, 0B memory swap peak. Feb 13 16:01:08.195640 systemd-logind[1702]: Session 9 logged out. Waiting for processes to exit. Feb 13 16:01:08.196733 systemd-logind[1702]: Removed session 9. Feb 13 16:01:11.276163 kubelet[3401]: I0213 16:01:11.275894 3401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-254057132e" podStartSLOduration=5.275848646 podStartE2EDuration="5.275848646s" podCreationTimestamp="2025-02-13 16:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:01:07.122930428 +0000 UTC m=+1.224248939" watchObservedRunningTime="2025-02-13 16:01:11.275848646 +0000 UTC m=+5.377167157" Feb 13 16:01:17.953035 kubelet[3401]: I0213 16:01:17.952996 3401 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 16:01:17.953655 containerd[1719]: time="2025-02-13T16:01:17.953439494Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 16:01:17.953993 kubelet[3401]: I0213 16:01:17.953656 3401 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 16:01:18.617899 kubelet[3401]: I0213 16:01:18.616671 3401 topology_manager.go:215] "Topology Admit Handler" podUID="39b38daf-96d2-45cf-9972-4a7802c57634" podNamespace="kube-system" podName="kube-proxy-hv92v" Feb 13 16:01:18.619377 kubelet[3401]: I0213 16:01:18.619044 3401 topology_manager.go:215] "Topology Admit Handler" podUID="ed6ce577-4f34-4690-8ce1-47c2d3b20f42" podNamespace="kube-system" podName="cilium-sbgrj" Feb 13 16:01:18.632892 systemd[1]: Created slice kubepods-besteffort-pod39b38daf_96d2_45cf_9972_4a7802c57634.slice - libcontainer container kubepods-besteffort-pod39b38daf_96d2_45cf_9972_4a7802c57634.slice. 
Feb 13 16:01:18.648406 systemd[1]: Created slice kubepods-burstable-poded6ce577_4f34_4690_8ce1_47c2d3b20f42.slice - libcontainer container kubepods-burstable-poded6ce577_4f34_4690_8ce1_47c2d3b20f42.slice. Feb 13 16:01:18.692955 kubelet[3401]: I0213 16:01:18.692895 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39b38daf-96d2-45cf-9972-4a7802c57634-xtables-lock\") pod \"kube-proxy-hv92v\" (UID: \"39b38daf-96d2-45cf-9972-4a7802c57634\") " pod="kube-system/kube-proxy-hv92v" Feb 13 16:01:18.693129 kubelet[3401]: I0213 16:01:18.692964 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39b38daf-96d2-45cf-9972-4a7802c57634-lib-modules\") pod \"kube-proxy-hv92v\" (UID: \"39b38daf-96d2-45cf-9972-4a7802c57634\") " pod="kube-system/kube-proxy-hv92v" Feb 13 16:01:18.693129 kubelet[3401]: I0213 16:01:18.692995 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-run\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693129 kubelet[3401]: I0213 16:01:18.693023 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cni-path\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693129 kubelet[3401]: I0213 16:01:18.693047 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-etc-cni-netd\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693129 kubelet[3401]: I0213 16:01:18.693074 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-clustermesh-secrets\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693129 kubelet[3401]: I0213 16:01:18.693098 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39b38daf-96d2-45cf-9972-4a7802c57634-kube-proxy\") pod \"kube-proxy-hv92v\" (UID: \"39b38daf-96d2-45cf-9972-4a7802c57634\") " pod="kube-system/kube-proxy-hv92v" Feb 13 16:01:18.693400 kubelet[3401]: I0213 16:01:18.693143 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-bpf-maps\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693400 kubelet[3401]: I0213 16:01:18.693185 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-lib-modules\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 
13 16:01:18.693400 kubelet[3401]: I0213 16:01:18.693216 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-host-proc-sys-kernel\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693400 kubelet[3401]: I0213 16:01:18.693242 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-hostproc\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693400 kubelet[3401]: I0213 16:01:18.693271 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-hubble-tls\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693400 kubelet[3401]: I0213 16:01:18.693313 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-xtables-lock\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693632 kubelet[3401]: I0213 16:01:18.693342 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-host-proc-sys-net\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693632 kubelet[3401]: I0213 16:01:18.693375 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brdxf\" (UniqueName: \"kubernetes.io/projected/39b38daf-96d2-45cf-9972-4a7802c57634-kube-api-access-brdxf\") pod \"kube-proxy-hv92v\" (UID: \"39b38daf-96d2-45cf-9972-4a7802c57634\") " pod="kube-system/kube-proxy-hv92v" Feb 13 16:01:18.693632 kubelet[3401]: I0213 16:01:18.693407 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-config-path\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693632 kubelet[3401]: I0213 16:01:18.693437 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrmsd\" (UniqueName: \"kubernetes.io/projected/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-kube-api-access-hrmsd\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.693632 kubelet[3401]: I0213 16:01:18.693470 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-cgroup\") pod \"cilium-sbgrj\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " pod="kube-system/cilium-sbgrj" Feb 13 16:01:18.945055 containerd[1719]: time="2025-02-13T16:01:18.944992515Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-hv92v,Uid:39b38daf-96d2-45cf-9972-4a7802c57634,Namespace:kube-system,Attempt:0,}" Feb 13 16:01:18.953722 containerd[1719]: time="2025-02-13T16:01:18.953681569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sbgrj,Uid:ed6ce577-4f34-4690-8ce1-47c2d3b20f42,Namespace:kube-system,Attempt:0,}" Feb 13 16:01:18.997279 kubelet[3401]: I0213 16:01:18.997181 3401 topology_manager.go:215] "Topology Admit Handler" podUID="98885483-0b40-4a74-b9b3-323cd062a471" podNamespace="kube-system" podName="cilium-operator-5cc964979-9p9n2" Feb 13 16:01:19.028211 systemd[1]: Created slice kubepods-besteffort-pod98885483_0b40_4a74_b9b3_323cd062a471.slice - libcontainer container kubepods-besteffort-pod98885483_0b40_4a74_b9b3_323cd062a471.slice. Feb 13 16:01:19.048789 containerd[1719]: time="2025-02-13T16:01:19.046826824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:01:19.049452 containerd[1719]: time="2025-02-13T16:01:19.049186766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:01:19.049452 containerd[1719]: time="2025-02-13T16:01:19.049207367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:19.051872 containerd[1719]: time="2025-02-13T16:01:19.051443506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:19.051872 containerd[1719]: time="2025-02-13T16:01:19.051631210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:01:19.051872 containerd[1719]: time="2025-02-13T16:01:19.051675511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:01:19.051872 containerd[1719]: time="2025-02-13T16:01:19.051688211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:19.051872 containerd[1719]: time="2025-02-13T16:01:19.051774512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:19.087760 systemd[1]: Started cri-containerd-fd633446663033d5f90765164dcaefc696a84f9fc9a343a0a74a2b208313ac36.scope - libcontainer container fd633446663033d5f90765164dcaefc696a84f9fc9a343a0a74a2b208313ac36. 
Feb 13 16:01:19.098400 kubelet[3401]: I0213 16:01:19.098346 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98885483-0b40-4a74-b9b3-323cd062a471-cilium-config-path\") pod \"cilium-operator-5cc964979-9p9n2\" (UID: \"98885483-0b40-4a74-b9b3-323cd062a471\") " pod="kube-system/cilium-operator-5cc964979-9p9n2" Feb 13 16:01:19.098400 kubelet[3401]: I0213 16:01:19.098404 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vj7f\" (UniqueName: \"kubernetes.io/projected/98885483-0b40-4a74-b9b3-323cd062a471-kube-api-access-8vj7f\") pod \"cilium-operator-5cc964979-9p9n2\" (UID: \"98885483-0b40-4a74-b9b3-323cd062a471\") " pod="kube-system/cilium-operator-5cc964979-9p9n2" Feb 13 16:01:19.098961 systemd[1]: Started cri-containerd-ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58.scope - libcontainer container ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58. Feb 13 16:01:19.134246 containerd[1719]: time="2025-02-13T16:01:19.133834371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hv92v,Uid:39b38daf-96d2-45cf-9972-4a7802c57634,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd633446663033d5f90765164dcaefc696a84f9fc9a343a0a74a2b208313ac36\"" Feb 13 16:01:19.140202 containerd[1719]: time="2025-02-13T16:01:19.140035481Z" level=info msg="CreateContainer within sandbox \"fd633446663033d5f90765164dcaefc696a84f9fc9a343a0a74a2b208313ac36\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 16:01:19.145117 containerd[1719]: time="2025-02-13T16:01:19.145049570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sbgrj,Uid:ed6ce577-4f34-4690-8ce1-47c2d3b20f42,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\"" Feb 13 16:01:19.146854 containerd[1719]: time="2025-02-13T16:01:19.146699599Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 16:01:19.192499 containerd[1719]: time="2025-02-13T16:01:19.192462212Z" level=info msg="CreateContainer within sandbox \"fd633446663033d5f90765164dcaefc696a84f9fc9a343a0a74a2b208313ac36\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5731e001f7e7f98beaf4d72c08f58946bda99bd80c6c5f804e64c0ab612b2e1a\"" Feb 13 16:01:19.194216 containerd[1719]: time="2025-02-13T16:01:19.193076123Z" level=info msg="StartContainer for \"5731e001f7e7f98beaf4d72c08f58946bda99bd80c6c5f804e64c0ab612b2e1a\"" Feb 13 16:01:19.227293 systemd[1]: Started cri-containerd-5731e001f7e7f98beaf4d72c08f58946bda99bd80c6c5f804e64c0ab612b2e1a.scope - libcontainer container 5731e001f7e7f98beaf4d72c08f58946bda99bd80c6c5f804e64c0ab612b2e1a. Feb 13 16:01:19.262639 containerd[1719]: time="2025-02-13T16:01:19.262494257Z" level=info msg="StartContainer for \"5731e001f7e7f98beaf4d72c08f58946bda99bd80c6c5f804e64c0ab612b2e1a\" returns successfully" Feb 13 16:01:19.334274 containerd[1719]: time="2025-02-13T16:01:19.333928926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-9p9n2,Uid:98885483-0b40-4a74-b9b3-323cd062a471,Namespace:kube-system,Attempt:0,}" Feb 13 16:01:19.390347 containerd[1719]: time="2025-02-13T16:01:19.390255027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:01:19.390347 containerd[1719]: time="2025-02-13T16:01:19.390303528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:01:19.391138 containerd[1719]: time="2025-02-13T16:01:19.390329629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:19.391297 containerd[1719]: time="2025-02-13T16:01:19.391175144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:19.410469 systemd[1]: Started cri-containerd-714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac.scope - libcontainer container 714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac. Feb 13 16:01:19.472817 containerd[1719]: time="2025-02-13T16:01:19.472752994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-9p9n2,Uid:98885483-0b40-4a74-b9b3-323cd062a471,Namespace:kube-system,Attempt:0,} returns sandbox id \"714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac\"" Feb 13 16:01:25.045001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131602308.mount: Deactivated successfully. Feb 13 16:01:27.251466 containerd[1719]: time="2025-02-13T16:01:27.251403256Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:01:27.253597 containerd[1719]: time="2025-02-13T16:01:27.253389100Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 16:01:27.256805 containerd[1719]: time="2025-02-13T16:01:27.256726874Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:01:27.258736 containerd[1719]: time="2025-02-13T16:01:27.258577215Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.111838715s" Feb 13 16:01:27.258736 containerd[1719]: time="2025-02-13T16:01:27.258622516Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 16:01:27.261168 containerd[1719]: time="2025-02-13T16:01:27.260928767Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 16:01:27.262601 containerd[1719]: time="2025-02-13T16:01:27.262427901Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:01:27.309356 containerd[1719]: time="2025-02-13T16:01:27.309250441Z" level=info msg="CreateContainer within sandbox 
\"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc\"" Feb 13 16:01:27.310863 containerd[1719]: time="2025-02-13T16:01:27.310032158Z" level=info msg="StartContainer for \"e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc\"" Feb 13 16:01:27.343523 systemd[1]: Started cri-containerd-e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc.scope - libcontainer container e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc. Feb 13 16:01:27.374013 containerd[1719]: time="2025-02-13T16:01:27.373872177Z" level=info msg="StartContainer for \"e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc\" returns successfully" Feb 13 16:01:27.383379 systemd[1]: cri-containerd-e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc.scope: Deactivated successfully. Feb 13 16:01:28.127098 kubelet[3401]: I0213 16:01:28.127045 3401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hv92v" podStartSLOduration=10.12699551 podStartE2EDuration="10.12699551s" podCreationTimestamp="2025-02-13 16:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:01:20.102972993 +0000 UTC m=+14.204291504" watchObservedRunningTime="2025-02-13 16:01:28.12699551 +0000 UTC m=+22.228314021" Feb 13 16:01:28.293071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc-rootfs.mount: Deactivated successfully. Feb 13 16:01:31.072159 containerd[1719]: time="2025-02-13T16:01:31.072064447Z" level=info msg="shim disconnected" id=e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc namespace=k8s.io Feb 13 16:01:31.072701 containerd[1719]: time="2025-02-13T16:01:31.072171149Z" level=warning msg="cleaning up after shim disconnected" id=e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc namespace=k8s.io Feb 13 16:01:31.072701 containerd[1719]: time="2025-02-13T16:01:31.072190750Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:01:31.118689 containerd[1719]: time="2025-02-13T16:01:31.118509679Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:01:31.153442 containerd[1719]: time="2025-02-13T16:01:31.153394654Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2\"" Feb 13 16:01:31.154138 containerd[1719]: time="2025-02-13T16:01:31.153883865Z" level=info msg="StartContainer for \"6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2\"" Feb 13 16:01:31.186148 systemd[1]: run-containerd-runc-k8s.io-6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2-runc.NTH3SS.mount: Deactivated successfully. Feb 13 16:01:31.193284 systemd[1]: Started cri-containerd-6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2.scope - libcontainer container 6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2. 
Feb 13 16:01:31.222991 containerd[1719]: time="2025-02-13T16:01:31.222858397Z" level=info msg="StartContainer for \"6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2\" returns successfully" Feb 13 16:01:31.229565 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:01:31.230381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:01:31.230556 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:01:31.235669 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:01:31.235951 systemd[1]: cri-containerd-6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2.scope: Deactivated successfully. Feb 13 16:01:31.260359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2-rootfs.mount: Deactivated successfully. Feb 13 16:01:31.265858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:01:31.280447 containerd[1719]: time="2025-02-13T16:01:31.280346175Z" level=info msg="shim disconnected" id=6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2 namespace=k8s.io Feb 13 16:01:31.280447 containerd[1719]: time="2025-02-13T16:01:31.280441777Z" level=warning msg="cleaning up after shim disconnected" id=6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2 namespace=k8s.io Feb 13 16:01:31.280703 containerd[1719]: time="2025-02-13T16:01:31.280454277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:01:32.123247 containerd[1719]: time="2025-02-13T16:01:32.123092000Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:01:32.167702 containerd[1719]: time="2025-02-13T16:01:32.167652090Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c\"" Feb 13 16:01:32.168372 containerd[1719]: time="2025-02-13T16:01:32.168253603Z" level=info msg="StartContainer for \"44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c\"" Feb 13 16:01:32.213270 systemd[1]: Started cri-containerd-44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c.scope - libcontainer container 44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c. Feb 13 16:01:32.246726 systemd[1]: cri-containerd-44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c.scope: Deactivated successfully. Feb 13 16:01:32.250432 containerd[1719]: time="2025-02-13T16:01:32.250352127Z" level=info msg="StartContainer for \"44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c\" returns successfully" Feb 13 16:01:32.270922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c-rootfs.mount: Deactivated successfully. 
Feb 13 16:01:32.285619 containerd[1719]: time="2025-02-13T16:01:32.285555109Z" level=info msg="shim disconnected" id=44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c namespace=k8s.io Feb 13 16:01:32.285845 containerd[1719]: time="2025-02-13T16:01:32.285644711Z" level=warning msg="cleaning up after shim disconnected" id=44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c namespace=k8s.io Feb 13 16:01:32.285845 containerd[1719]: time="2025-02-13T16:01:32.285659512Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:01:33.132771 containerd[1719]: time="2025-02-13T16:01:33.132587929Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:01:33.196845 containerd[1719]: time="2025-02-13T16:01:33.196419748Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c\"" Feb 13 16:01:33.197970 containerd[1719]: time="2025-02-13T16:01:33.197942282Z" level=info msg="StartContainer for \"6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c\"" Feb 13 16:01:33.235395 systemd[1]: Started cri-containerd-6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c.scope - libcontainer container 6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c. Feb 13 16:01:33.276726 systemd[1]: cri-containerd-6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c.scope: Deactivated successfully. Feb 13 16:01:33.281138 containerd[1719]: time="2025-02-13T16:01:33.280765822Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded6ce577_4f34_4690_8ce1_47c2d3b20f42.slice/cri-containerd-6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c.scope/memory.events\": no such file or directory" Feb 13 16:01:33.284389 containerd[1719]: time="2025-02-13T16:01:33.284353002Z" level=info msg="StartContainer for \"6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c\" returns successfully" Feb 13 16:01:33.509762 containerd[1719]: time="2025-02-13T16:01:33.509696608Z" level=info msg="shim disconnected" id=6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c namespace=k8s.io Feb 13 16:01:33.509762 containerd[1719]: time="2025-02-13T16:01:33.509759510Z" level=warning msg="cleaning up after shim disconnected" id=6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c namespace=k8s.io Feb 13 16:01:33.509762 containerd[1719]: time="2025-02-13T16:01:33.509769110Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:01:33.819931 containerd[1719]: time="2025-02-13T16:01:33.819793698Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:01:33.822059 containerd[1719]: time="2025-02-13T16:01:33.821990047Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 16:01:33.825768 containerd[1719]: time="2025-02-13T16:01:33.825714230Z" level=info 
msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:01:33.827151 containerd[1719]: time="2025-02-13T16:01:33.826987358Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.56602209s" Feb 13 16:01:33.827151 containerd[1719]: time="2025-02-13T16:01:33.827027359Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 16:01:33.829505 containerd[1719]: time="2025-02-13T16:01:33.829344211Z" level=info msg="CreateContainer within sandbox \"714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 16:01:33.858618 containerd[1719]: time="2025-02-13T16:01:33.858567160Z" level=info msg="CreateContainer within sandbox \"714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\"" Feb 13 16:01:33.860455 containerd[1719]: time="2025-02-13T16:01:33.859192274Z" level=info msg="StartContainer for \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\"" Feb 13 16:01:33.886341 systemd[1]: Started cri-containerd-3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41.scope - libcontainer container 3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41. Feb 13 16:01:33.914631 containerd[1719]: time="2025-02-13T16:01:33.914582627Z" level=info msg="StartContainer for \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\" returns successfully" Feb 13 16:01:34.135458 containerd[1719]: time="2025-02-13T16:01:34.135333415Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:01:34.154618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c-rootfs.mount: Deactivated successfully. Feb 13 16:01:34.188671 containerd[1719]: time="2025-02-13T16:01:34.188620922Z" level=info msg="CreateContainer within sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\"" Feb 13 16:01:34.189609 containerd[1719]: time="2025-02-13T16:01:34.189544441Z" level=info msg="StartContainer for \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\"" Feb 13 16:01:34.257625 systemd[1]: Started cri-containerd-7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d.scope - libcontainer container 7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d. 
Feb 13 16:01:34.334790 containerd[1719]: time="2025-02-13T16:01:34.334739658Z" level=info msg="StartContainer for \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\" returns successfully" Feb 13 16:01:34.533201 kubelet[3401]: I0213 16:01:34.532123 3401 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 16:01:34.680009 kubelet[3401]: I0213 16:01:34.679471 3401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-9p9n2" podStartSLOduration=2.326631779 podStartE2EDuration="16.67939802s" podCreationTimestamp="2025-02-13 16:01:18 +0000 UTC" firstStartedPulling="2025-02-13 16:01:19.474736629 +0000 UTC m=+13.576055140" lastFinishedPulling="2025-02-13 16:01:33.82750277 +0000 UTC m=+27.928821381" observedRunningTime="2025-02-13 16:01:34.293893509 +0000 UTC m=+28.395212020" watchObservedRunningTime="2025-02-13 16:01:34.67939802 +0000 UTC m=+28.780716631" Feb 13 16:01:34.680009 kubelet[3401]: I0213 16:01:34.679738 3401 topology_manager.go:215] "Topology Admit Handler" podUID="28ac3de1-6ecd-4546-bde8-dc21776fd476" podNamespace="kube-system" podName="coredns-76f75df574-xzzbd" Feb 13 16:01:34.690201 systemd[1]: Created slice kubepods-burstable-pod28ac3de1_6ecd_4546_bde8_dc21776fd476.slice - libcontainer container kubepods-burstable-pod28ac3de1_6ecd_4546_bde8_dc21776fd476.slice. Feb 13 16:01:34.699453 kubelet[3401]: W0213 16:01:34.699421 3401 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.1-a-254057132e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.1-a-254057132e' and this object Feb 13 16:01:34.699599 kubelet[3401]: E0213 16:01:34.699480 3401 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.1-a-254057132e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.1-a-254057132e' and this object Feb 13 16:01:34.704812 kubelet[3401]: I0213 16:01:34.704777 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28ac3de1-6ecd-4546-bde8-dc21776fd476-config-volume\") pod \"coredns-76f75df574-xzzbd\" (UID: \"28ac3de1-6ecd-4546-bde8-dc21776fd476\") " pod="kube-system/coredns-76f75df574-xzzbd" Feb 13 16:01:34.704942 kubelet[3401]: I0213 16:01:34.704833 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2p4h\" (UniqueName: \"kubernetes.io/projected/28ac3de1-6ecd-4546-bde8-dc21776fd476-kube-api-access-j2p4h\") pod \"coredns-76f75df574-xzzbd\" (UID: \"28ac3de1-6ecd-4546-bde8-dc21776fd476\") " pod="kube-system/coredns-76f75df574-xzzbd" Feb 13 16:01:34.707556 kubelet[3401]: I0213 16:01:34.707527 3401 topology_manager.go:215] "Topology Admit Handler" podUID="c677e3d1-d554-4568-a8a6-2d183e874531" podNamespace="kube-system" podName="coredns-76f75df574-5bm6q" Feb 13 16:01:34.717917 systemd[1]: Created slice kubepods-burstable-podc677e3d1_d554_4568_a8a6_2d183e874531.slice - libcontainer container kubepods-burstable-podc677e3d1_d554_4568_a8a6_2d183e874531.slice. 
Feb 13 16:01:34.805577 kubelet[3401]: I0213 16:01:34.805421 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2k7c\" (UniqueName: \"kubernetes.io/projected/c677e3d1-d554-4568-a8a6-2d183e874531-kube-api-access-l2k7c\") pod \"coredns-76f75df574-5bm6q\" (UID: \"c677e3d1-d554-4568-a8a6-2d183e874531\") " pod="kube-system/coredns-76f75df574-5bm6q" Feb 13 16:01:34.805577 kubelet[3401]: I0213 16:01:34.805509 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c677e3d1-d554-4568-a8a6-2d183e874531-config-volume\") pod \"coredns-76f75df574-5bm6q\" (UID: \"c677e3d1-d554-4568-a8a6-2d183e874531\") " pod="kube-system/coredns-76f75df574-5bm6q" Feb 13 16:01:35.214823 kubelet[3401]: I0213 16:01:35.214525 3401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-sbgrj" podStartSLOduration=9.101592402 podStartE2EDuration="17.214465738s" podCreationTimestamp="2025-02-13 16:01:18 +0000 UTC" firstStartedPulling="2025-02-13 16:01:19.14619569 +0000 UTC m=+13.247514201" lastFinishedPulling="2025-02-13 16:01:27.259069026 +0000 UTC m=+21.360387537" observedRunningTime="2025-02-13 16:01:35.208338711 +0000 UTC m=+29.309657222" watchObservedRunningTime="2025-02-13 16:01:35.214465738 +0000 UTC m=+29.315784249" Feb 13 16:01:35.895898 containerd[1719]: time="2025-02-13T16:01:35.895846597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xzzbd,Uid:28ac3de1-6ecd-4546-bde8-dc21776fd476,Namespace:kube-system,Attempt:0,}" Feb 13 16:01:35.923874 containerd[1719]: time="2025-02-13T16:01:35.923534972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5bm6q,Uid:c677e3d1-d554-4568-a8a6-2d183e874531,Namespace:kube-system,Attempt:0,}" Feb 13 16:01:37.790154 systemd-networkd[1456]: cilium_host: Link UP Feb 13 16:01:37.790344 systemd-networkd[1456]: cilium_net: Link UP Feb 13 16:01:37.790547 systemd-networkd[1456]: cilium_net: Gained carrier Feb 13 16:01:37.790722 systemd-networkd[1456]: cilium_host: Gained carrier Feb 13 16:01:37.984674 systemd-networkd[1456]: cilium_vxlan: Link UP Feb 13 16:01:37.984685 systemd-networkd[1456]: cilium_vxlan: Gained carrier Feb 13 16:01:38.254146 kernel: NET: Registered PF_ALG protocol family Feb 13 16:01:38.543000 systemd-networkd[1456]: cilium_net: Gained IPv6LL Feb 13 16:01:38.671375 systemd-networkd[1456]: cilium_host: Gained IPv6LL Feb 13 16:01:39.055972 systemd-networkd[1456]: lxc_health: Link UP Feb 13 16:01:39.068348 systemd-networkd[1456]: lxc_health: Gained carrier Feb 13 16:01:39.480304 systemd-networkd[1456]: lxc12773dcd3da4: Link UP Feb 13 16:01:39.485725 kernel: eth0: renamed from tmp2209f Feb 13 16:01:39.491865 systemd-networkd[1456]: lxc12773dcd3da4: Gained carrier Feb 13 16:01:39.530131 kernel: eth0: renamed from tmp79928 Feb 13 16:01:39.538244 systemd-networkd[1456]: lxc4b6edba1de46: Link UP Feb 13 16:01:39.541656 systemd-networkd[1456]: lxc4b6edba1de46: Gained carrier Feb 13 16:01:40.014282 systemd-networkd[1456]: cilium_vxlan: Gained IPv6LL Feb 13 16:01:40.719477 systemd-networkd[1456]: lxc_health: Gained IPv6LL Feb 13 16:01:41.294404 systemd-networkd[1456]: lxc12773dcd3da4: Gained IPv6LL Feb 13 16:01:41.360293 systemd-networkd[1456]: lxc4b6edba1de46: Gained IPv6LL Feb 13 16:01:43.291851 containerd[1719]: time="2025-02-13T16:01:43.291693444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:01:43.292474 containerd[1719]: time="2025-02-13T16:01:43.291855348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:01:43.292474 containerd[1719]: time="2025-02-13T16:01:43.291907249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:43.292474 containerd[1719]: time="2025-02-13T16:01:43.292031851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:43.301817 containerd[1719]: time="2025-02-13T16:01:43.299543107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:01:43.301817 containerd[1719]: time="2025-02-13T16:01:43.299604508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:01:43.301817 containerd[1719]: time="2025-02-13T16:01:43.299622208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:43.301817 containerd[1719]: time="2025-02-13T16:01:43.299713610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:01:43.342983 systemd[1]: Started cri-containerd-7992810df1c63b4c091236af3b04eef0e4e67de81958a9a70c1083dd57aa1d23.scope - libcontainer container 7992810df1c63b4c091236af3b04eef0e4e67de81958a9a70c1083dd57aa1d23. Feb 13 16:01:43.367276 systemd[1]: Started cri-containerd-2209fc34f7e127d128dda67acb433d6efab2670aa676805fb6abb1e949be7777.scope - libcontainer container 2209fc34f7e127d128dda67acb433d6efab2670aa676805fb6abb1e949be7777. 
Feb 13 16:01:43.449132 containerd[1719]: time="2025-02-13T16:01:43.447772173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xzzbd,Uid:28ac3de1-6ecd-4546-bde8-dc21776fd476,Namespace:kube-system,Attempt:0,} returns sandbox id \"2209fc34f7e127d128dda67acb433d6efab2670aa676805fb6abb1e949be7777\"" Feb 13 16:01:43.449399 containerd[1719]: time="2025-02-13T16:01:43.448028578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5bm6q,Uid:c677e3d1-d554-4568-a8a6-2d183e874531,Namespace:kube-system,Attempt:0,} returns sandbox id \"7992810df1c63b4c091236af3b04eef0e4e67de81958a9a70c1083dd57aa1d23\"" Feb 13 16:01:43.453941 containerd[1719]: time="2025-02-13T16:01:43.453888199Z" level=info msg="CreateContainer within sandbox \"2209fc34f7e127d128dda67acb433d6efab2670aa676805fb6abb1e949be7777\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:01:43.455297 containerd[1719]: time="2025-02-13T16:01:43.455241227Z" level=info msg="CreateContainer within sandbox \"7992810df1c63b4c091236af3b04eef0e4e67de81958a9a70c1083dd57aa1d23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:01:43.533265 containerd[1719]: time="2025-02-13T16:01:43.533217740Z" level=info msg="CreateContainer within sandbox \"2209fc34f7e127d128dda67acb433d6efab2670aa676805fb6abb1e949be7777\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac52e64ef13816adf8f6e998d040263250608c0c99f82f9ea234f6a8a9186141\"" Feb 13 16:01:43.534135 containerd[1719]: time="2025-02-13T16:01:43.533865453Z" level=info msg="StartContainer for \"ac52e64ef13816adf8f6e998d040263250608c0c99f82f9ea234f6a8a9186141\"" Feb 13 16:01:43.535513 containerd[1719]: time="2025-02-13T16:01:43.535405685Z" level=info msg="CreateContainer within sandbox \"7992810df1c63b4c091236af3b04eef0e4e67de81958a9a70c1083dd57aa1d23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ca0bb9b1db5f2a1a7df415c7433160a143b999d244bb90836cbf102745f95f7\"" Feb 13 16:01:43.537518 containerd[1719]: time="2025-02-13T16:01:43.536417406Z" level=info msg="StartContainer for \"0ca0bb9b1db5f2a1a7df415c7433160a143b999d244bb90836cbf102745f95f7\"" Feb 13 16:01:43.570292 systemd[1]: Started cri-containerd-ac52e64ef13816adf8f6e998d040263250608c0c99f82f9ea234f6a8a9186141.scope - libcontainer container ac52e64ef13816adf8f6e998d040263250608c0c99f82f9ea234f6a8a9186141. Feb 13 16:01:43.575092 systemd[1]: Started cri-containerd-0ca0bb9b1db5f2a1a7df415c7433160a143b999d244bb90836cbf102745f95f7.scope - libcontainer container 0ca0bb9b1db5f2a1a7df415c7433160a143b999d244bb90836cbf102745f95f7. 
Feb 13 16:01:43.618246 containerd[1719]: time="2025-02-13T16:01:43.618202598Z" level=info msg="StartContainer for \"0ca0bb9b1db5f2a1a7df415c7433160a143b999d244bb90836cbf102745f95f7\" returns successfully" Feb 13 16:01:43.618543 containerd[1719]: time="2025-02-13T16:01:43.618437703Z" level=info msg="StartContainer for \"ac52e64ef13816adf8f6e998d040263250608c0c99f82f9ea234f6a8a9186141\" returns successfully" Feb 13 16:01:44.202194 kubelet[3401]: I0213 16:01:44.201831 3401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xzzbd" podStartSLOduration=26.201772269 podStartE2EDuration="26.201772269s" podCreationTimestamp="2025-02-13 16:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:01:44.20087125 +0000 UTC m=+38.302189761" watchObservedRunningTime="2025-02-13 16:01:44.201772269 +0000 UTC m=+38.303090980" Feb 13 16:01:44.202194 kubelet[3401]: I0213 16:01:44.201951 3401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5bm6q" podStartSLOduration=26.201923172 podStartE2EDuration="26.201923172s" podCreationTimestamp="2025-02-13 16:01:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:01:44.186292548 +0000 UTC m=+38.287611159" watchObservedRunningTime="2025-02-13 16:01:44.201923172 +0000 UTC m=+38.303241683" Feb 13 16:01:44.304383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592547676.mount: Deactivated successfully. Feb 13 16:01:45.060822 kubelet[3401]: I0213 16:01:45.060616 3401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 16:01:58.416654 update_engine[1704]: I20250213 16:01:58.416571 1704 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 16:01:58.416654 update_engine[1704]: I20250213 16:01:58.416643 1704 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 16:01:58.417346 update_engine[1704]: I20250213 16:01:58.416893 1704 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 16:01:58.417619 update_engine[1704]: I20250213 16:01:58.417579 1704 omaha_request_params.cc:62] Current group set to beta Feb 13 16:01:58.418129 update_engine[1704]: I20250213 16:01:58.417737 1704 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 16:01:58.418129 update_engine[1704]: I20250213 16:01:58.417763 1704 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 16:01:58.418129 update_engine[1704]: I20250213 16:01:58.417788 1704 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 16:01:58.418129 update_engine[1704]: I20250213 16:01:58.417837 1704 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 16:01:58.418129 update_engine[1704]: I20250213 16:01:58.417935 1704 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 16:01:58.418129 update_engine[1704]: I20250213 16:01:58.417949 1704 omaha_request_action.cc:272] Request: Feb 13 16:01:58.418129 update_engine[1704]: Feb 13 16:01:58.418129 update_engine[1704]: Feb 13 16:01:58.418129 update_engine[1704]: Feb 13 16:01:58.418129 update_engine[1704]: Feb 13 16:01:58.418129 update_engine[1704]: Feb 13 16:01:58.418129 update_engine[1704]: Feb 13 16:01:58.418129 update_engine[1704]: Feb 13 16:01:58.418129 update_engine[1704]: Feb 13 16:01:58.418129 update_engine[1704]: I20250213 16:01:58.417960 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 16:01:58.418816 locksmithd[1742]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 16:01:58.419972 update_engine[1704]: I20250213 16:01:58.419931 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 16:01:58.420424 update_engine[1704]: I20250213 16:01:58.420387 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 16:01:58.435377 update_engine[1704]: E20250213 16:01:58.435281 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 16:01:58.435672 update_engine[1704]: I20250213 16:01:58.435482 1704 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 16:02:08.410924 update_engine[1704]: I20250213 16:02:08.410840 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 16:02:08.411511 update_engine[1704]: I20250213 16:02:08.411170 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 16:02:08.411511 update_engine[1704]: I20250213 16:02:08.411495 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 16:02:08.475543 update_engine[1704]: E20250213 16:02:08.475466 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 16:02:08.475711 update_engine[1704]: I20250213 16:02:08.475580 1704 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 16:02:18.403148 update_engine[1704]: I20250213 16:02:18.402973 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 16:02:18.403706 update_engine[1704]: I20250213 16:02:18.403365 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 16:02:18.403780 update_engine[1704]: I20250213 16:02:18.403726 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 16:02:18.426042 update_engine[1704]: E20250213 16:02:18.425931 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 16:02:18.426289 update_engine[1704]: I20250213 16:02:18.426216 1704 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 16:02:28.409996 update_engine[1704]: I20250213 16:02:28.409900 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 16:02:28.410581 update_engine[1704]: I20250213 16:02:28.410295 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 16:02:28.410698 update_engine[1704]: I20250213 16:02:28.410658 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 16:02:28.417242 update_engine[1704]: E20250213 16:02:28.417200 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 16:02:28.417406 update_engine[1704]: I20250213 16:02:28.417271 1704 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 16:02:28.417406 update_engine[1704]: I20250213 16:02:28.417284 1704 omaha_request_action.cc:617] Omaha request response: Feb 13 16:02:28.417406 update_engine[1704]: E20250213 16:02:28.417371 1704 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 16:02:28.417406 update_engine[1704]: I20250213 16:02:28.417399 1704 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 16:02:28.417551 update_engine[1704]: I20250213 16:02:28.417407 1704 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 16:02:28.417551 update_engine[1704]: I20250213 16:02:28.417415 1704 update_attempter.cc:306] Processing Done. Feb 13 16:02:28.417551 update_engine[1704]: E20250213 16:02:28.417434 1704 update_attempter.cc:619] Update failed. Feb 13 16:02:28.417551 update_engine[1704]: I20250213 16:02:28.417443 1704 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 16:02:28.417551 update_engine[1704]: I20250213 16:02:28.417450 1704 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 16:02:28.417551 update_engine[1704]: I20250213 16:02:28.417459 1704 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 16:02:28.417771 update_engine[1704]: I20250213 16:02:28.417552 1704 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 16:02:28.417771 update_engine[1704]: I20250213 16:02:28.417580 1704 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 16:02:28.417771 update_engine[1704]: I20250213 16:02:28.417588 1704 omaha_request_action.cc:272] Request: Feb 13 16:02:28.417771 update_engine[1704]: Feb 13 16:02:28.417771 update_engine[1704]: Feb 13 16:02:28.417771 update_engine[1704]: Feb 13 16:02:28.417771 update_engine[1704]: Feb 13 16:02:28.417771 update_engine[1704]: Feb 13 16:02:28.417771 update_engine[1704]: Feb 13 16:02:28.417771 update_engine[1704]: I20250213 16:02:28.417598 1704 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 16:02:28.418211 update_engine[1704]: I20250213 16:02:28.417774 1704 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 16:02:28.418211 update_engine[1704]: I20250213 16:02:28.417990 1704 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 16:02:28.418414 locksmithd[1742]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 16:02:28.429015 update_engine[1704]: E20250213 16:02:28.428977 1704 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 16:02:28.429201 update_engine[1704]: I20250213 16:02:28.429039 1704 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 16:02:28.429201 update_engine[1704]: I20250213 16:02:28.429050 1704 omaha_request_action.cc:617] Omaha request response: Feb 13 16:02:28.429201 update_engine[1704]: I20250213 16:02:28.429061 1704 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 16:02:28.429201 update_engine[1704]: I20250213 16:02:28.429068 1704 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 16:02:28.429201 update_engine[1704]: I20250213 16:02:28.429075 1704 update_attempter.cc:306] Processing Done. Feb 13 16:02:28.429201 update_engine[1704]: I20250213 16:02:28.429083 1704 update_attempter.cc:310] Error event sent. Feb 13 16:02:28.429201 update_engine[1704]: I20250213 16:02:28.429096 1704 update_check_scheduler.cc:74] Next update check in 46m36s Feb 13 16:02:28.429496 locksmithd[1742]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 16:03:40.500463 systemd[1]: Started sshd@7-10.200.8.12:22-10.200.16.10:36848.service - OpenSSH per-connection server daemon (10.200.16.10:36848). Feb 13 16:03:41.128526 sshd[4764]: Accepted publickey for core from 10.200.16.10 port 36848 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:41.130410 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:41.136411 systemd-logind[1702]: New session 10 of user core. Feb 13 16:03:41.141289 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 16:03:41.665168 sshd[4766]: Connection closed by 10.200.16.10 port 36848 Feb 13 16:03:41.669057 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Feb 13 16:03:41.671914 systemd[1]: sshd@7-10.200.8.12:22-10.200.16.10:36848.service: Deactivated successfully. Feb 13 16:03:41.674404 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 16:03:41.676015 systemd-logind[1702]: Session 10 logged out. Waiting for processes to exit. Feb 13 16:03:41.677228 systemd-logind[1702]: Removed session 10. Feb 13 16:03:46.784417 systemd[1]: Started sshd@8-10.200.8.12:22-10.200.16.10:36854.service - OpenSSH per-connection server daemon (10.200.16.10:36854). Feb 13 16:03:47.410405 sshd[4778]: Accepted publickey for core from 10.200.16.10 port 36854 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:47.412072 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:47.416574 systemd-logind[1702]: New session 11 of user core. Feb 13 16:03:47.424699 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 16:03:47.932586 sshd[4780]: Connection closed by 10.200.16.10 port 36854 Feb 13 16:03:47.933455 sshd-session[4778]: pam_unix(sshd:session): session closed for user core Feb 13 16:03:47.937599 systemd[1]: sshd@8-10.200.8.12:22-10.200.16.10:36854.service: Deactivated successfully. Feb 13 16:03:47.939637 systemd[1]: session-11.scope: Deactivated successfully. 
Feb 13 16:03:47.940581 systemd-logind[1702]: Session 11 logged out. Waiting for processes to exit. Feb 13 16:03:47.941610 systemd-logind[1702]: Removed session 11. Feb 13 16:03:53.046456 systemd[1]: Started sshd@9-10.200.8.12:22-10.200.16.10:58234.service - OpenSSH per-connection server daemon (10.200.16.10:58234). Feb 13 16:03:53.676972 sshd[4794]: Accepted publickey for core from 10.200.16.10 port 58234 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:53.678810 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:53.684895 systemd-logind[1702]: New session 12 of user core. Feb 13 16:03:53.690437 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 16:03:54.180708 sshd[4796]: Connection closed by 10.200.16.10 port 58234 Feb 13 16:03:54.181585 sshd-session[4794]: pam_unix(sshd:session): session closed for user core Feb 13 16:03:54.185063 systemd[1]: sshd@9-10.200.8.12:22-10.200.16.10:58234.service: Deactivated successfully. Feb 13 16:03:54.187899 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 16:03:54.189806 systemd-logind[1702]: Session 12 logged out. Waiting for processes to exit. Feb 13 16:03:54.190856 systemd-logind[1702]: Removed session 12. Feb 13 16:03:59.295367 systemd[1]: Started sshd@10-10.200.8.12:22-10.200.16.10:35148.service - OpenSSH per-connection server daemon (10.200.16.10:35148). Feb 13 16:03:59.925506 sshd[4807]: Accepted publickey for core from 10.200.16.10 port 35148 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:03:59.926952 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:03:59.931768 systemd-logind[1702]: New session 13 of user core. Feb 13 16:03:59.937285 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 16:04:00.443321 sshd[4810]: Connection closed by 10.200.16.10 port 35148 Feb 13 16:04:00.444227 sshd-session[4807]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:00.447860 systemd[1]: sshd@10-10.200.8.12:22-10.200.16.10:35148.service: Deactivated successfully. Feb 13 16:04:00.451417 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 16:04:00.452949 systemd-logind[1702]: Session 13 logged out. Waiting for processes to exit. Feb 13 16:04:00.454342 systemd-logind[1702]: Removed session 13. Feb 13 16:04:05.558502 systemd[1]: Started sshd@11-10.200.8.12:22-10.200.16.10:35154.service - OpenSSH per-connection server daemon (10.200.16.10:35154). Feb 13 16:04:06.192262 sshd[4822]: Accepted publickey for core from 10.200.16.10 port 35154 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:06.193854 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:06.198575 systemd-logind[1702]: New session 14 of user core. Feb 13 16:04:06.207278 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 16:04:06.693132 sshd[4826]: Connection closed by 10.200.16.10 port 35154 Feb 13 16:04:06.694000 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:06.697451 systemd[1]: sshd@11-10.200.8.12:22-10.200.16.10:35154.service: Deactivated successfully. Feb 13 16:04:06.700021 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 16:04:06.701847 systemd-logind[1702]: Session 14 logged out. Waiting for processes to exit. Feb 13 16:04:06.703378 systemd-logind[1702]: Removed session 14. 
Feb 13 16:04:11.810410 systemd[1]: Started sshd@12-10.200.8.12:22-10.200.16.10:56300.service - OpenSSH per-connection server daemon (10.200.16.10:56300). Feb 13 16:04:12.438848 sshd[4839]: Accepted publickey for core from 10.200.16.10 port 56300 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:12.443070 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:12.449287 systemd-logind[1702]: New session 15 of user core. Feb 13 16:04:12.452437 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 16:04:12.965098 sshd[4841]: Connection closed by 10.200.16.10 port 56300 Feb 13 16:04:12.965963 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:12.970696 systemd[1]: sshd@12-10.200.8.12:22-10.200.16.10:56300.service: Deactivated successfully. Feb 13 16:04:12.973240 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 16:04:12.974260 systemd-logind[1702]: Session 15 logged out. Waiting for processes to exit. Feb 13 16:04:12.975550 systemd-logind[1702]: Removed session 15. Feb 13 16:04:18.079581 systemd[1]: Started sshd@13-10.200.8.12:22-10.200.16.10:56306.service - OpenSSH per-connection server daemon (10.200.16.10:56306). Feb 13 16:04:18.714271 sshd[4853]: Accepted publickey for core from 10.200.16.10 port 56306 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:18.716037 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:18.724579 systemd-logind[1702]: New session 16 of user core. Feb 13 16:04:18.728284 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 16:04:19.266139 sshd[4858]: Connection closed by 10.200.16.10 port 56306 Feb 13 16:04:19.267178 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:19.272072 systemd[1]: sshd@13-10.200.8.12:22-10.200.16.10:56306.service: Deactivated successfully. Feb 13 16:04:19.274676 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 16:04:19.275701 systemd-logind[1702]: Session 16 logged out. Waiting for processes to exit. Feb 13 16:04:19.277036 systemd-logind[1702]: Removed session 16. Feb 13 16:04:24.383447 systemd[1]: Started sshd@14-10.200.8.12:22-10.200.16.10:52812.service - OpenSSH per-connection server daemon (10.200.16.10:52812). Feb 13 16:04:25.008692 sshd[4872]: Accepted publickey for core from 10.200.16.10 port 52812 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:25.010180 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:25.014786 systemd-logind[1702]: New session 17 of user core. Feb 13 16:04:25.020248 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 16:04:25.514697 sshd[4874]: Connection closed by 10.200.16.10 port 52812 Feb 13 16:04:25.515533 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:25.518601 systemd[1]: sshd@14-10.200.8.12:22-10.200.16.10:52812.service: Deactivated successfully. Feb 13 16:04:25.520949 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 16:04:25.522913 systemd-logind[1702]: Session 17 logged out. Waiting for processes to exit. Feb 13 16:04:25.523986 systemd-logind[1702]: Removed session 17. Feb 13 16:04:30.634437 systemd[1]: Started sshd@15-10.200.8.12:22-10.200.16.10:35212.service - OpenSSH per-connection server daemon (10.200.16.10:35212).
Feb 13 16:04:31.261060 sshd[4886]: Accepted publickey for core from 10.200.16.10 port 35212 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:31.262581 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:31.267676 systemd-logind[1702]: New session 18 of user core. Feb 13 16:04:31.274275 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 16:04:31.762220 sshd[4888]: Connection closed by 10.200.16.10 port 35212 Feb 13 16:04:31.763139 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:31.767806 systemd[1]: sshd@15-10.200.8.12:22-10.200.16.10:35212.service: Deactivated successfully. Feb 13 16:04:31.770572 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 16:04:31.771424 systemd-logind[1702]: Session 18 logged out. Waiting for processes to exit. Feb 13 16:04:31.772602 systemd-logind[1702]: Removed session 18. Feb 13 16:04:31.881457 systemd[1]: Started sshd@16-10.200.8.12:22-10.200.16.10:35216.service - OpenSSH per-connection server daemon (10.200.16.10:35216). Feb 13 16:04:32.514929 sshd[4900]: Accepted publickey for core from 10.200.16.10 port 35216 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:32.516748 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:32.521510 systemd-logind[1702]: New session 19 of user core. Feb 13 16:04:32.527265 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 16:04:33.054887 sshd[4902]: Connection closed by 10.200.16.10 port 35216 Feb 13 16:04:33.055888 sshd-session[4900]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:33.058883 systemd[1]: sshd@16-10.200.8.12:22-10.200.16.10:35216.service: Deactivated successfully. Feb 13 16:04:33.061343 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 16:04:33.063047 systemd-logind[1702]: Session 19 logged out. Waiting for processes to exit. Feb 13 16:04:33.064406 systemd-logind[1702]: Removed session 19. Feb 13 16:04:33.178454 systemd[1]: Started sshd@17-10.200.8.12:22-10.200.16.10:35218.service - OpenSSH per-connection server daemon (10.200.16.10:35218). Feb 13 16:04:33.802971 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 35218 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:33.804694 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:33.809388 systemd-logind[1702]: New session 20 of user core. Feb 13 16:04:33.815265 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 16:04:34.319026 sshd[4913]: Connection closed by 10.200.16.10 port 35218 Feb 13 16:04:34.319951 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:34.323638 systemd[1]: sshd@17-10.200.8.12:22-10.200.16.10:35218.service: Deactivated successfully. Feb 13 16:04:34.326156 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 16:04:34.327744 systemd-logind[1702]: Session 20 logged out. Waiting for processes to exit. Feb 13 16:04:34.328986 systemd-logind[1702]: Removed session 20. Feb 13 16:04:39.431336 systemd[1]: Started sshd@18-10.200.8.12:22-10.200.16.10:48776.service - OpenSSH per-connection server daemon (10.200.16.10:48776). 
Feb 13 16:04:40.067865 sshd[4924]: Accepted publickey for core from 10.200.16.10 port 48776 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:40.069560 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:40.074148 systemd-logind[1702]: New session 21 of user core. Feb 13 16:04:40.084319 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 16:04:40.585239 sshd[4926]: Connection closed by 10.200.16.10 port 48776 Feb 13 16:04:40.586181 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:40.590021 systemd[1]: sshd@18-10.200.8.12:22-10.200.16.10:48776.service: Deactivated successfully. Feb 13 16:04:40.592823 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 16:04:40.594745 systemd-logind[1702]: Session 21 logged out. Waiting for processes to exit. Feb 13 16:04:40.595926 systemd-logind[1702]: Removed session 21. Feb 13 16:04:40.700337 systemd[1]: Started sshd@19-10.200.8.12:22-10.200.16.10:48790.service - OpenSSH per-connection server daemon (10.200.16.10:48790). Feb 13 16:04:41.339132 sshd[4938]: Accepted publickey for core from 10.200.16.10 port 48790 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:41.339481 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:41.345643 systemd-logind[1702]: New session 22 of user core. Feb 13 16:04:41.351260 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 16:04:42.119850 sshd[4940]: Connection closed by 10.200.16.10 port 48790 Feb 13 16:04:42.121130 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:42.124781 systemd[1]: sshd@19-10.200.8.12:22-10.200.16.10:48790.service: Deactivated successfully. Feb 13 16:04:42.127176 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 16:04:42.128891 systemd-logind[1702]: Session 22 logged out. Waiting for processes to exit. Feb 13 16:04:42.129988 systemd-logind[1702]: Removed session 22. Feb 13 16:04:42.239421 systemd[1]: Started sshd@20-10.200.8.12:22-10.200.16.10:48796.service - OpenSSH per-connection server daemon (10.200.16.10:48796). Feb 13 16:04:42.866655 sshd[4949]: Accepted publickey for core from 10.200.16.10 port 48796 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:42.868441 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:42.873868 systemd-logind[1702]: New session 23 of user core. Feb 13 16:04:42.880599 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 16:04:45.012967 sshd[4951]: Connection closed by 10.200.16.10 port 48796 Feb 13 16:04:45.013896 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:45.017713 systemd[1]: sshd@20-10.200.8.12:22-10.200.16.10:48796.service: Deactivated successfully. Feb 13 16:04:45.020609 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 16:04:45.022784 systemd-logind[1702]: Session 23 logged out. Waiting for processes to exit. Feb 13 16:04:45.024450 systemd-logind[1702]: Removed session 23. Feb 13 16:04:45.135456 systemd[1]: Started sshd@21-10.200.8.12:22-10.200.16.10:48798.service - OpenSSH per-connection server daemon (10.200.16.10:48798). 
Feb 13 16:04:45.765624 sshd[4967]: Accepted publickey for core from 10.200.16.10 port 48798 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:45.767145 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:45.772186 systemd-logind[1702]: New session 24 of user core. Feb 13 16:04:45.776682 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 16:04:46.372305 sshd[4969]: Connection closed by 10.200.16.10 port 48798 Feb 13 16:04:46.373395 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:46.377022 systemd[1]: sshd@21-10.200.8.12:22-10.200.16.10:48798.service: Deactivated successfully. Feb 13 16:04:46.379780 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 16:04:46.381379 systemd-logind[1702]: Session 24 logged out. Waiting for processes to exit. Feb 13 16:04:46.382766 systemd-logind[1702]: Removed session 24. Feb 13 16:04:46.489740 systemd[1]: Started sshd@22-10.200.8.12:22-10.200.16.10:48806.service - OpenSSH per-connection server daemon (10.200.16.10:48806). Feb 13 16:04:47.117819 sshd[4978]: Accepted publickey for core from 10.200.16.10 port 48806 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:47.119461 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:47.123800 systemd-logind[1702]: New session 25 of user core. Feb 13 16:04:47.131264 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 16:04:47.639685 sshd[4980]: Connection closed by 10.200.16.10 port 48806 Feb 13 16:04:47.640539 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:47.643891 systemd[1]: sshd@22-10.200.8.12:22-10.200.16.10:48806.service: Deactivated successfully. Feb 13 16:04:47.646550 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 16:04:47.648573 systemd-logind[1702]: Session 25 logged out. Waiting for processes to exit. Feb 13 16:04:47.649666 systemd-logind[1702]: Removed session 25. Feb 13 16:04:52.757472 systemd[1]: Started sshd@23-10.200.8.12:22-10.200.16.10:56954.service - OpenSSH per-connection server daemon (10.200.16.10:56954). Feb 13 16:04:53.384275 sshd[4997]: Accepted publickey for core from 10.200.16.10 port 56954 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:53.385959 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:53.391489 systemd-logind[1702]: New session 26 of user core. Feb 13 16:04:53.400274 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 16:04:53.889751 sshd[4999]: Connection closed by 10.200.16.10 port 56954 Feb 13 16:04:53.890654 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Feb 13 16:04:53.894259 systemd[1]: sshd@23-10.200.8.12:22-10.200.16.10:56954.service: Deactivated successfully. Feb 13 16:04:53.896975 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 16:04:53.899065 systemd-logind[1702]: Session 26 logged out. Waiting for processes to exit. Feb 13 16:04:53.900654 systemd-logind[1702]: Removed session 26. Feb 13 16:04:59.013412 systemd[1]: Started sshd@24-10.200.8.12:22-10.200.16.10:52800.service - OpenSSH per-connection server daemon (10.200.16.10:52800). 
Feb 13 16:04:59.637573 sshd[5014]: Accepted publickey for core from 10.200.16.10 port 52800 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:04:59.639355 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:04:59.643754 systemd-logind[1702]: New session 27 of user core. Feb 13 16:04:59.649254 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 16:05:00.139984 sshd[5016]: Connection closed by 10.200.16.10 port 52800 Feb 13 16:05:00.140929 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:00.144568 systemd[1]: sshd@24-10.200.8.12:22-10.200.16.10:52800.service: Deactivated successfully. Feb 13 16:05:00.147337 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 16:05:00.149154 systemd-logind[1702]: Session 27 logged out. Waiting for processes to exit. Feb 13 16:05:00.150467 systemd-logind[1702]: Removed session 27. Feb 13 16:05:05.261433 systemd[1]: Started sshd@25-10.200.8.12:22-10.200.16.10:52814.service - OpenSSH per-connection server daemon (10.200.16.10:52814). Feb 13 16:05:05.885929 sshd[5026]: Accepted publickey for core from 10.200.16.10 port 52814 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:05:05.887498 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:05.892829 systemd-logind[1702]: New session 28 of user core. Feb 13 16:05:05.897311 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 16:05:06.408586 sshd[5028]: Connection closed by 10.200.16.10 port 52814 Feb 13 16:05:06.409618 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:06.413266 systemd[1]: sshd@25-10.200.8.12:22-10.200.16.10:52814.service: Deactivated successfully. Feb 13 16:05:06.415792 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 16:05:06.417442 systemd-logind[1702]: Session 28 logged out. Waiting for processes to exit. Feb 13 16:05:06.418595 systemd-logind[1702]: Removed session 28. Feb 13 16:05:06.528435 systemd[1]: Started sshd@26-10.200.8.12:22-10.200.16.10:52828.service - OpenSSH per-connection server daemon (10.200.16.10:52828). Feb 13 16:05:07.153811 sshd[5041]: Accepted publickey for core from 10.200.16.10 port 52828 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:05:07.155368 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:07.160372 systemd-logind[1702]: New session 29 of user core. Feb 13 16:05:07.166289 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 16:05:08.889649 containerd[1719]: time="2025-02-13T16:05:08.889440525Z" level=info msg="StopContainer for \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\" with timeout 30 (s)" Feb 13 16:05:08.890554 containerd[1719]: time="2025-02-13T16:05:08.890463045Z" level=info msg="Stop container \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\" with signal terminated" Feb 13 16:05:08.912671 systemd[1]: cri-containerd-3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41.scope: Deactivated successfully. 
Feb 13 16:05:08.928446 containerd[1719]: time="2025-02-13T16:05:08.928053988Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:05:08.937204 containerd[1719]: time="2025-02-13T16:05:08.937013165Z" level=info msg="StopContainer for \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\" with timeout 2 (s)" Feb 13 16:05:08.937664 containerd[1719]: time="2025-02-13T16:05:08.937634477Z" level=info msg="Stop container \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\" with signal terminated" Feb 13 16:05:08.948365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41-rootfs.mount: Deactivated successfully. Feb 13 16:05:08.952922 systemd-networkd[1456]: lxc_health: Link DOWN Feb 13 16:05:08.952931 systemd-networkd[1456]: lxc_health: Lost carrier Feb 13 16:05:08.964727 systemd[1]: cri-containerd-7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d.scope: Deactivated successfully. Feb 13 16:05:08.964998 systemd[1]: cri-containerd-7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d.scope: Consumed 7.535s CPU time. Feb 13 16:05:08.986577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d-rootfs.mount: Deactivated successfully. Feb 13 16:05:10.895128 containerd[1719]: time="2025-02-13T16:05:10.895000465Z" level=info msg="shim disconnected" id=7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d namespace=k8s.io Feb 13 16:05:10.896222 containerd[1719]: time="2025-02-13T16:05:10.895149368Z" level=warning msg="cleaning up after shim disconnected" id=7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d namespace=k8s.io Feb 13 16:05:10.896222 containerd[1719]: time="2025-02-13T16:05:10.895168369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:10.896222 containerd[1719]: time="2025-02-13T16:05:10.895489876Z" level=info msg="shim disconnected" id=3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41 namespace=k8s.io Feb 13 16:05:10.896222 containerd[1719]: time="2025-02-13T16:05:10.895554177Z" level=warning msg="cleaning up after shim disconnected" id=3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41 namespace=k8s.io Feb 13 16:05:10.896222 containerd[1719]: time="2025-02-13T16:05:10.895567277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:10.932346 sshd[5043]: Connection closed by 10.200.16.10 port 52828 Feb 13 16:05:10.933038 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:10.936234 systemd[1]: sshd@26-10.200.8.12:22-10.200.16.10:52828.service: Deactivated successfully. Feb 13 16:05:10.938488 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 16:05:10.940336 systemd-logind[1702]: Session 29 logged out. Waiting for processes to exit. Feb 13 16:05:10.941456 systemd-logind[1702]: Removed session 29. 
Feb 13 16:05:10.946156 containerd[1719]: time="2025-02-13T16:05:10.946098149Z" level=info msg="StopContainer for \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\" returns successfully" Feb 13 16:05:10.947181 containerd[1719]: time="2025-02-13T16:05:10.947154271Z" level=info msg="StopPodSandbox for \"714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac\"" Feb 13 16:05:10.947283 containerd[1719]: time="2025-02-13T16:05:10.947197972Z" level=info msg="Container to stop \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:05:10.950658 containerd[1719]: time="2025-02-13T16:05:10.949477020Z" level=info msg="StopContainer for \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\" returns successfully" Feb 13 16:05:10.950658 containerd[1719]: time="2025-02-13T16:05:10.950186935Z" level=info msg="StopPodSandbox for \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\"" Feb 13 16:05:10.950658 containerd[1719]: time="2025-02-13T16:05:10.950227136Z" level=info msg="Container to stop \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:05:10.950658 containerd[1719]: time="2025-02-13T16:05:10.950268037Z" level=info msg="Container to stop \"44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:05:10.950658 containerd[1719]: time="2025-02-13T16:05:10.950279737Z" level=info msg="Container to stop \"6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:05:10.950658 containerd[1719]: time="2025-02-13T16:05:10.950293738Z" level=info msg="Container to stop \"6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:05:10.950658 containerd[1719]: time="2025-02-13T16:05:10.950307038Z" level=info msg="Container to stop \"e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 16:05:10.951481 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac-shm.mount: Deactivated successfully. Feb 13 16:05:10.956492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58-shm.mount: Deactivated successfully. Feb 13 16:05:10.963855 systemd[1]: cri-containerd-ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58.scope: Deactivated successfully. Feb 13 16:05:10.965206 systemd[1]: cri-containerd-714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac.scope: Deactivated successfully. Feb 13 16:05:10.991783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58-rootfs.mount: Deactivated successfully. Feb 13 16:05:10.998206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac-rootfs.mount: Deactivated successfully. 
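The StopContainer / StopPodSandbox entries above show containerd terminating the cilium containers with SIGTERM and a grace period ("with timeout 30 (s)", "with signal terminated") before the sandboxes and their shm mounts are torn down. A minimal Go sketch of that stop-with-signal flow against the containerd socket — assuming the k8s.io namespace used in these entries and substituting one of the container IDs from the log — might look like:

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd socket the CRI layer uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace, as in the log above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID taken from the StopContainer entries; substitute as needed.
	container, err := client.LoadContainer(ctx, "3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Register for the exit event first, then deliver SIGTERM.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	// Roughly the "timeout 30 (s)" behaviour: escalate to SIGKILL if the task lingers.
	select {
	case status := <-exitCh:
		log.Printf("container exited with code %d", status.ExitCode())
	case <-time.After(30 * time.Second):
		_ = task.Kill(ctx, syscall.SIGKILL)
	}
}
```

This is a sketch of the generic containerd stop sequence, not a reproduction of the kubelet's exact code path.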
Feb 13 16:05:11.544510 kubelet[3401]: E0213 16:05:11.163018 3401 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:05:11.544510 kubelet[3401]: I0213 16:05:11.363914 3401 setters.go:568] "Node became not ready" node="ci-4186.1.1-a-254057132e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T16:05:11Z","lastTransitionTime":"2025-02-13T16:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 16:05:11.043493 systemd[1]: Started sshd@27-10.200.8.12:22-10.200.16.10:33090.service - OpenSSH per-connection server daemon (10.200.16.10:33090). Feb 13 16:05:11.680404 sshd[5173]: Accepted publickey for core from 10.200.16.10 port 33090 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:05:11.681881 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:11.686730 systemd-logind[1702]: New session 30 of user core. Feb 13 16:05:11.693486 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 16:05:11.898527 containerd[1719]: time="2025-02-13T16:05:11.897537225Z" level=info msg="shim disconnected" id=714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac namespace=k8s.io Feb 13 16:05:11.898527 containerd[1719]: time="2025-02-13T16:05:11.897608326Z" level=warning msg="cleaning up after shim disconnected" id=714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac namespace=k8s.io Feb 13 16:05:11.898527 containerd[1719]: time="2025-02-13T16:05:11.897620026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:11.900200 containerd[1719]: time="2025-02-13T16:05:11.900139480Z" level=info msg="shim disconnected" id=ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58 namespace=k8s.io Feb 13 16:05:11.900468 containerd[1719]: time="2025-02-13T16:05:11.900313684Z" level=warning msg="cleaning up after shim disconnected" id=ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58 namespace=k8s.io Feb 13 16:05:11.900468 containerd[1719]: time="2025-02-13T16:05:11.900331784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:11.919423 containerd[1719]: time="2025-02-13T16:05:11.919337587Z" level=info msg="TearDown network for sandbox \"714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac\" successfully" Feb 13 16:05:11.919423 containerd[1719]: time="2025-02-13T16:05:11.919378488Z" level=info msg="StopPodSandbox for \"714303bf0e8f13995c7c5e5c28c4201da2b17311e81d4cf4326a2cb9661010ac\" returns successfully" Feb 13 16:05:11.920241 containerd[1719]: time="2025-02-13T16:05:11.919866698Z" level=info msg="TearDown network for sandbox \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" successfully" Feb 13 16:05:11.920241 containerd[1719]: time="2025-02-13T16:05:11.919887799Z" level=info msg="StopPodSandbox for \"ed8f40e4596174d539096e5459be7541b2099288ba169f3d9db8a9ef8c551b58\" returns successfully" Feb 13 16:05:12.079487 kubelet[3401]: I0213 16:05:12.078082 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98885483-0b40-4a74-b9b3-323cd062a471-cilium-config-path\") pod \"98885483-0b40-4a74-b9b3-323cd062a471\" (UID: 
\"98885483-0b40-4a74-b9b3-323cd062a471\") " Feb 13 16:05:12.079487 kubelet[3401]: I0213 16:05:12.078155 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-hostproc\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079487 kubelet[3401]: I0213 16:05:12.078183 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-xtables-lock\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079487 kubelet[3401]: I0213 16:05:12.078209 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-host-proc-sys-net\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079487 kubelet[3401]: I0213 16:05:12.078239 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrmsd\" (UniqueName: \"kubernetes.io/projected/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-kube-api-access-hrmsd\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079487 kubelet[3401]: I0213 16:05:12.078264 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cni-path\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079893 kubelet[3401]: I0213 16:05:12.078290 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-etc-cni-netd\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079893 kubelet[3401]: I0213 16:05:12.078314 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-lib-modules\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079893 kubelet[3401]: I0213 16:05:12.078339 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-run\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079893 kubelet[3401]: I0213 16:05:12.078375 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-host-proc-sys-kernel\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.079893 kubelet[3401]: I0213 16:05:12.078408 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-cgroup\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 
16:05:12.079893 kubelet[3401]: I0213 16:05:12.078437 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-config-path\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.080151 kubelet[3401]: I0213 16:05:12.078470 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-clustermesh-secrets\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.080151 kubelet[3401]: I0213 16:05:12.078495 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-bpf-maps\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.080151 kubelet[3401]: I0213 16:05:12.078527 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-hubble-tls\") pod \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\" (UID: \"ed6ce577-4f34-4690-8ce1-47c2d3b20f42\") " Feb 13 16:05:12.080151 kubelet[3401]: I0213 16:05:12.078558 3401 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vj7f\" (UniqueName: \"kubernetes.io/projected/98885483-0b40-4a74-b9b3-323cd062a471-kube-api-access-8vj7f\") pod \"98885483-0b40-4a74-b9b3-323cd062a471\" (UID: \"98885483-0b40-4a74-b9b3-323cd062a471\") " Feb 13 16:05:12.080151 kubelet[3401]: I0213 16:05:12.079706 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.083027 kubelet[3401]: I0213 16:05:12.082687 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.083027 kubelet[3401]: I0213 16:05:12.082830 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.083027 kubelet[3401]: I0213 16:05:12.082855 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.083232 kubelet[3401]: I0213 16:05:12.083172 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-hostproc" (OuterVolumeSpecName: "hostproc") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.083232 kubelet[3401]: I0213 16:05:12.083209 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.083322 kubelet[3401]: I0213 16:05:12.083231 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.084371 kubelet[3401]: I0213 16:05:12.083977 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.084371 kubelet[3401]: I0213 16:05:12.084151 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cni-path" (OuterVolumeSpecName: "cni-path") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.084371 kubelet[3401]: I0213 16:05:12.084182 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 16:05:12.088241 kubelet[3401]: I0213 16:05:12.087075 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98885483-0b40-4a74-b9b3-323cd062a471-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "98885483-0b40-4a74-b9b3-323cd062a471" (UID: "98885483-0b40-4a74-b9b3-323cd062a471"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 16:05:12.091974 systemd[1]: var-lib-kubelet-pods-98885483\x2d0b40\x2d4a74\x2db9b3\x2d323cd062a471-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8vj7f.mount: Deactivated successfully. 
Feb 13 16:05:12.099396 kubelet[3401]: I0213 16:05:12.098984 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98885483-0b40-4a74-b9b3-323cd062a471-kube-api-access-8vj7f" (OuterVolumeSpecName: "kube-api-access-8vj7f") pod "98885483-0b40-4a74-b9b3-323cd062a471" (UID: "98885483-0b40-4a74-b9b3-323cd062a471"). InnerVolumeSpecName "kube-api-access-8vj7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:05:12.100724 kubelet[3401]: I0213 16:05:12.100485 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 16:05:12.105746 kubelet[3401]: I0213 16:05:12.105696 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:05:12.108209 kubelet[3401]: I0213 16:05:12.107439 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 16:05:12.108209 kubelet[3401]: I0213 16:05:12.107942 3401 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-kube-api-access-hrmsd" (OuterVolumeSpecName: "kube-api-access-hrmsd") pod "ed6ce577-4f34-4690-8ce1-47c2d3b20f42" (UID: "ed6ce577-4f34-4690-8ce1-47c2d3b20f42"). InnerVolumeSpecName "kube-api-access-hrmsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 16:05:12.108388 systemd[1]: var-lib-kubelet-pods-ed6ce577\x2d4f34\x2d4690\x2d8ce1\x2d47c2d3b20f42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhrmsd.mount: Deactivated successfully. Feb 13 16:05:12.108521 systemd[1]: var-lib-kubelet-pods-ed6ce577\x2d4f34\x2d4690\x2d8ce1\x2d47c2d3b20f42-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 16:05:12.108601 systemd[1]: var-lib-kubelet-pods-ed6ce577\x2d4f34\x2d4690\x2d8ce1\x2d47c2d3b20f42-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 16:05:12.179017 kubelet[3401]: I0213 16:05:12.178971 3401 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-etc-cni-netd\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179275 kubelet[3401]: I0213 16:05:12.179261 3401 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-lib-modules\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179383 kubelet[3401]: I0213 16:05:12.179375 3401 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cni-path\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179519 kubelet[3401]: I0213 16:05:12.179451 3401 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-cgroup\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179519 kubelet[3401]: I0213 16:05:12.179467 3401 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-run\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179519 kubelet[3401]: I0213 16:05:12.179484 3401 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-host-proc-sys-kernel\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179519 kubelet[3401]: I0213 16:05:12.179500 3401 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-cilium-config-path\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179845 kubelet[3401]: I0213 16:05:12.179704 3401 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-clustermesh-secrets\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179845 kubelet[3401]: I0213 16:05:12.179726 3401 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-bpf-maps\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179845 kubelet[3401]: I0213 16:05:12.179741 3401 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-hubble-tls\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179845 kubelet[3401]: I0213 16:05:12.179781 3401 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8vj7f\" (UniqueName: \"kubernetes.io/projected/98885483-0b40-4a74-b9b3-323cd062a471-kube-api-access-8vj7f\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179845 kubelet[3401]: I0213 16:05:12.179798 3401 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98885483-0b40-4a74-b9b3-323cd062a471-cilium-config-path\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179845 kubelet[3401]: I0213 16:05:12.179816 3401 reconciler_common.go:300] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-hostproc\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.179845 kubelet[3401]: I0213 16:05:12.179831 3401 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-xtables-lock\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.180229 kubelet[3401]: I0213 16:05:12.180097 3401 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-host-proc-sys-net\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.180229 kubelet[3401]: I0213 16:05:12.180138 3401 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hrmsd\" (UniqueName: \"kubernetes.io/projected/ed6ce577-4f34-4690-8ce1-47c2d3b20f42-kube-api-access-hrmsd\") on node \"ci-4186.1.1-a-254057132e\" DevicePath \"\"" Feb 13 16:05:12.601133 kubelet[3401]: I0213 16:05:12.599010 3401 topology_manager.go:215] "Topology Admit Handler" podUID="1aff8f3d-b347-4602-8fd1-48b3bf826b1a" podNamespace="kube-system" podName="cilium-kpntp" Feb 13 16:05:12.601133 kubelet[3401]: E0213 16:05:12.599096 3401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed6ce577-4f34-4690-8ce1-47c2d3b20f42" containerName="clean-cilium-state" Feb 13 16:05:12.601133 kubelet[3401]: E0213 16:05:12.599128 3401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98885483-0b40-4a74-b9b3-323cd062a471" containerName="cilium-operator" Feb 13 16:05:12.601133 kubelet[3401]: E0213 16:05:12.599138 3401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed6ce577-4f34-4690-8ce1-47c2d3b20f42" containerName="cilium-agent" Feb 13 16:05:12.601133 kubelet[3401]: E0213 16:05:12.599147 3401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed6ce577-4f34-4690-8ce1-47c2d3b20f42" containerName="mount-bpf-fs" Feb 13 16:05:12.601133 kubelet[3401]: E0213 16:05:12.599158 3401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed6ce577-4f34-4690-8ce1-47c2d3b20f42" containerName="mount-cgroup" Feb 13 16:05:12.601133 kubelet[3401]: E0213 16:05:12.599168 3401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed6ce577-4f34-4690-8ce1-47c2d3b20f42" containerName="apply-sysctl-overwrites" Feb 13 16:05:12.601133 kubelet[3401]: I0213 16:05:12.599200 3401 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6ce577-4f34-4690-8ce1-47c2d3b20f42" containerName="cilium-agent" Feb 13 16:05:12.601133 kubelet[3401]: I0213 16:05:12.599211 3401 memory_manager.go:354] "RemoveStaleState removing state" podUID="98885483-0b40-4a74-b9b3-323cd062a471" containerName="cilium-operator" Feb 13 16:05:12.614364 systemd[1]: Created slice kubepods-burstable-pod1aff8f3d_b347_4602_8fd1_48b3bf826b1a.slice - libcontainer container kubepods-burstable-pod1aff8f3d_b347_4602_8fd1_48b3bf826b1a.slice. 
Feb 13 16:05:12.621294 kubelet[3401]: W0213 16:05:12.621265 3401 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4186.1.1-a-254057132e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.1-a-254057132e' and this object Feb 13 16:05:12.622290 kubelet[3401]: E0213 16:05:12.621746 3401 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4186.1.1-a-254057132e" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.1-a-254057132e' and this object Feb 13 16:05:12.630447 kubelet[3401]: I0213 16:05:12.630335 3401 scope.go:117] "RemoveContainer" containerID="7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d" Feb 13 16:05:12.634361 containerd[1719]: time="2025-02-13T16:05:12.634144045Z" level=info msg="RemoveContainer for \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\"" Feb 13 16:05:12.650851 systemd[1]: Removed slice kubepods-burstable-poded6ce577_4f34_4690_8ce1_47c2d3b20f42.slice - libcontainer container kubepods-burstable-poded6ce577_4f34_4690_8ce1_47c2d3b20f42.slice. Feb 13 16:05:12.650974 systemd[1]: kubepods-burstable-poded6ce577_4f34_4690_8ce1_47c2d3b20f42.slice: Consumed 7.619s CPU time. Feb 13 16:05:12.655605 containerd[1719]: time="2025-02-13T16:05:12.655561399Z" level=info msg="RemoveContainer for \"7587569b62c33ef08745a77ae83342792c0243a95d8c7702e0db3ca07bd20a6d\" returns successfully" Feb 13 16:05:12.658155 sshd[5175]: Connection closed by 10.200.16.10 port 33090 Feb 13 16:05:12.659269 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:12.664127 kubelet[3401]: I0213 16:05:12.664088 3401 scope.go:117] "RemoveContainer" containerID="6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c" Feb 13 16:05:12.667513 containerd[1719]: time="2025-02-13T16:05:12.667449651Z" level=info msg="RemoveContainer for \"6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c\"" Feb 13 16:05:12.669270 systemd[1]: sshd@27-10.200.8.12:22-10.200.16.10:33090.service: Deactivated successfully. Feb 13 16:05:12.672033 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 16:05:12.674006 systemd-logind[1702]: Session 30 logged out. Waiting for processes to exit. Feb 13 16:05:12.674887 systemd[1]: Removed slice kubepods-besteffort-pod98885483_0b40_4a74_b9b3_323cd062a471.slice - libcontainer container kubepods-besteffort-pod98885483_0b40_4a74_b9b3_323cd062a471.slice. Feb 13 16:05:12.677642 systemd-logind[1702]: Removed session 30. 
Feb 13 16:05:12.683580 kubelet[3401]: I0213 16:05:12.683524 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-cni-path\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.683580 kubelet[3401]: I0213 16:05:12.683565 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-bpf-maps\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.683950 kubelet[3401]: I0213 16:05:12.683619 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-xtables-lock\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.683950 kubelet[3401]: I0213 16:05:12.683655 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-cilium-config-path\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.683950 kubelet[3401]: I0213 16:05:12.683684 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-host-proc-sys-kernel\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.683950 kubelet[3401]: I0213 16:05:12.683744 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-hostproc\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.683950 kubelet[3401]: I0213 16:05:12.683764 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-cilium-cgroup\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.683950 kubelet[3401]: I0213 16:05:12.683782 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-cilium-ipsec-secrets\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.684246 kubelet[3401]: I0213 16:05:12.683809 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-clustermesh-secrets\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.684246 kubelet[3401]: I0213 16:05:12.683853 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9fp\" 
(UniqueName: \"kubernetes.io/projected/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-kube-api-access-hs9fp\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.684246 kubelet[3401]: I0213 16:05:12.683908 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-cilium-run\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.684246 kubelet[3401]: I0213 16:05:12.683942 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-lib-modules\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.684246 kubelet[3401]: I0213 16:05:12.683970 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-hubble-tls\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.684246 kubelet[3401]: I0213 16:05:12.684012 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-etc-cni-netd\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.684412 kubelet[3401]: I0213 16:05:12.684098 3401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-host-proc-sys-net\") pod \"cilium-kpntp\" (UID: \"1aff8f3d-b347-4602-8fd1-48b3bf826b1a\") " pod="kube-system/cilium-kpntp" Feb 13 16:05:12.751125 containerd[1719]: time="2025-02-13T16:05:12.751066325Z" level=info msg="RemoveContainer for \"6c3ff0e3e6e290f2b7f0e9871bf3125d94d7b8768428720724d4f86e55d43e9c\" returns successfully" Feb 13 16:05:12.751458 kubelet[3401]: I0213 16:05:12.751396 3401 scope.go:117] "RemoveContainer" containerID="44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c" Feb 13 16:05:12.752854 containerd[1719]: time="2025-02-13T16:05:12.752817762Z" level=info msg="RemoveContainer for \"44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c\"" Feb 13 16:05:12.765324 systemd[1]: Started sshd@28-10.200.8.12:22-10.200.16.10:33106.service - OpenSSH per-connection server daemon (10.200.16.10:33106). 
Feb 13 16:05:12.810131 containerd[1719]: time="2025-02-13T16:05:12.807743326Z" level=info msg="RemoveContainer for \"44be9d7a9920a5b5578b9313e6a0fa1fd825671381cf60ef00616719dbc2650c\" returns successfully" Feb 13 16:05:12.810591 kubelet[3401]: I0213 16:05:12.810537 3401 scope.go:117] "RemoveContainer" containerID="6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2" Feb 13 16:05:12.813726 containerd[1719]: time="2025-02-13T16:05:12.813696653Z" level=info msg="RemoveContainer for \"6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2\"" Feb 13 16:05:12.904653 containerd[1719]: time="2025-02-13T16:05:12.903118549Z" level=info msg="RemoveContainer for \"6ad014c74904e4e766b5813e4c98d9ed029a8ef63cbf073dadf3995ef5baa0c2\" returns successfully" Feb 13 16:05:12.904653 containerd[1719]: time="2025-02-13T16:05:12.904539279Z" level=info msg="RemoveContainer for \"e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc\"" Feb 13 16:05:12.905251 kubelet[3401]: I0213 16:05:12.903407 3401 scope.go:117] "RemoveContainer" containerID="e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc" Feb 13 16:05:13.011139 containerd[1719]: time="2025-02-13T16:05:13.011060638Z" level=info msg="RemoveContainer for \"e5eb37ba267daa16c2ff96ac64e567c606841689f6cc5abfb013f3c401695efc\" returns successfully" Feb 13 16:05:13.011424 kubelet[3401]: I0213 16:05:13.011397 3401 scope.go:117] "RemoveContainer" containerID="3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41" Feb 13 16:05:13.012692 containerd[1719]: time="2025-02-13T16:05:13.012659372Z" level=info msg="RemoveContainer for \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\"" Feb 13 16:05:13.149534 containerd[1719]: time="2025-02-13T16:05:13.149479973Z" level=info msg="RemoveContainer for \"3d8e4317afa67a2941cc6991dbd08243cfc384b6f53314d28958be01bbe2be41\" returns successfully" Feb 13 16:05:13.404025 sshd[5216]: Accepted publickey for core from 10.200.16.10 port 33106 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:05:13.405825 sshd-session[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:13.410526 systemd-logind[1702]: New session 31 of user core. Feb 13 16:05:13.416258 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 16:05:13.786893 kubelet[3401]: E0213 16:05:13.786852 3401 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Feb 13 16:05:13.787428 kubelet[3401]: E0213 16:05:13.786966 3401 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-cilium-ipsec-secrets podName:1aff8f3d-b347-4602-8fd1-48b3bf826b1a nodeName:}" failed. No retries permitted until 2025-02-13 16:05:14.286939291 +0000 UTC m=+248.388257802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/1aff8f3d-b347-4602-8fd1-48b3bf826b1a-cilium-ipsec-secrets") pod "cilium-kpntp" (UID: "1aff8f3d-b347-4602-8fd1-48b3bf826b1a") : failed to sync secret cache: timed out waiting for the condition Feb 13 16:05:13.868796 sshd[5221]: Connection closed by 10.200.16.10 port 33106 Feb 13 16:05:13.869795 sshd-session[5216]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:13.874241 systemd[1]: sshd@28-10.200.8.12:22-10.200.16.10:33106.service: Deactivated successfully. 
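The MountVolume.SetUp failure above ("failed to sync secret cache: timed out waiting for the condition", retried 500ms later) follows the earlier reflector warning that the node may not yet list the cilium-ipsec-keys Secret. The sketch below only illustrates the wait-and-retry pattern with client-go under an in-cluster identity; it does not reproduce the kubelet's node-authorizer path, and the interval and timeout are arbitrary choices:

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Poll roughly like the kubelet's "durationBeforeRetry 500ms" behaviour above.
	for {
		_, err := clientset.CoreV1().Secrets("kube-system").Get(ctx, "cilium-ipsec-keys", metav1.GetOptions{})
		if err == nil {
			log.Println("cilium-ipsec-keys is readable; the volume mount can proceed")
			return
		}
		log.Printf("secret not readable yet: %v", err)
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for cilium-ipsec-keys")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```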
Feb 13 16:05:13.877005 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 16:05:13.879260 systemd-logind[1702]: Session 31 logged out. Waiting for processes to exit. Feb 13 16:05:13.881824 systemd-logind[1702]: Removed session 31. Feb 13 16:05:13.980205 systemd[1]: Started sshd@29-10.200.8.12:22-10.200.16.10:33110.service - OpenSSH per-connection server daemon (10.200.16.10:33110). Feb 13 16:05:14.025933 kubelet[3401]: I0213 16:05:14.025901 3401 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="98885483-0b40-4a74-b9b3-323cd062a471" path="/var/lib/kubelet/pods/98885483-0b40-4a74-b9b3-323cd062a471/volumes" Feb 13 16:05:14.026493 kubelet[3401]: I0213 16:05:14.026469 3401 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ed6ce577-4f34-4690-8ce1-47c2d3b20f42" path="/var/lib/kubelet/pods/ed6ce577-4f34-4690-8ce1-47c2d3b20f42/volumes" Feb 13 16:05:14.418901 containerd[1719]: time="2025-02-13T16:05:14.418790390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kpntp,Uid:1aff8f3d-b347-4602-8fd1-48b3bf826b1a,Namespace:kube-system,Attempt:0,}" Feb 13 16:05:14.615804 sshd[5227]: Accepted publickey for core from 10.200.16.10 port 33110 ssh2: RSA SHA256:6PH5d6JcoDO5FtfSXY+scvrUftAeCScf0VozIkGZ6Nk Feb 13 16:05:14.617413 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:05:14.622396 systemd-logind[1702]: New session 32 of user core. Feb 13 16:05:14.628272 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 16:05:14.671425 containerd[1719]: time="2025-02-13T16:05:14.671269244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:05:14.672068 containerd[1719]: time="2025-02-13T16:05:14.671322445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:05:14.672068 containerd[1719]: time="2025-02-13T16:05:14.671341146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:14.672068 containerd[1719]: time="2025-02-13T16:05:14.671428147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:05:14.695247 systemd[1]: Started cri-containerd-08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e.scope - libcontainer container 08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e. 
Feb 13 16:05:14.717284 containerd[1719]: time="2025-02-13T16:05:14.717239619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kpntp,Uid:1aff8f3d-b347-4602-8fd1-48b3bf826b1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\"" Feb 13 16:05:14.720406 containerd[1719]: time="2025-02-13T16:05:14.720217382Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 16:05:14.996305 containerd[1719]: time="2025-02-13T16:05:14.996065532Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5afeb8dbef82b6608993854d6d53e62f503ae1363275957ed4dccaf0d49131bd\"" Feb 13 16:05:14.999158 containerd[1719]: time="2025-02-13T16:05:14.998285679Z" level=info msg="StartContainer for \"5afeb8dbef82b6608993854d6d53e62f503ae1363275957ed4dccaf0d49131bd\"" Feb 13 16:05:15.042393 systemd[1]: Started cri-containerd-5afeb8dbef82b6608993854d6d53e62f503ae1363275957ed4dccaf0d49131bd.scope - libcontainer container 5afeb8dbef82b6608993854d6d53e62f503ae1363275957ed4dccaf0d49131bd. Feb 13 16:05:15.072737 containerd[1719]: time="2025-02-13T16:05:15.072677956Z" level=info msg="StartContainer for \"5afeb8dbef82b6608993854d6d53e62f503ae1363275957ed4dccaf0d49131bd\" returns successfully" Feb 13 16:05:15.076169 systemd[1]: cri-containerd-5afeb8dbef82b6608993854d6d53e62f503ae1363275957ed4dccaf0d49131bd.scope: Deactivated successfully. Feb 13 16:05:16.164232 kubelet[3401]: E0213 16:05:16.164192 3401 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:05:18.110318 containerd[1719]: time="2025-02-13T16:05:18.110218270Z" level=info msg="shim disconnected" id=5afeb8dbef82b6608993854d6d53e62f503ae1363275957ed4dccaf0d49131bd namespace=k8s.io Feb 13 16:05:18.110318 containerd[1719]: time="2025-02-13T16:05:18.110307772Z" level=warning msg="cleaning up after shim disconnected" id=5afeb8dbef82b6608993854d6d53e62f503ae1363275957ed4dccaf0d49131bd namespace=k8s.io Feb 13 16:05:18.110318 containerd[1719]: time="2025-02-13T16:05:18.110322772Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:18.655047 containerd[1719]: time="2025-02-13T16:05:18.654917214Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 16:05:18.946572 containerd[1719]: time="2025-02-13T16:05:18.946520795Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3\"" Feb 13 16:05:18.948447 containerd[1719]: time="2025-02-13T16:05:18.947258210Z" level=info msg="StartContainer for \"4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3\"" Feb 13 16:05:18.985259 systemd[1]: Started cri-containerd-4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3.scope - libcontainer container 4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3. 
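The RunPodSandbox, CreateContainer and StartContainer messages above come from containerd's CRI plugin. A small sketch that talks to the same CRI endpoint and lists the containers inside the new sandbox — the sandbox ID is the one returned in the log, while the socket path is the usual containerd default and is an assumption here — could look like:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI endpoint (assumed; adjust if the socket lives elsewhere).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Sandbox ID from the "RunPodSandbox ... returns sandbox id" entry above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			PodSandboxId: "08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Metadata.Name, c.State, c.Id)
	}
}
```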
Feb 13 16:05:19.014039 containerd[1719]: time="2025-02-13T16:05:19.013864722Z" level=info msg="StartContainer for \"4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3\" returns successfully" Feb 13 16:05:19.018032 systemd[1]: cri-containerd-4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3.scope: Deactivated successfully. Feb 13 16:05:19.037153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3-rootfs.mount: Deactivated successfully. Feb 13 16:05:19.710683 containerd[1719]: time="2025-02-13T16:05:19.710571288Z" level=info msg="shim disconnected" id=4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3 namespace=k8s.io Feb 13 16:05:19.710683 containerd[1719]: time="2025-02-13T16:05:19.710650090Z" level=warning msg="cleaning up after shim disconnected" id=4a1d74557d3a7356313657f02b8a83c5b90663e93e35829cf69dfcef65f7c4e3 namespace=k8s.io Feb 13 16:05:19.710683 containerd[1719]: time="2025-02-13T16:05:19.710666390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:20.664813 containerd[1719]: time="2025-02-13T16:05:20.664461704Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 16:05:20.955174 containerd[1719]: time="2025-02-13T16:05:20.954994162Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02\"" Feb 13 16:05:20.957567 containerd[1719]: time="2025-02-13T16:05:20.956205988Z" level=info msg="StartContainer for \"e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02\"" Feb 13 16:05:20.992289 systemd[1]: Started cri-containerd-e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02.scope - libcontainer container e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02. Feb 13 16:05:21.023847 systemd[1]: cri-containerd-e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02.scope: Deactivated successfully. Feb 13 16:05:21.026415 containerd[1719]: time="2025-02-13T16:05:21.025996067Z" level=info msg="StartContainer for \"e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02\" returns successfully" Feb 13 16:05:21.046313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02-rootfs.mount: Deactivated successfully. 
Feb 13 16:05:21.549049 kubelet[3401]: E0213 16:05:21.166069 3401 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 16:05:21.852533 containerd[1719]: time="2025-02-13T16:05:21.852339880Z" level=info msg="shim disconnected" id=e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02 namespace=k8s.io Feb 13 16:05:21.852533 containerd[1719]: time="2025-02-13T16:05:21.852409282Z" level=warning msg="cleaning up after shim disconnected" id=e82a95de361d694ae6ca3844a4dffca8fa35d78cbadb6a03dc8a0fc88d97cb02 namespace=k8s.io Feb 13 16:05:21.852533 containerd[1719]: time="2025-02-13T16:05:21.852449183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:22.025165 kubelet[3401]: E0213 16:05:22.023617 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-xzzbd" podUID="28ac3de1-6ecd-4546-bde8-dc21776fd476" Feb 13 16:05:22.675287 containerd[1719]: time="2025-02-13T16:05:22.675235121Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 16:05:22.956987 containerd[1719]: time="2025-02-13T16:05:22.956849889Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473\"" Feb 13 16:05:22.958045 containerd[1719]: time="2025-02-13T16:05:22.957723608Z" level=info msg="StartContainer for \"9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473\"" Feb 13 16:05:22.998253 systemd[1]: Started cri-containerd-9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473.scope - libcontainer container 9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473. Feb 13 16:05:23.021860 systemd[1]: cri-containerd-9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473.scope: Deactivated successfully. Feb 13 16:05:23.027218 containerd[1719]: time="2025-02-13T16:05:23.027176380Z" level=info msg="StartContainer for \"9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473\" returns successfully" Feb 13 16:05:23.046596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473-rootfs.mount: Deactivated successfully. 
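The recurring "Container runtime network not ready ... cni plugin not initialized" errors are what flipped the node's Ready condition to False earlier ("Node became not ready", reason KubeletNotReady) and what keeps coredns-76f75df574-xzzbd from syncing until the new cilium agent restores the CNI config. A brief client-go sketch that reads that condition off the node object — node name taken from the log, kubeconfig path assumed — might be:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Out-of-cluster config for illustration; the kubeconfig path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"ci-4186.1.1-a-254057132e", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			// While the CNI is uninitialized this shows Status=False, reason KubeletNotReady.
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```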
Feb 13 16:05:23.906245 containerd[1719]: time="2025-02-13T16:05:23.906161309Z" level=info msg="shim disconnected" id=9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473 namespace=k8s.io Feb 13 16:05:23.906833 containerd[1719]: time="2025-02-13T16:05:23.906268211Z" level=warning msg="cleaning up after shim disconnected" id=9ae1bb39729e87a8a56d57977498ebb29450dad66b05e26aef7cbf36a1e9c473 namespace=k8s.io Feb 13 16:05:23.906833 containerd[1719]: time="2025-02-13T16:05:23.906309112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:05:24.025149 kubelet[3401]: E0213 16:05:24.023628 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-xzzbd" podUID="28ac3de1-6ecd-4546-bde8-dc21776fd476" Feb 13 16:05:24.691597 containerd[1719]: time="2025-02-13T16:05:24.691530554Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 16:05:24.946539 containerd[1719]: time="2025-02-13T16:05:24.946338254Z" level=info msg="CreateContainer within sandbox \"08e66ea1321f6d39d144d6400b35c7f67613ae924c71d257b2c066e1eb80f29e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"abe2e73c06fe6093a4d48502d64e8756c069f74ddab9e301004ccffce59bf266\"" Feb 13 16:05:24.947624 containerd[1719]: time="2025-02-13T16:05:24.947518879Z" level=info msg="StartContainer for \"abe2e73c06fe6093a4d48502d64e8756c069f74ddab9e301004ccffce59bf266\"" Feb 13 16:05:24.981249 systemd[1]: Started cri-containerd-abe2e73c06fe6093a4d48502d64e8756c069f74ddab9e301004ccffce59bf266.scope - libcontainer container abe2e73c06fe6093a4d48502d64e8756c069f74ddab9e301004ccffce59bf266. Feb 13 16:05:25.012369 containerd[1719]: time="2025-02-13T16:05:25.012319452Z" level=info msg="StartContainer for \"abe2e73c06fe6093a4d48502d64e8756c069f74ddab9e301004ccffce59bf266\" returns successfully" Feb 13 16:05:25.460229 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 16:05:26.024154 kubelet[3401]: E0213 16:05:26.023047 3401 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-xzzbd" podUID="28ac3de1-6ecd-4546-bde8-dc21776fd476" Feb 13 16:05:27.572582 systemd[1]: run-containerd-runc-k8s.io-abe2e73c06fe6093a4d48502d64e8756c069f74ddab9e301004ccffce59bf266-runc.cDL8i4.mount: Deactivated successfully. Feb 13 16:05:28.456679 systemd-networkd[1456]: lxc_health: Link UP Feb 13 16:05:28.460631 systemd-networkd[1456]: lxc_health: Gained carrier Feb 13 16:05:29.771295 systemd[1]: run-containerd-runc-k8s.io-abe2e73c06fe6093a4d48502d64e8756c069f74ddab9e301004ccffce59bf266-runc.3J3wOH.mount: Deactivated successfully. 
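The CreateContainer/StartContainer entries from mount-cgroup through clean-cilium-state, followed by cilium-agent, replay the pod's container chain one container at a time, each scope exiting before the next is created. Assuming the first four are init containers and cilium-agent is the main container (consistent with the ordering here and with the earlier RemoveStaleState entries, but not stated explicitly by the log), a short client-go sketch that prints their statuses in order could be:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(),
		"cilium-kpntp", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Init containers run sequentially; each must exit 0 before the next starts.
	for _, s := range pod.Status.InitContainerStatuses {
		fmt.Printf("init %-24s ready=%v restarts=%d\n", s.Name, s.Ready, s.RestartCount)
	}
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("main %-24s ready=%v restarts=%d\n", s.Name, s.Ready, s.RestartCount)
	}
}
```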
Feb 13 16:05:30.453995 kubelet[3401]: I0213 16:05:30.453949 3401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kpntp" podStartSLOduration=18.453895105 podStartE2EDuration="18.453895105s" podCreationTimestamp="2025-02-13 16:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:05:25.720646565 +0000 UTC m=+259.821965176" watchObservedRunningTime="2025-02-13 16:05:30.453895105 +0000 UTC m=+264.555213716" Feb 13 16:05:30.478258 systemd-networkd[1456]: lxc_health: Gained IPv6LL Feb 13 16:05:32.014515 systemd[1]: run-containerd-runc-k8s.io-abe2e73c06fe6093a4d48502d64e8756c069f74ddab9e301004ccffce59bf266-runc.vC4CiT.mount: Deactivated successfully. Feb 13 16:05:32.138344 kubelet[3401]: E0213 16:05:32.138283 3401 upgradeaware.go:439] Error proxying data from backend to client: write tcp 10.200.8.12:10250->10.200.8.12:42846: write: broken pipe Feb 13 16:05:34.339618 sshd[5230]: Connection closed by 10.200.16.10 port 33110 Feb 13 16:05:34.340539 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Feb 13 16:05:34.344732 systemd[1]: sshd@29-10.200.8.12:22-10.200.16.10:33110.service: Deactivated successfully. Feb 13 16:05:34.346903 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 16:05:34.347874 systemd-logind[1702]: Session 32 logged out. Waiting for processes to exit. Feb 13 16:05:34.348990 systemd-logind[1702]: Removed session 32.