Jan 30 13:06:20.068914 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 13:06:20.068951 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:20.068964 kernel: BIOS-provided physical RAM map: Jan 30 13:06:20.068974 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 30 13:06:20.068983 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 30 13:06:20.068993 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 30 13:06:20.069005 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 30 13:06:20.069016 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 30 13:06:20.069029 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 30 13:06:20.069039 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 30 13:06:20.069049 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 30 13:06:20.069059 kernel: printk: bootconsole [earlyser0] enabled Jan 30 13:06:20.069069 kernel: NX (Execute Disable) protection: active Jan 30 13:06:20.069080 kernel: APIC: Static calls initialized Jan 30 13:06:20.069096 kernel: efi: EFI v2.7 by Microsoft Jan 30 13:06:20.069108 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018 Jan 30 13:06:20.069120 kernel: random: crng init done Jan 30 13:06:20.069131 kernel: secureboot: Secure boot disabled Jan 30 13:06:20.069143 kernel: SMBIOS 3.1.0 present. 
Jan 30 13:06:20.069155 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 30 13:06:20.069166 kernel: Hypervisor detected: Microsoft Hyper-V Jan 30 13:06:20.069178 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 30 13:06:20.069189 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Jan 30 13:06:20.069200 kernel: Hyper-V: Nested features: 0x1e0101 Jan 30 13:06:20.069214 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 30 13:06:20.069225 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 30 13:06:20.069238 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:06:20.069250 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:06:20.069262 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 30 13:06:20.069275 kernel: tsc: Detected 2593.906 MHz processor Jan 30 13:06:20.069287 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:06:20.069299 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:06:20.069311 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 30 13:06:20.069327 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 30 13:06:20.069338 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:06:20.069350 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 30 13:06:20.069361 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 30 13:06:20.069375 kernel: Using GB pages for direct mapping Jan 30 13:06:20.069387 kernel: ACPI: Early table checksum verification disabled Jan 30 13:06:20.069466 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 30 13:06:20.069486 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069505 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069518 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 30 13:06:20.069530 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 30 13:06:20.069542 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069555 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069567 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069583 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069596 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069609 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069622 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069636 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 30 13:06:20.069649 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 30 13:06:20.069661 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 30 13:06:20.069673 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 30 13:06:20.069685 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Jan 30 13:06:20.069701 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 30 13:06:20.069712 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 30 13:06:20.069721 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 30 13:06:20.069731 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 30 13:06:20.069743 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 30 13:06:20.069756 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:06:20.069769 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:06:20.069780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 30 13:06:20.069794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 30 13:06:20.069810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 30 13:06:20.069824 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 30 13:06:20.069837 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 30 13:06:20.069851 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 30 13:06:20.069865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 30 13:06:20.069879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 30 13:06:20.069892 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 30 13:06:20.069906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 30 13:06:20.069924 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 30 13:06:20.069938 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 30 13:06:20.069952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 30 13:06:20.069965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 30 13:06:20.069979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 30 13:06:20.069993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 30 13:06:20.070007 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 30 13:06:20.070021 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 30 13:06:20.070036 kernel: Zone ranges: Jan 30 13:06:20.070053 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:06:20.070066 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:06:20.070080 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:06:20.070094 kernel: Movable zone start for each node Jan 30 13:06:20.070107 kernel: Early memory node ranges Jan 30 13:06:20.070119 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:06:20.070130 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 30 13:06:20.070144 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 30 13:06:20.070156 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:06:20.070172 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 30 13:06:20.070184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:06:20.070196 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:06:20.070211 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 30 13:06:20.070223 kernel: ACPI: 
PM-Timer IO Port: 0x408 Jan 30 13:06:20.070235 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 30 13:06:20.070246 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:06:20.070258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:06:20.070270 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:06:20.070285 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 30 13:06:20.070296 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:06:20.070308 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 30 13:06:20.070318 kernel: Booting paravirtualized kernel on Hyper-V Jan 30 13:06:20.070330 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:06:20.070343 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:06:20.070356 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:06:20.070369 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:06:20.070382 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:06:20.070421 kernel: Hyper-V: PV spinlocks enabled Jan 30 13:06:20.070434 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:06:20.070449 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:20.070462 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:06:20.070475 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:06:20.070488 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:06:20.070500 kernel: Fallback order for Node 0: 0 Jan 30 13:06:20.070514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 30 13:06:20.070532 kernel: Policy zone: Normal Jan 30 13:06:20.070557 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:06:20.070572 kernel: software IO TLB: area num 2. Jan 30 13:06:20.070590 kernel: Memory: 8074980K/8387460K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 312224K reserved, 0K cma-reserved) Jan 30 13:06:20.070604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:06:20.070619 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 13:06:20.070632 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:06:20.070647 kernel: Dynamic Preempt: voluntary Jan 30 13:06:20.070661 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:06:20.070676 kernel: rcu: RCU event tracing is enabled. Jan 30 13:06:20.070691 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:06:20.070710 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:06:20.070725 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:06:20.070739 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:06:20.070754 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 13:06:20.070769 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:06:20.070784 kernel: Using NULL legacy PIC Jan 30 13:06:20.070802 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 30 13:06:20.070817 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:06:20.070831 kernel: Console: colour dummy device 80x25 Jan 30 13:06:20.070846 kernel: printk: console [tty1] enabled Jan 30 13:06:20.070860 kernel: printk: console [ttyS0] enabled Jan 30 13:06:20.070874 kernel: printk: bootconsole [earlyser0] disabled Jan 30 13:06:20.070889 kernel: ACPI: Core revision 20230628 Jan 30 13:06:20.070903 kernel: Failed to register legacy timer interrupt Jan 30 13:06:20.070917 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:06:20.070936 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 13:06:20.070950 kernel: Hyper-V: Using IPI hypercalls Jan 30 13:06:20.070964 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 30 13:06:20.070979 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 30 13:06:20.070993 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 30 13:06:20.071008 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 30 13:06:20.071023 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 30 13:06:20.071037 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 30 13:06:20.071051 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 30 13:06:20.071070 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:06:20.071084 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:06:20.071098 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:06:20.071112 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:06:20.071126 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:06:20.071140 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:06:20.071154 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 30 13:06:20.071168 kernel: RETBleed: Vulnerable Jan 30 13:06:20.071182 kernel: Speculative Store Bypass: Vulnerable Jan 30 13:06:20.071196 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:06:20.071214 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:06:20.071229 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:06:20.071243 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:06:20.071256 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:06:20.071270 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 30 13:06:20.071285 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 30 13:06:20.071299 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 30 13:06:20.071313 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:06:20.071326 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 30 13:06:20.071340 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 30 13:06:20.071354 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 30 13:06:20.071372 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 30 13:06:20.071386 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:06:20.071420 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:06:20.071435 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:06:20.071448 kernel: landlock: Up and running. Jan 30 13:06:20.071462 kernel: SELinux: Initializing. Jan 30 13:06:20.071476 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:06:20.071490 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:06:20.071505 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 30 13:06:20.071519 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:20.071533 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:20.071553 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:20.071567 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 30 13:06:20.071582 kernel: signal: max sigframe size: 3632 Jan 30 13:06:20.071596 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:06:20.071611 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:06:20.071625 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:06:20.071639 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:06:20.071654 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:06:20.071668 kernel: .... node #0, CPUs: #1 Jan 30 13:06:20.071687 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 30 13:06:20.071702 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:06:20.071716 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:06:20.071731 kernel: smpboot: Max logical packages: 1 Jan 30 13:06:20.071745 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 30 13:06:20.071759 kernel: devtmpfs: initialized Jan 30 13:06:20.071773 kernel: x86/mm: Memory block size: 128MB Jan 30 13:06:20.071787 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 30 13:06:20.071805 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:06:20.071819 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:06:20.071835 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:06:20.071849 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:06:20.071862 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:06:20.071875 kernel: audit: type=2000 audit(1738242379.028:1): state=initialized audit_enabled=0 res=1 Jan 30 13:06:20.071889 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:06:20.071902 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:06:20.071915 kernel: cpuidle: using governor menu Jan 30 13:06:20.071932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:06:20.071946 kernel: dca service started, version 1.12.1 Jan 30 13:06:20.071960 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 30 13:06:20.071974 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 13:06:20.071987 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:06:20.072001 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:06:20.072015 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:06:20.072028 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:06:20.072041 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:06:20.072057 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:06:20.072071 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:06:20.072086 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:06:20.072101 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:06:20.072115 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:06:20.072130 kernel: ACPI: Interpreter enabled Jan 30 13:06:20.072145 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:06:20.072160 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:06:20.072175 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:06:20.072194 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:06:20.072209 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 30 13:06:20.072224 kernel: iommu: Default domain type: Translated Jan 30 13:06:20.072239 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:06:20.072255 kernel: efivars: Registered efivars operations Jan 30 13:06:20.072269 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:06:20.072284 kernel: PCI: System does not support PCI Jan 30 13:06:20.072298 kernel: vgaarb: loaded Jan 30 13:06:20.072313 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 30 13:06:20.072331 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:06:20.072347 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:06:20.072361 kernel: 
pnp: PnP ACPI init Jan 30 13:06:20.072375 kernel: pnp: PnP ACPI: found 3 devices Jan 30 13:06:20.072390 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:06:20.072418 kernel: NET: Registered PF_INET protocol family Jan 30 13:06:20.072432 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:06:20.072446 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:06:20.072459 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:06:20.072477 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:06:20.072490 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:06:20.072503 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:06:20.072517 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:06:20.072529 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:06:20.072543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:06:20.072556 kernel: NET: Registered PF_XDP protocol family Jan 30 13:06:20.072569 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:06:20.072583 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:06:20.072601 kernel: software IO TLB: mapped [mem 0x000000003ad8d000-0x000000003ed8d000] (64MB) Jan 30 13:06:20.072615 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:06:20.072628 kernel: Initialise system trusted keyrings Jan 30 13:06:20.072641 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:06:20.072654 kernel: Key type asymmetric registered Jan 30 13:06:20.072667 kernel: Asymmetric key parser 'x509' registered Jan 30 13:06:20.072680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:06:20.072693 kernel: io scheduler mq-deadline registered Jan 30 13:06:20.072706 kernel: io scheduler kyber registered Jan 30 13:06:20.072725 kernel: io scheduler bfq registered Jan 30 13:06:20.072740 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:06:20.072755 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:06:20.072771 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:06:20.072785 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:06:20.072801 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:06:20.072981 kernel: rtc_cmos 00:02: registered as rtc0 Jan 30 13:06:20.073102 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:06:19 UTC (1738242379) Jan 30 13:06:20.073217 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 30 13:06:20.073235 kernel: intel_pstate: CPU model not supported Jan 30 13:06:20.073249 kernel: efifb: probing for efifb Jan 30 13:06:20.073262 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 13:06:20.073276 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 13:06:20.073289 kernel: efifb: scrolling: redraw Jan 30 13:06:20.073303 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 13:06:20.073317 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:06:20.073331 kernel: fb0: EFI VGA frame buffer device Jan 30 13:06:20.073348 kernel: pstore: Using crash dump compression: deflate Jan 30 13:06:20.073362 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:06:20.073375 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:06:20.073389 kernel: Segment Routing with IPv6 Jan 30 13:06:20.073425 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:06:20.073439 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:06:20.073453 kernel: Key type dns_resolver registered Jan 30 13:06:20.073466 kernel: IPI shorthand broadcast: enabled Jan 30 13:06:20.073480 kernel: sched_clock: Marking stable (848137400, 47012400)->(1119523400, -224373600) Jan 30 13:06:20.073498 kernel: registered taskstats version 1 Jan 30 13:06:20.073511 kernel: Loading compiled-in X.509 certificates Jan 30 13:06:20.073525 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:06:20.073539 kernel: Key type .fscrypt registered Jan 30 13:06:20.073552 kernel: Key type fscrypt-provisioning registered Jan 30 13:06:20.073566 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:06:20.073580 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:06:20.073593 kernel: ima: No architecture policies found Jan 30 13:06:20.073610 kernel: clk: Disabling unused clocks Jan 30 13:06:20.073624 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:06:20.073638 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:06:20.073651 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:06:20.073665 kernel: Run /init as init process Jan 30 13:06:20.073679 kernel: with arguments: Jan 30 13:06:20.073693 kernel: /init Jan 30 13:06:20.073706 kernel: with environment: Jan 30 13:06:20.073720 kernel: HOME=/ Jan 30 13:06:20.073733 kernel: TERM=linux Jan 30 13:06:20.073750 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:06:20.073767 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:06:20.073784 systemd[1]: Detected virtualization microsoft. Jan 30 13:06:20.073799 systemd[1]: Detected architecture x86-64. Jan 30 13:06:20.073814 systemd[1]: Running in initrd. Jan 30 13:06:20.073829 systemd[1]: No hostname configured, using default hostname. Jan 30 13:06:20.073844 systemd[1]: Hostname set to . Jan 30 13:06:20.073865 systemd[1]: Initializing machine ID from random generator. 
Jan 30 13:06:20.073880 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:06:20.073896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:06:20.073912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:06:20.073930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:06:20.073947 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:06:20.073963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:06:20.073980 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:06:20.074003 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:06:20.074020 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:06:20.074036 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:06:20.074051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:06:20.074066 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:06:20.074080 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:06:20.074095 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:06:20.074112 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:06:20.074127 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:06:20.074141 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:06:20.074156 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:06:20.074170 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:06:20.074185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:06:20.074200 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:06:20.074215 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:06:20.074230 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:06:20.074247 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:06:20.074262 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:06:20.074277 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:06:20.074292 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:06:20.074307 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:06:20.074322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:06:20.074362 systemd-journald[177]: Collecting audit messages is disabled. Jan 30 13:06:20.074412 systemd-journald[177]: Journal started Jan 30 13:06:20.074448 systemd-journald[177]: Runtime Journal (/run/log/journal/d9e34e48252445ae87043042d66b1acc) is 8.0M, max 158.8M, 150.8M free. Jan 30 13:06:20.079436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:20.088087 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:06:20.088761 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 30 13:06:20.094028 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:06:20.099694 systemd-modules-load[178]: Inserted module 'overlay' Jan 30 13:06:20.100185 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:06:20.118292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:06:20.131546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:06:20.136463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:20.141217 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:06:20.166872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:06:20.170671 kernel: Bridge firewalling registered Jan 30 13:06:20.170769 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 30 13:06:20.179601 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:06:20.184496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:06:20.185073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:06:20.185353 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:06:20.194866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:06:20.222365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:06:20.223304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:06:20.235713 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:06:20.238800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:20.247657 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:06:20.268335 dracut-cmdline[215]: dracut-dracut-053 Jan 30 13:06:20.271785 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:20.298708 systemd-resolved[212]: Positive Trust Anchors: Jan 30 13:06:20.298727 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:06:20.298789 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:06:20.307661 systemd-resolved[212]: Defaulting to hostname 'linux'. 
Jan 30 13:06:20.309061 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:06:20.327289 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:06:20.356414 kernel: SCSI subsystem initialized Jan 30 13:06:20.366413 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:06:20.377418 kernel: iscsi: registered transport (tcp) Jan 30 13:06:20.399455 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:06:20.399559 kernel: QLogic iSCSI HBA Driver Jan 30 13:06:20.435336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:06:20.446530 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:06:20.477307 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:06:20.477416 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:06:20.480474 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:06:20.520420 kernel: raid6: avx512x4 gen() 18190 MB/s Jan 30 13:06:20.541412 kernel: raid6: avx512x2 gen() 18260 MB/s Jan 30 13:06:20.560407 kernel: raid6: avx512x1 gen() 18105 MB/s Jan 30 13:06:20.578424 kernel: raid6: avx2x4 gen() 18095 MB/s Jan 30 13:06:20.597410 kernel: raid6: avx2x2 gen() 18184 MB/s Jan 30 13:06:20.617271 kernel: raid6: avx2x1 gen() 14050 MB/s Jan 30 13:06:20.617326 kernel: raid6: using algorithm avx512x2 gen() 18260 MB/s Jan 30 13:06:20.638960 kernel: raid6: .... xor() 29951 MB/s, rmw enabled Jan 30 13:06:20.638994 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:06:20.661422 kernel: xor: automatically using best checksumming function avx Jan 30 13:06:20.801423 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:06:20.811702 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:06:20.821557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:06:20.835528 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jan 30 13:06:20.841999 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:06:20.859523 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:06:20.872969 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jan 30 13:06:20.899520 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:06:20.907628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:06:20.947779 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:06:20.961593 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:06:20.993928 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:06:21.001835 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:06:21.008027 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:06:21.014318 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:06:21.028221 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:06:21.038244 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:06:21.038277 kernel: hv_vmbus: Vmbus version:5.2 Jan 30 13:06:21.061335 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 13:06:21.075848 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 13:06:21.082447 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:06:21.083216 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:06:21.083671 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:21.092271 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:06:21.115191 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 30 13:06:21.115226 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 13:06:21.095196 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:06:21.133383 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:06:21.133428 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 30 13:06:21.133453 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:06:21.133469 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 13:06:21.095430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:21.138446 kernel: PTP clock support registered Jan 30 13:06:21.107574 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:21.122915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:21.154951 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 13:06:21.155000 kernel: hv_vmbus: registering driver hv_utils Jan 30 13:06:21.158313 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 13:06:21.158369 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 13:06:22.371565 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 13:06:22.372167 systemd-resolved[212]: Clock change detected. Flushing caches. Jan 30 13:06:22.386486 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:06:22.394205 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 13:06:22.394242 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 13:06:22.396830 kernel: AES CTR mode by8 optimization enabled Jan 30 13:06:22.398697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:22.411953 kernel: scsi host1: storvsc_host_t Jan 30 13:06:22.412010 kernel: scsi host0: storvsc_host_t Jan 30 13:06:22.408605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 13:06:22.419572 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 13:06:22.425529 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 13:06:22.464827 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 30 13:06:22.476307 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:06:22.476334 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 13:06:22.488654 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:06:22.488852 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:06:22.489027 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 30 13:06:22.489205 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 13:06:22.489381 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 13:06:22.489573 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:22.489596 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:06:22.465951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:22.622295 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: VF slot 1 added Jan 30 13:06:22.630892 kernel: hv_vmbus: registering driver hv_pci Jan 30 13:06:22.635485 kernel: hv_pci 93732baa-e4ed-4842-b14d-7e03edcff23e: PCI VMBus probing: Using version 0x10004 Jan 30 13:06:22.674965 kernel: hv_pci 93732baa-e4ed-4842-b14d-7e03edcff23e: PCI host bridge to bus e4ed:00 Jan 30 13:06:22.675142 kernel: pci_bus e4ed:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 30 13:06:22.675355 kernel: pci_bus e4ed:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 13:06:22.675534 kernel: pci e4ed:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 30 13:06:22.675730 kernel: pci e4ed:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:06:22.675910 kernel: pci e4ed:00:02.0: enabling Extended Tags Jan 30 13:06:22.676080 kernel: pci e4ed:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e4ed:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 30 13:06:22.676253 kernel: pci_bus e4ed:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 13:06:22.676421 kernel: pci e4ed:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:06:22.836376 kernel: mlx5_core e4ed:00:02.0: enabling device (0000 -> 0002) Jan 30 13:06:23.059637 kernel: mlx5_core e4ed:00:02.0: firmware version: 14.30.5000 Jan 30 13:06:23.059862 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: VF registering: eth1 Jan 30 13:06:23.060023 kernel: mlx5_core e4ed:00:02.0 eth1: joined to eth0 Jan 30 13:06:23.060207 kernel: mlx5_core e4ed:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:06:23.065507 kernel: mlx5_core e4ed:00:02.0 enP58605s1: renamed from eth1 Jan 30 13:06:23.076492 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 30 13:06:23.109491 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (446) Jan 30 13:06:23.125149 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:06:23.162796 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jan 30 13:06:23.173496 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (448) Jan 30 13:06:23.187798 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 13:06:23.190886 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 13:06:23.210659 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:06:23.226194 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:23.234495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:24.241496 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:24.245521 disk-uuid[605]: The operation has completed successfully. Jan 30 13:06:24.325131 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:06:24.325243 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:06:24.349615 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:06:24.354225 sh[691]: Success Jan 30 13:06:24.393603 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:06:24.633156 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:06:24.651396 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:06:24.655803 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:06:24.688236 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:06:24.688303 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:24.691687 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:06:24.694539 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:06:24.696918 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:06:25.035154 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:06:25.038232 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:06:25.049733 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:06:25.056249 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:06:25.082495 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:25.082545 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:25.082558 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:06:25.104496 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:06:25.117766 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:25.117714 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:06:25.129194 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:06:25.140685 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:06:25.150823 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:06:25.163620 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 13:06:25.190673 systemd-networkd[875]: lo: Link UP Jan 30 13:06:25.190681 systemd-networkd[875]: lo: Gained carrier Jan 30 13:06:25.192817 systemd-networkd[875]: Enumeration completed Jan 30 13:06:25.192902 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:06:25.193895 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:06:25.193899 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:06:25.195654 systemd[1]: Reached target network.target - Network. Jan 30 13:06:25.261494 kernel: mlx5_core e4ed:00:02.0 enP58605s1: Link up Jan 30 13:06:25.292170 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: Data path switched to VF: enP58605s1 Jan 30 13:06:25.291625 systemd-networkd[875]: enP58605s1: Link UP Jan 30 13:06:25.291776 systemd-networkd[875]: eth0: Link UP Jan 30 13:06:25.292023 systemd-networkd[875]: eth0: Gained carrier Jan 30 13:06:25.292040 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:06:25.297819 systemd-networkd[875]: enP58605s1: Gained carrier Jan 30 13:06:25.331527 systemd-networkd[875]: eth0: DHCPv4 address 10.200.4.23/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:06:26.065722 ignition[864]: Ignition 2.20.0 Jan 30 13:06:26.065735 ignition[864]: Stage: fetch-offline Jan 30 13:06:26.067252 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:06:26.065781 ignition[864]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:26.065791 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:26.065898 ignition[864]: parsed url from cmdline: "" Jan 30 13:06:26.065903 ignition[864]: no config URL provided Jan 30 13:06:26.065909 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:06:26.065919 ignition[864]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:06:26.065926 ignition[864]: failed to fetch config: resource requires networking Jan 30 13:06:26.066355 ignition[864]: Ignition finished successfully Jan 30 13:06:26.093693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 13:06:26.108818 ignition[884]: Ignition 2.20.0 Jan 30 13:06:26.108830 ignition[884]: Stage: fetch Jan 30 13:06:26.109055 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:26.109069 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:26.109165 ignition[884]: parsed url from cmdline: "" Jan 30 13:06:26.109169 ignition[884]: no config URL provided Jan 30 13:06:26.109173 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:06:26.109179 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:06:26.109208 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 13:06:26.197215 ignition[884]: GET result: OK Jan 30 13:06:26.197323 ignition[884]: config has been read from IMDS userdata Jan 30 13:06:26.197359 ignition[884]: parsing config with SHA512: 8ffe7a31993deed0633e344708d6f5a3103ff9858b530e05853ab37e5ab4b492848b6987d903985cd10f93b8b5291dc21ca65a4491b008a494247096c8a03e11 Jan 30 13:06:26.202218 unknown[884]: fetched base config from "system" Jan 30 13:06:26.202428 unknown[884]: fetched base config from "system" Jan 30 13:06:26.202820 ignition[884]: fetch: fetch complete Jan 30 13:06:26.202435 unknown[884]: fetched user config from "azure" Jan 30 13:06:26.202827 ignition[884]: fetch: fetch passed Jan 30 13:06:26.204401 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:06:26.202876 ignition[884]: Ignition finished successfully Jan 30 13:06:26.218769 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:06:26.232714 ignition[890]: Ignition 2.20.0 Jan 30 13:06:26.232742 ignition[890]: Stage: kargs Jan 30 13:06:26.232984 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:26.232998 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:26.239338 ignition[890]: kargs: kargs passed Jan 30 13:06:26.239397 ignition[890]: Ignition finished successfully Jan 30 13:06:26.243401 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:06:26.253663 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:06:26.267020 ignition[896]: Ignition 2.20.0 Jan 30 13:06:26.267032 ignition[896]: Stage: disks Jan 30 13:06:26.267283 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:26.267297 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:26.273513 ignition[896]: disks: disks passed Jan 30 13:06:26.273565 ignition[896]: Ignition finished successfully Jan 30 13:06:26.278525 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:06:26.279160 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:06:26.280035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:06:26.280219 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:06:26.280618 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:06:26.280999 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:06:26.303780 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:06:26.413022 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 13:06:26.418282 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 30 13:06:26.429380 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:06:26.522484 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:06:26.522965 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:06:26.527719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:06:26.577594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:06:26.582921 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:06:26.594536 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915) Jan 30 13:06:26.593681 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:06:26.606595 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:26.596848 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:06:26.616126 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:26.616157 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:06:26.596889 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:06:26.617799 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:06:26.626493 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:06:26.628383 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:06:26.634501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:06:26.909633 systemd-networkd[875]: enP58605s1: Gained IPv6LL Jan 30 13:06:27.101618 systemd-networkd[875]: eth0: Gained IPv6LL Jan 30 13:06:27.251877 coreos-metadata[917]: Jan 30 13:06:27.251 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:06:27.258245 coreos-metadata[917]: Jan 30 13:06:27.258 INFO Fetch successful Jan 30 13:06:27.261010 coreos-metadata[917]: Jan 30 13:06:27.258 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:06:27.270584 coreos-metadata[917]: Jan 30 13:06:27.270 INFO Fetch successful Jan 30 13:06:27.288339 coreos-metadata[917]: Jan 30 13:06:27.286 INFO wrote hostname ci-4186.1.0-a-065ab1add7 to /sysroot/etc/hostname Jan 30 13:06:27.290932 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:06:27.295305 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:06:27.318204 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:06:27.325899 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:06:27.330596 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:06:28.129556 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:06:28.137597 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:06:28.145625 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:06:28.151522 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:28.154891 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 30 13:06:28.179278 ignition[1033]: INFO : Ignition 2.20.0 Jan 30 13:06:28.179278 ignition[1033]: INFO : Stage: mount Jan 30 13:06:28.184098 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:28.184098 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:28.184098 ignition[1033]: INFO : mount: mount passed Jan 30 13:06:28.184098 ignition[1033]: INFO : Ignition finished successfully Jan 30 13:06:28.182500 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:06:28.199654 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:06:28.204803 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:06:28.224724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:06:28.239487 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045) Jan 30 13:06:28.243482 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:28.243526 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:28.248149 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:06:28.253752 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:06:28.255164 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:06:28.277817 ignition[1061]: INFO : Ignition 2.20.0 Jan 30 13:06:28.277817 ignition[1061]: INFO : Stage: files Jan 30 13:06:28.282279 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:28.282279 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:28.282279 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:06:28.282279 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:06:28.282279 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:06:28.400583 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:06:28.404551 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:06:28.404551 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:06:28.401097 unknown[1061]: wrote ssh authorized keys file for user: core Jan 30 13:06:28.430872 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:06:28.439125 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:06:28.460255 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:06:28.733856 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:06:28.733856 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:06:28.744053 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:06:29.308123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:06:29.517813 
ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:06:29.557829 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:06:30.178447 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:06:31.101755 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:06:31.101755 ignition[1061]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:06:31.157241 ignition[1061]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: op(e): 
[finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: files passed Jan 30 13:06:31.163557 ignition[1061]: INFO : Ignition finished successfully Jan 30 13:06:31.159585 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:06:31.180710 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:06:31.198694 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:06:31.206284 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:06:31.206403 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:06:31.229453 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:06:31.229453 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:06:31.237591 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:06:31.234068 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:06:31.238257 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:06:31.256139 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:06:31.284056 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:06:31.284179 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:06:31.290064 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:06:31.297804 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:06:31.302730 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:06:31.307703 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:06:31.320277 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:06:31.330689 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:06:31.343695 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:06:31.349629 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:06:31.357424 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:06:31.362038 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:06:31.362217 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:06:31.370284 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:06:31.375232 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:06:31.379612 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:06:31.382425 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:06:31.391022 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
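The files stage logged above writes the Helm and Cilium archives, the Kubernetes sysext image and its /etc/extensions link, the core user's SSH keys, and the prepare-helm.service unit with its preset enabled. The real config arrived via IMDS userData and is not shown in the log; the sketch below is only an illustration of an Ignition v3-style config that would drive writes of that shape. The SSH key and the unit contents are placeholders, while the download URLs and paths are copied from the log lines.

"""Sketch: an Ignition-style config producing file writes like those logged above."""
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [{"name": "core",
                   "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder-key"]}]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "mode": 420,
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
             "mode": 420,
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service",
                   "enabled": True,
                   "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n"
                               "[Install]\nWantedBy=multi-user.target\n"}]
    },
}

print(json.dumps(config, indent=2))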
Jan 30 13:06:31.391231 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:06:31.391601 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:06:31.407529 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:06:31.410379 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:06:31.417441 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:06:31.420243 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:06:31.420371 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:06:31.425295 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:06:31.435014 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:06:31.438963 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:06:31.444137 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:06:31.450194 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:06:31.450371 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:06:31.455947 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:06:31.456102 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:06:31.468267 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:06:31.468413 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:06:31.473549 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:06:31.473695 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:06:31.489955 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:06:31.495811 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:06:31.497930 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:06:31.498139 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:06:31.501517 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:06:31.501671 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:06:31.518310 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:06:31.518419 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:06:31.531540 ignition[1114]: INFO : Ignition 2.20.0 Jan 30 13:06:31.531540 ignition[1114]: INFO : Stage: umount Jan 30 13:06:31.538129 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:31.538129 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:31.538129 ignition[1114]: INFO : umount: umount passed Jan 30 13:06:31.538129 ignition[1114]: INFO : Ignition finished successfully Jan 30 13:06:31.534260 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:06:31.534366 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:06:31.538613 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:06:31.538713 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:06:31.560337 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:06:31.560415 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 30 13:06:31.568858 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:06:31.568931 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:06:31.574449 systemd[1]: Stopped target network.target - Network. Jan 30 13:06:31.574915 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:06:31.574970 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:06:31.575359 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:06:31.575741 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:06:31.585963 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:06:31.588893 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:06:31.595118 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:06:31.612227 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:06:31.612312 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:06:31.631102 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:06:31.631165 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:06:31.639586 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:06:31.639663 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:06:31.645818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:06:31.645883 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:06:31.655992 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:06:31.656180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:06:31.657718 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:06:31.658276 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:06:31.658427 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:06:31.659733 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:06:31.659816 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:06:31.672720 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:06:31.672828 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:06:31.675532 systemd-networkd[875]: eth0: DHCPv6 lease lost Jan 30 13:06:31.680228 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:06:31.680333 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:06:31.688278 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:06:31.688343 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:06:31.714579 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:06:31.717327 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:06:31.717405 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:06:31.721932 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:06:31.721982 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:06:31.729359 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:06:31.732543 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 30 13:06:31.735954 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:06:31.736001 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:06:31.736280 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:06:31.765122 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:06:31.765298 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:06:31.770964 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:06:31.771005 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:06:31.774931 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:06:31.774966 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:06:31.775415 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:06:31.775455 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:06:31.779197 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:06:31.779239 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:06:31.815010 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: Data path switched from VF: enP58605s1 Jan 30 13:06:31.780035 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:06:31.780072 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:31.817628 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:06:31.820482 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:06:31.820555 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:06:31.827311 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:06:31.827390 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:06:31.833415 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:06:31.833490 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:06:31.852957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:06:31.853036 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:31.861942 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:06:31.862061 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:06:31.871951 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:06:31.872082 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:06:31.877515 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:06:31.891689 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:06:31.903135 systemd[1]: Switching root. 
Jan 30 13:06:31.992907 systemd-journald[177]: Journal stopped Jan 30 13:06:20.068914 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 13:06:20.068951 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:20.068964 kernel: BIOS-provided physical RAM map: Jan 30 13:06:20.068974 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 30 13:06:20.068983 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 30 13:06:20.068993 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 30 13:06:20.069005 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 30 13:06:20.069016 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 30 13:06:20.069029 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 30 13:06:20.069039 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 30 13:06:20.069049 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 30 13:06:20.069059 kernel: printk: bootconsole [earlyser0] enabled Jan 30 13:06:20.069069 kernel: NX (Execute Disable) protection: active Jan 30 13:06:20.069080 kernel: APIC: Static calls initialized Jan 30 13:06:20.069096 kernel: efi: EFI v2.7 by Microsoft Jan 30 13:06:20.069108 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018 Jan 30 13:06:20.069120 kernel: random: crng init done Jan 30 13:06:20.069131 kernel: secureboot: Secure boot disabled Jan 30 13:06:20.069143 kernel: SMBIOS 3.1.0 present. 
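After the journal is replayed from the switched root, the BIOS-e820 map appears again above. Summing the ranges marked "usable" (copied from those lines, end addresses inclusive) gives the RAM the firmware handed to the kernel; this is a quick arithmetic check, not anything the boot itself runs.

"""Sketch: total the 'usable' e820 ranges replayed in the log above."""

USABLE = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x000000003ff40fff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]

total = sum(end - start + 1 for start, end in USABLE)
print(f"usable RAM: {total} bytes = {total / 2**20:.1f} MiB")
# Prints 8588763136 bytes (~8191 MiB); subtracting the first 4 KiB page that the
# kernel re-marks as reserved gives 8588759040 bytes, i.e. the "8387460K" total
# reported later in the Memory: line.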
Jan 30 13:06:20.069155 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 30 13:06:20.069166 kernel: Hypervisor detected: Microsoft Hyper-V Jan 30 13:06:20.069178 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 30 13:06:20.069189 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Jan 30 13:06:20.069200 kernel: Hyper-V: Nested features: 0x1e0101 Jan 30 13:06:20.069214 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 30 13:06:20.069225 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 30 13:06:20.069238 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:06:20.069250 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:06:20.069262 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 30 13:06:20.069275 kernel: tsc: Detected 2593.906 MHz processor Jan 30 13:06:20.069287 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:06:20.069299 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:06:20.069311 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 30 13:06:20.069327 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 30 13:06:20.069338 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:06:20.069350 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 30 13:06:20.069361 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 30 13:06:20.069375 kernel: Using GB pages for direct mapping Jan 30 13:06:20.069387 kernel: ACPI: Early table checksum verification disabled Jan 30 13:06:20.069466 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 30 13:06:20.069486 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069505 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069518 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 30 13:06:20.069530 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 30 13:06:20.069542 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069555 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069567 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069583 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069596 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069609 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069622 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:06:20.069636 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 30 13:06:20.069649 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 30 13:06:20.069661 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 30 13:06:20.069673 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 30 13:06:20.069685 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Jan 30 13:06:20.069701 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 30 13:06:20.069712 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 30 13:06:20.069721 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 30 13:06:20.069731 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 30 13:06:20.069743 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 30 13:06:20.069756 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:06:20.069769 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:06:20.069780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 30 13:06:20.069794 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 30 13:06:20.069810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 30 13:06:20.069824 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 30 13:06:20.069837 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 30 13:06:20.069851 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 30 13:06:20.069865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 30 13:06:20.069879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 30 13:06:20.069892 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 30 13:06:20.069906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 30 13:06:20.069924 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 30 13:06:20.069938 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 30 13:06:20.069952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 30 13:06:20.069965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 30 13:06:20.069979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 30 13:06:20.069993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 30 13:06:20.070007 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 30 13:06:20.070021 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 30 13:06:20.070036 kernel: Zone ranges: Jan 30 13:06:20.070053 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:06:20.070066 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:06:20.070080 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:06:20.070094 kernel: Movable zone start for each node Jan 30 13:06:20.070107 kernel: Early memory node ranges Jan 30 13:06:20.070119 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:06:20.070130 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 30 13:06:20.070144 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 30 13:06:20.070156 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:06:20.070172 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 30 13:06:20.070184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:06:20.070196 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:06:20.070211 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 30 13:06:20.070223 kernel: ACPI: 
PM-Timer IO Port: 0x408 Jan 30 13:06:20.070235 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 30 13:06:20.070246 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:06:20.070258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:06:20.070270 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:06:20.070285 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 30 13:06:20.070296 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:06:20.070308 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 30 13:06:20.070318 kernel: Booting paravirtualized kernel on Hyper-V Jan 30 13:06:20.070330 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:06:20.070343 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:06:20.070356 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:06:20.070369 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:06:20.070382 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:06:20.070421 kernel: Hyper-V: PV spinlocks enabled Jan 30 13:06:20.070434 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:06:20.070449 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:20.070462 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:06:20.070475 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:06:20.070488 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:06:20.070500 kernel: Fallback order for Node 0: 0 Jan 30 13:06:20.070514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 30 13:06:20.070532 kernel: Policy zone: Normal Jan 30 13:06:20.070557 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:06:20.070572 kernel: software IO TLB: area num 2. Jan 30 13:06:20.070590 kernel: Memory: 8074980K/8387460K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 312224K reserved, 0K cma-reserved) Jan 30 13:06:20.070604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:06:20.070619 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 13:06:20.070632 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:06:20.070647 kernel: Dynamic Preempt: voluntary Jan 30 13:06:20.070661 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:06:20.070676 kernel: rcu: RCU event tracing is enabled. Jan 30 13:06:20.070691 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:06:20.070710 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:06:20.070725 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:06:20.070739 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:06:20.070754 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 13:06:20.070769 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:06:20.070784 kernel: Using NULL legacy PIC Jan 30 13:06:20.070802 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 30 13:06:20.070817 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:06:20.070831 kernel: Console: colour dummy device 80x25 Jan 30 13:06:20.070846 kernel: printk: console [tty1] enabled Jan 30 13:06:20.070860 kernel: printk: console [ttyS0] enabled Jan 30 13:06:20.070874 kernel: printk: bootconsole [earlyser0] disabled Jan 30 13:06:20.070889 kernel: ACPI: Core revision 20230628 Jan 30 13:06:20.070903 kernel: Failed to register legacy timer interrupt Jan 30 13:06:20.070917 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:06:20.070936 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 13:06:20.070950 kernel: Hyper-V: Using IPI hypercalls Jan 30 13:06:20.070964 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 30 13:06:20.070979 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 30 13:06:20.070993 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 30 13:06:20.071008 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 30 13:06:20.071023 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 30 13:06:20.071037 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 30 13:06:20.071051 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 30 13:06:20.071070 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:06:20.071084 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:06:20.071098 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:06:20.071112 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:06:20.071126 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:06:20.071140 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:06:20.071154 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 30 13:06:20.071168 kernel: RETBleed: Vulnerable Jan 30 13:06:20.071182 kernel: Speculative Store Bypass: Vulnerable Jan 30 13:06:20.071196 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:06:20.071214 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:06:20.071229 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:06:20.071243 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:06:20.071256 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:06:20.071270 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 30 13:06:20.071285 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 30 13:06:20.071299 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 30 13:06:20.071313 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:06:20.071326 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 30 13:06:20.071340 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 30 13:06:20.071354 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 30 13:06:20.071372 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 30 13:06:20.071386 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:06:20.071420 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:06:20.071435 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:06:20.071448 kernel: landlock: Up and running. Jan 30 13:06:20.071462 kernel: SELinux: Initializing. Jan 30 13:06:20.071476 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:06:20.071490 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:06:20.071505 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 30 13:06:20.071519 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:20.071533 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:20.071553 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:06:20.071567 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 30 13:06:20.071582 kernel: signal: max sigframe size: 3632 Jan 30 13:06:20.071596 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:06:20.071611 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:06:20.071625 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:06:20.071639 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:06:20.071654 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:06:20.071668 kernel: .... node #0, CPUs: #1 Jan 30 13:06:20.071687 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 30 13:06:20.071702 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:06:20.071716 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:06:20.071731 kernel: smpboot: Max logical packages: 1 Jan 30 13:06:20.071745 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 30 13:06:20.071759 kernel: devtmpfs: initialized Jan 30 13:06:20.071773 kernel: x86/mm: Memory block size: 128MB Jan 30 13:06:20.071787 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 30 13:06:20.071805 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:06:20.071819 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:06:20.071835 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:06:20.071849 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:06:20.071862 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:06:20.071875 kernel: audit: type=2000 audit(1738242379.028:1): state=initialized audit_enabled=0 res=1 Jan 30 13:06:20.071889 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:06:20.071902 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:06:20.071915 kernel: cpuidle: using governor menu Jan 30 13:06:20.071932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:06:20.071946 kernel: dca service started, version 1.12.1 Jan 30 13:06:20.071960 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 30 13:06:20.071974 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 13:06:20.071987 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:06:20.072001 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:06:20.072015 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:06:20.072028 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:06:20.072041 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:06:20.072057 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:06:20.072071 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:06:20.072086 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:06:20.072101 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:06:20.072115 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:06:20.072130 kernel: ACPI: Interpreter enabled Jan 30 13:06:20.072145 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:06:20.072160 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:06:20.072175 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:06:20.072194 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:06:20.072209 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 30 13:06:20.072224 kernel: iommu: Default domain type: Translated Jan 30 13:06:20.072239 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:06:20.072255 kernel: efivars: Registered efivars operations Jan 30 13:06:20.072269 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:06:20.072284 kernel: PCI: System does not support PCI Jan 30 13:06:20.072298 kernel: vgaarb: loaded Jan 30 13:06:20.072313 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 30 13:06:20.072331 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:06:20.072347 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:06:20.072361 kernel: 
pnp: PnP ACPI init Jan 30 13:06:20.072375 kernel: pnp: PnP ACPI: found 3 devices Jan 30 13:06:20.072390 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:06:20.072418 kernel: NET: Registered PF_INET protocol family Jan 30 13:06:20.072432 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:06:20.072446 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:06:20.072459 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:06:20.072477 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:06:20.072490 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:06:20.072503 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:06:20.072517 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:06:20.072529 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:06:20.072543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:06:20.072556 kernel: NET: Registered PF_XDP protocol family Jan 30 13:06:20.072569 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:06:20.072583 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:06:20.072601 kernel: software IO TLB: mapped [mem 0x000000003ad8d000-0x000000003ed8d000] (64MB) Jan 30 13:06:20.072615 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:06:20.072628 kernel: Initialise system trusted keyrings Jan 30 13:06:20.072641 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:06:20.072654 kernel: Key type asymmetric registered Jan 30 13:06:20.072667 kernel: Asymmetric key parser 'x509' registered Jan 30 13:06:20.072680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:06:20.072693 kernel: io scheduler mq-deadline registered Jan 30 13:06:20.072706 kernel: io scheduler kyber registered Jan 30 13:06:20.072725 kernel: io scheduler bfq registered Jan 30 13:06:20.072740 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:06:20.072755 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:06:20.072771 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:06:20.072785 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:06:20.072801 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:06:20.072981 kernel: rtc_cmos 00:02: registered as rtc0 Jan 30 13:06:20.073102 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:06:19 UTC (1738242379) Jan 30 13:06:20.073217 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 30 13:06:20.073235 kernel: intel_pstate: CPU model not supported Jan 30 13:06:20.073249 kernel: efifb: probing for efifb Jan 30 13:06:20.073262 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 13:06:20.073276 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 13:06:20.073289 kernel: efifb: scrolling: redraw Jan 30 13:06:20.073303 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 13:06:20.073317 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:06:20.073331 kernel: fb0: EFI VGA frame buffer device Jan 30 13:06:20.073348 kernel: pstore: Using crash dump compression: deflate Jan 30 13:06:20.073362 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:06:20.073375 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:06:20.073389 kernel: Segment Routing with IPv6 Jan 30 13:06:20.073425 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:06:20.073439 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:06:20.073453 kernel: Key type dns_resolver registered Jan 30 13:06:20.073466 kernel: IPI shorthand broadcast: enabled Jan 30 13:06:20.073480 kernel: sched_clock: Marking stable (848137400, 47012400)->(1119523400, -224373600) Jan 30 13:06:20.073498 kernel: registered taskstats version 1 Jan 30 13:06:20.073511 kernel: Loading compiled-in X.509 certificates Jan 30 13:06:20.073525 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:06:20.073539 kernel: Key type .fscrypt registered Jan 30 13:06:20.073552 kernel: Key type fscrypt-provisioning registered Jan 30 13:06:20.073566 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:06:20.073580 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:06:20.073593 kernel: ima: No architecture policies found Jan 30 13:06:20.073610 kernel: clk: Disabling unused clocks Jan 30 13:06:20.073624 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:06:20.073638 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:06:20.073651 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:06:20.073665 kernel: Run /init as init process Jan 30 13:06:20.073679 kernel: with arguments: Jan 30 13:06:20.073693 kernel: /init Jan 30 13:06:20.073706 kernel: with environment: Jan 30 13:06:20.073720 kernel: HOME=/ Jan 30 13:06:20.073733 kernel: TERM=linux Jan 30 13:06:20.073750 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:06:20.073767 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:06:20.073784 systemd[1]: Detected virtualization microsoft. Jan 30 13:06:20.073799 systemd[1]: Detected architecture x86-64. Jan 30 13:06:20.073814 systemd[1]: Running in initrd. Jan 30 13:06:20.073829 systemd[1]: No hostname configured, using default hostname. Jan 30 13:06:20.073844 systemd[1]: Hostname set to . Jan 30 13:06:20.073865 systemd[1]: Initializing machine ID from random generator. 
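Once /init starts, the initrd's behaviour is steered by the flatcar.* flags on the kernel command line shown above (flatcar.oem.id=azure, flatcar.first_boot=detected, flatcar.autologin). The small helper below is purely illustrative: it parses /proc/cmdline into a dict and reports those flags.

"""Sketch: read the flatcar.* flags from the kernel command line."""

def parse_cmdline(text: str) -> dict:
    params = {}
    for token in text.split():
        key, _, value = token.partition("=")
        params[key] = value  # bare flags map to an empty string
    return params

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())
    print("OEM platform:", params.get("flatcar.oem.id", "<unset>"))
    print("first boot:  ", "flatcar.first_boot" in params)
    print("autologin:   ", "flatcar.autologin" in params)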
Jan 30 13:06:20.073880 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:06:20.073896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:06:20.073912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:06:20.073930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:06:20.073947 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:06:20.073963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:06:20.073980 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:06:20.074003 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:06:20.074020 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:06:20.074036 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:06:20.074051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:06:20.074066 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:06:20.074080 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:06:20.074095 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:06:20.074112 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:06:20.074127 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:06:20.074141 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:06:20.074156 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:06:20.074170 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:06:20.074185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:06:20.074200 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:06:20.074215 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:06:20.074230 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:06:20.074247 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:06:20.074262 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:06:20.074277 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:06:20.074292 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:06:20.074307 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:06:20.074322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:06:20.074362 systemd-journald[177]: Collecting audit messages is disabled. Jan 30 13:06:20.074412 systemd-journald[177]: Journal started Jan 30 13:06:20.074448 systemd-journald[177]: Runtime Journal (/run/log/journal/d9e34e48252445ae87043042d66b1acc) is 8.0M, max 158.8M, 150.8M free. Jan 30 13:06:20.079436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:20.088087 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:06:20.088761 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 30 13:06:20.094028 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:06:20.099694 systemd-modules-load[178]: Inserted module 'overlay' Jan 30 13:06:20.100185 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:06:20.118292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:06:20.131546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:06:20.136463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:20.141217 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:06:20.166872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:06:20.170671 kernel: Bridge firewalling registered Jan 30 13:06:20.170769 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 30 13:06:20.179601 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:06:20.184496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:06:20.185073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:06:20.185353 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:06:20.194866 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:06:20.222365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:06:20.223304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:06:20.235713 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:06:20.238800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:20.247657 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:06:20.268335 dracut-cmdline[215]: dracut-dracut-053 Jan 30 13:06:20.271785 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:06:20.298708 systemd-resolved[212]: Positive Trust Anchors: Jan 30 13:06:20.298727 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:06:20.298789 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:06:20.307661 systemd-resolved[212]: Defaulting to hostname 'linux'. 
Jan 30 13:06:20.309061 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:06:20.327289 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:06:20.356414 kernel: SCSI subsystem initialized Jan 30 13:06:20.366413 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:06:20.377418 kernel: iscsi: registered transport (tcp) Jan 30 13:06:20.399455 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:06:20.399559 kernel: QLogic iSCSI HBA Driver Jan 30 13:06:20.435336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:06:20.446530 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:06:20.477307 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:06:20.477416 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:06:20.480474 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:06:20.520420 kernel: raid6: avx512x4 gen() 18190 MB/s Jan 30 13:06:20.541412 kernel: raid6: avx512x2 gen() 18260 MB/s Jan 30 13:06:20.560407 kernel: raid6: avx512x1 gen() 18105 MB/s Jan 30 13:06:20.578424 kernel: raid6: avx2x4 gen() 18095 MB/s Jan 30 13:06:20.597410 kernel: raid6: avx2x2 gen() 18184 MB/s Jan 30 13:06:20.617271 kernel: raid6: avx2x1 gen() 14050 MB/s Jan 30 13:06:20.617326 kernel: raid6: using algorithm avx512x2 gen() 18260 MB/s Jan 30 13:06:20.638960 kernel: raid6: .... xor() 29951 MB/s, rmw enabled Jan 30 13:06:20.638994 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:06:20.661422 kernel: xor: automatically using best checksumming function avx Jan 30 13:06:20.801423 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:06:20.811702 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:06:20.821557 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:06:20.835528 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jan 30 13:06:20.841999 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:06:20.859523 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:06:20.872969 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Jan 30 13:06:20.899520 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:06:20.907628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:06:20.947779 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:06:20.961593 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:06:20.993928 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:06:21.001835 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:06:21.008027 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:06:21.014318 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:06:21.028221 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:06:21.038244 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:06:21.038277 kernel: hv_vmbus: Vmbus version:5.2 Jan 30 13:06:21.061335 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 13:06:21.075848 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 13:06:21.082447 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:06:21.083216 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:06:21.083671 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:21.092271 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:06:21.115191 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 30 13:06:21.115226 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 13:06:21.095196 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:06:21.133383 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:06:21.133428 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 30 13:06:21.133453 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:06:21.133469 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 13:06:21.095430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:21.138446 kernel: PTP clock support registered Jan 30 13:06:21.107574 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:21.122915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:21.154951 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 13:06:21.155000 kernel: hv_vmbus: registering driver hv_utils Jan 30 13:06:21.158313 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 13:06:21.158369 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 13:06:22.371565 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 13:06:22.372167 systemd-resolved[212]: Clock change detected. Flushing caches. Jan 30 13:06:22.386486 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:06:22.394205 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 13:06:22.394242 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 13:06:22.396830 kernel: AES CTR mode by8 optimization enabled Jan 30 13:06:22.398697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:22.411953 kernel: scsi host1: storvsc_host_t Jan 30 13:06:22.412010 kernel: scsi host0: storvsc_host_t Jan 30 13:06:22.408605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 13:06:22.419572 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 13:06:22.425529 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 13:06:22.464827 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 30 13:06:22.476307 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:06:22.476334 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 13:06:22.488654 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:06:22.488852 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:06:22.489027 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 30 13:06:22.489205 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 13:06:22.489381 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 13:06:22.489573 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:22.489596 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:06:22.465951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:22.622295 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: VF slot 1 added Jan 30 13:06:22.630892 kernel: hv_vmbus: registering driver hv_pci Jan 30 13:06:22.635485 kernel: hv_pci 93732baa-e4ed-4842-b14d-7e03edcff23e: PCI VMBus probing: Using version 0x10004 Jan 30 13:06:22.674965 kernel: hv_pci 93732baa-e4ed-4842-b14d-7e03edcff23e: PCI host bridge to bus e4ed:00 Jan 30 13:06:22.675142 kernel: pci_bus e4ed:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 30 13:06:22.675355 kernel: pci_bus e4ed:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 13:06:22.675534 kernel: pci e4ed:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 30 13:06:22.675730 kernel: pci e4ed:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:06:22.675910 kernel: pci e4ed:00:02.0: enabling Extended Tags Jan 30 13:06:22.676080 kernel: pci e4ed:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e4ed:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 30 13:06:22.676253 kernel: pci_bus e4ed:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 13:06:22.676421 kernel: pci e4ed:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:06:22.836376 kernel: mlx5_core e4ed:00:02.0: enabling device (0000 -> 0002) Jan 30 13:06:23.059637 kernel: mlx5_core e4ed:00:02.0: firmware version: 14.30.5000 Jan 30 13:06:23.059862 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: VF registering: eth1 Jan 30 13:06:23.060023 kernel: mlx5_core e4ed:00:02.0 eth1: joined to eth0 Jan 30 13:06:23.060207 kernel: mlx5_core e4ed:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:06:23.065507 kernel: mlx5_core e4ed:00:02.0 enP58605s1: renamed from eth1 Jan 30 13:06:23.076492 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 30 13:06:23.109491 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (446) Jan 30 13:06:23.125149 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:06:23.162796 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jan 30 13:06:23.173496 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (448) Jan 30 13:06:23.187798 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 13:06:23.190886 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 13:06:23.210659 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:06:23.226194 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:23.234495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:24.241496 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:06:24.245521 disk-uuid[605]: The operation has completed successfully. Jan 30 13:06:24.325131 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:06:24.325243 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:06:24.349615 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:06:24.354225 sh[691]: Success Jan 30 13:06:24.393603 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:06:24.633156 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:06:24.651396 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:06:24.655803 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:06:24.688236 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:06:24.688303 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:24.691687 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:06:24.694539 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:06:24.696918 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:06:25.035154 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:06:25.038232 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:06:25.049733 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:06:25.056249 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:06:25.082495 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:25.082545 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:25.082558 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:06:25.104496 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:06:25.117766 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:25.117714 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:06:25.129194 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:06:25.140685 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:06:25.150823 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:06:25.163620 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 13:06:25.190673 systemd-networkd[875]: lo: Link UP Jan 30 13:06:25.190681 systemd-networkd[875]: lo: Gained carrier Jan 30 13:06:25.192817 systemd-networkd[875]: Enumeration completed Jan 30 13:06:25.192902 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:06:25.193895 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:06:25.193899 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:06:25.195654 systemd[1]: Reached target network.target - Network. Jan 30 13:06:25.261494 kernel: mlx5_core e4ed:00:02.0 enP58605s1: Link up Jan 30 13:06:25.292170 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: Data path switched to VF: enP58605s1 Jan 30 13:06:25.291625 systemd-networkd[875]: enP58605s1: Link UP Jan 30 13:06:25.291776 systemd-networkd[875]: eth0: Link UP Jan 30 13:06:25.292023 systemd-networkd[875]: eth0: Gained carrier Jan 30 13:06:25.292040 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:06:25.297819 systemd-networkd[875]: enP58605s1: Gained carrier Jan 30 13:06:25.331527 systemd-networkd[875]: eth0: DHCPv4 address 10.200.4.23/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:06:26.065722 ignition[864]: Ignition 2.20.0 Jan 30 13:06:26.065735 ignition[864]: Stage: fetch-offline Jan 30 13:06:26.067252 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:06:26.065781 ignition[864]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:26.065791 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:26.065898 ignition[864]: parsed url from cmdline: "" Jan 30 13:06:26.065903 ignition[864]: no config URL provided Jan 30 13:06:26.065909 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:06:26.065919 ignition[864]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:06:26.065926 ignition[864]: failed to fetch config: resource requires networking Jan 30 13:06:26.066355 ignition[864]: Ignition finished successfully Jan 30 13:06:26.093693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 13:06:26.108818 ignition[884]: Ignition 2.20.0 Jan 30 13:06:26.108830 ignition[884]: Stage: fetch Jan 30 13:06:26.109055 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:26.109069 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:26.109165 ignition[884]: parsed url from cmdline: "" Jan 30 13:06:26.109169 ignition[884]: no config URL provided Jan 30 13:06:26.109173 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:06:26.109179 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:06:26.109208 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 13:06:26.197215 ignition[884]: GET result: OK Jan 30 13:06:26.197323 ignition[884]: config has been read from IMDS userdata Jan 30 13:06:26.197359 ignition[884]: parsing config with SHA512: 8ffe7a31993deed0633e344708d6f5a3103ff9858b530e05853ab37e5ab4b492848b6987d903985cd10f93b8b5291dc21ca65a4491b008a494247096c8a03e11 Jan 30 13:06:26.202218 unknown[884]: fetched base config from "system" Jan 30 13:06:26.202428 unknown[884]: fetched base config from "system" Jan 30 13:06:26.202820 ignition[884]: fetch: fetch complete Jan 30 13:06:26.202435 unknown[884]: fetched user config from "azure" Jan 30 13:06:26.202827 ignition[884]: fetch: fetch passed Jan 30 13:06:26.204401 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:06:26.202876 ignition[884]: Ignition finished successfully Jan 30 13:06:26.218769 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:06:26.232714 ignition[890]: Ignition 2.20.0 Jan 30 13:06:26.232742 ignition[890]: Stage: kargs Jan 30 13:06:26.232984 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:26.232998 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:26.239338 ignition[890]: kargs: kargs passed Jan 30 13:06:26.239397 ignition[890]: Ignition finished successfully Jan 30 13:06:26.243401 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:06:26.253663 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:06:26.267020 ignition[896]: Ignition 2.20.0 Jan 30 13:06:26.267032 ignition[896]: Stage: disks Jan 30 13:06:26.267283 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:26.267297 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:26.273513 ignition[896]: disks: disks passed Jan 30 13:06:26.273565 ignition[896]: Ignition finished successfully Jan 30 13:06:26.278525 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:06:26.279160 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:06:26.280035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:06:26.280219 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:06:26.280618 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:06:26.280999 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:06:26.303780 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:06:26.413022 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 13:06:26.418282 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
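The fetch stage above pulls the Ignition config from the Azure Instance Metadata Service URL recorded by ignition[884] and logs a SHA512 of it before parsing. A minimal sketch of the same request follows, assuming only that the standard IMDS "Metadata: true" header is required and that userData comes back base64-encoded; it is a debugging illustration, not Ignition's own fetch code.

import base64
import hashlib
import urllib.request

# The exact endpoint logged by ignition[884] in the fetch stage above.
IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/userData"
            "?api-version=2021-01-01&format=text")

req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    raw = resp.read()

# Azure IMDS returns userData base64-encoded; decode it to recover the
# config text that Ignition parsed.
config = base64.b64decode(raw)
print(config.decode("utf-8", errors="replace"))

# A SHA512 digest of the config, comparable to the "parsing config with
# SHA512: ..." line in the log above.
print(hashlib.sha512(config).hexdigest())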
Jan 30 13:06:26.429380 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:06:26.522484 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:06:26.522965 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:06:26.527719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:06:26.577594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:06:26.582921 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:06:26.594536 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915) Jan 30 13:06:26.593681 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:06:26.606595 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:26.596848 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:06:26.616126 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:26.616157 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:06:26.596889 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:06:26.617799 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:06:26.626493 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:06:26.628383 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:06:26.634501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:06:26.909633 systemd-networkd[875]: enP58605s1: Gained IPv6LL Jan 30 13:06:27.101618 systemd-networkd[875]: eth0: Gained IPv6LL Jan 30 13:06:27.251877 coreos-metadata[917]: Jan 30 13:06:27.251 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:06:27.258245 coreos-metadata[917]: Jan 30 13:06:27.258 INFO Fetch successful Jan 30 13:06:27.261010 coreos-metadata[917]: Jan 30 13:06:27.258 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:06:27.270584 coreos-metadata[917]: Jan 30 13:06:27.270 INFO Fetch successful Jan 30 13:06:27.288339 coreos-metadata[917]: Jan 30 13:06:27.286 INFO wrote hostname ci-4186.1.0-a-065ab1add7 to /sysroot/etc/hostname Jan 30 13:06:27.290932 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:06:27.295305 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:06:27.318204 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:06:27.325899 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:06:27.330596 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:06:28.129556 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:06:28.137597 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:06:28.145625 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:06:28.151522 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:28.154891 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 30 13:06:28.179278 ignition[1033]: INFO : Ignition 2.20.0 Jan 30 13:06:28.179278 ignition[1033]: INFO : Stage: mount Jan 30 13:06:28.184098 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:28.184098 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:28.184098 ignition[1033]: INFO : mount: mount passed Jan 30 13:06:28.184098 ignition[1033]: INFO : Ignition finished successfully Jan 30 13:06:28.182500 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:06:28.199654 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:06:28.204803 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:06:28.224724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:06:28.239487 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045) Jan 30 13:06:28.243482 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:06:28.243526 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:06:28.248149 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:06:28.253752 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:06:28.255164 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:06:28.277817 ignition[1061]: INFO : Ignition 2.20.0 Jan 30 13:06:28.277817 ignition[1061]: INFO : Stage: files Jan 30 13:06:28.282279 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:28.282279 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:28.282279 ignition[1061]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:06:28.282279 ignition[1061]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:06:28.282279 ignition[1061]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:06:28.400583 ignition[1061]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:06:28.404551 ignition[1061]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:06:28.404551 ignition[1061]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:06:28.401097 unknown[1061]: wrote ssh authorized keys file for user: core Jan 30 13:06:28.430872 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:06:28.439125 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:06:28.460255 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:06:28.733856 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:06:28.733856 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:06:28.744053 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:06:29.308123 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:06:29.517813 
ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:06:29.522838 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:06:29.557829 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:06:29.562152 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:06:30.178447 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:06:31.101755 ignition[1061]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:06:31.101755 ignition[1061]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:06:31.157241 ignition[1061]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: op(e): 
[finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:06:31.163557 ignition[1061]: INFO : files: files passed Jan 30 13:06:31.163557 ignition[1061]: INFO : Ignition finished successfully Jan 30 13:06:31.159585 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:06:31.180710 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:06:31.198694 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:06:31.206284 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:06:31.206403 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:06:31.229453 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:06:31.229453 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:06:31.237591 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:06:31.234068 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:06:31.238257 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:06:31.256139 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:06:31.284056 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:06:31.284179 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:06:31.290064 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:06:31.297804 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:06:31.302730 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:06:31.307703 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:06:31.320277 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:06:31.330689 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:06:31.343695 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:06:31.349629 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:06:31.357424 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:06:31.362038 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:06:31.362217 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:06:31.370284 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:06:31.375232 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:06:31.379612 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:06:31.382425 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:06:31.391022 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 30 13:06:31.391231 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:06:31.391601 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:06:31.407529 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:06:31.410379 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:06:31.417441 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:06:31.420243 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:06:31.420371 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:06:31.425295 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:06:31.435014 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:06:31.438963 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:06:31.444137 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:06:31.450194 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:06:31.450371 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:06:31.455947 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:06:31.456102 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:06:31.468267 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:06:31.468413 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:06:31.473549 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:06:31.473695 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:06:31.489955 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:06:31.495811 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:06:31.497930 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:06:31.498139 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:06:31.501517 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:06:31.501671 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:06:31.518310 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:06:31.518419 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:06:31.531540 ignition[1114]: INFO : Ignition 2.20.0 Jan 30 13:06:31.531540 ignition[1114]: INFO : Stage: umount Jan 30 13:06:31.538129 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:06:31.538129 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:06:31.538129 ignition[1114]: INFO : umount: umount passed Jan 30 13:06:31.538129 ignition[1114]: INFO : Ignition finished successfully Jan 30 13:06:31.534260 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:06:31.534366 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:06:31.538613 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:06:31.538713 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:06:31.560337 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:06:31.560415 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 30 13:06:31.568858 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:06:31.568931 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:06:31.574449 systemd[1]: Stopped target network.target - Network. Jan 30 13:06:31.574915 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:06:31.574970 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:06:31.575359 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:06:31.575741 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:06:31.585963 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:06:31.588893 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:06:31.595118 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:06:31.612227 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:06:31.612312 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:06:31.631102 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:06:31.631165 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:06:31.639586 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:06:31.639663 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:06:31.645818 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:06:31.645883 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:06:31.655992 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:06:31.656180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:06:31.657718 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:06:31.658276 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:06:31.658427 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:06:31.659733 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:06:31.659816 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:06:31.672720 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:06:31.672828 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:06:31.675532 systemd-networkd[875]: eth0: DHCPv6 lease lost Jan 30 13:06:31.680228 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:06:31.680333 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:06:31.688278 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:06:31.688343 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:06:31.714579 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:06:31.717327 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:06:31.717405 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:06:31.721932 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:06:31.721982 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:06:31.729359 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:06:31.732543 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 30 13:06:31.735954 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:06:31.736001 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:06:31.736280 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:06:31.765122 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:06:31.765298 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:06:31.770964 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:06:31.771005 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:06:31.774931 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:06:31.774966 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:06:31.775415 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:06:31.775455 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:06:31.779197 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:06:31.779239 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:06:31.815010 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: Data path switched from VF: enP58605s1 Jan 30 13:06:31.780035 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:06:31.780072 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:06:31.817628 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:06:31.820482 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:06:31.820555 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:06:31.827311 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:06:31.827390 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:06:31.833415 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:06:31.833490 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:06:31.852957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:06:31.853036 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:31.861942 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:06:31.862061 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:06:31.871951 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:06:31.872082 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:06:31.877515 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:06:31.891689 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:06:31.903135 systemd[1]: Switching root. Jan 30 13:06:31.992907 systemd-journald[177]: Journal stopped Jan 30 13:06:37.406319 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Jan 30 13:06:37.406351 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:06:37.406365 kernel: SELinux: policy capability open_perms=1 Jan 30 13:06:37.406376 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:06:37.406384 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:06:37.406395 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:06:37.406404 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:06:37.406417 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:06:37.406426 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:06:37.406437 kernel: audit: type=1403 audit(1738242394.217:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:06:37.406447 systemd[1]: Successfully loaded SELinux policy in 171.373ms. Jan 30 13:06:37.406459 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.018ms. Jan 30 13:06:37.406481 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:06:37.406491 systemd[1]: Detected virtualization microsoft. Jan 30 13:06:37.406506 systemd[1]: Detected architecture x86-64. Jan 30 13:06:37.406517 systemd[1]: Detected first boot. Jan 30 13:06:37.406529 systemd[1]: Hostname set to . Jan 30 13:06:37.406539 systemd[1]: Initializing machine ID from random generator. Jan 30 13:06:37.406551 zram_generator::config[1157]: No configuration found. Jan 30 13:06:37.406566 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:06:37.406576 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:06:37.406587 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:06:37.406597 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:06:37.406610 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:06:37.406621 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:06:37.406634 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:06:37.406647 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:06:37.406659 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:06:37.406670 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:06:37.406682 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:06:37.406692 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:06:37.406704 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:06:37.406714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:06:37.406727 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:06:37.406741 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:06:37.406753 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 30 13:06:37.406765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:06:37.406775 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:06:37.406787 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:06:37.406797 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:06:37.406812 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:06:37.406823 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:06:37.406838 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:06:37.406850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:06:37.406861 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:06:37.406875 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:06:37.406886 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:06:37.406898 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:06:37.406908 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:06:37.406921 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:06:37.406935 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:06:37.406948 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:06:37.406960 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:06:37.406971 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:06:37.406986 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:06:37.406996 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:06:37.407010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:37.407020 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:06:37.407033 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:06:37.407043 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:06:37.407057 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:06:37.407067 systemd[1]: Reached target machines.target - Containers. Jan 30 13:06:37.407082 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:06:37.407094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:06:37.407105 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:06:37.407116 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:06:37.407130 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:06:37.407143 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:06:37.407154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:06:37.407166 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 30 13:06:37.407179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:06:37.407192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:06:37.407205 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:06:37.407216 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:06:37.407230 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:06:37.407241 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:06:37.407253 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:06:37.407264 kernel: fuse: init (API version 7.39) Jan 30 13:06:37.407275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:06:37.407290 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:06:37.407301 kernel: loop: module loaded Jan 30 13:06:37.407313 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:06:37.407323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:06:37.407336 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:06:37.407350 systemd[1]: Stopped verity-setup.service. Jan 30 13:06:37.407362 kernel: ACPI: bus type drm_connector registered Jan 30 13:06:37.407378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:37.407404 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:06:37.407429 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:06:37.407449 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:06:37.407485 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:06:37.407539 systemd-journald[1249]: Collecting audit messages is disabled. Jan 30 13:06:37.407583 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:06:37.407603 systemd-journald[1249]: Journal started Jan 30 13:06:37.407645 systemd-journald[1249]: Runtime Journal (/run/log/journal/07d7c02318e448a68ab9eeaf426c2eff) is 8.0M, max 158.8M, 150.8M free. Jan 30 13:06:36.693604 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:06:36.815565 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:06:36.815926 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:06:37.415486 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:06:37.417990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:06:37.420661 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:06:37.423791 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:06:37.427100 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:06:37.427265 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:06:37.430514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:06:37.430676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:06:37.433738 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 30 13:06:37.433892 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:06:37.436673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:06:37.437924 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:06:37.441755 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:06:37.441951 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:06:37.445143 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:06:37.445289 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:06:37.448142 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:06:37.451116 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:06:37.454398 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:06:37.473850 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:06:37.484565 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:06:37.497572 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:06:37.500604 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:06:37.500660 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:06:37.507244 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:06:37.518632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:06:37.527598 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:06:37.530575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:06:37.551670 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:06:37.555813 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:06:37.558991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:06:37.560973 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:06:37.563855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:06:37.572217 systemd-journald[1249]: Time spent on flushing to /var/log/journal/07d7c02318e448a68ab9eeaf426c2eff is 31.044ms for 954 entries. Jan 30 13:06:37.572217 systemd-journald[1249]: System Journal (/var/log/journal/07d7c02318e448a68ab9eeaf426c2eff) is 8.0M, max 2.6G, 2.6G free. Jan 30 13:06:37.622919 systemd-journald[1249]: Received client request to flush runtime journal. Jan 30 13:06:37.576255 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:06:37.584280 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:06:37.598646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:06:37.608530 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 30 13:06:37.612076 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:06:37.617008 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:06:37.620793 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:06:37.626889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:06:37.636689 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:06:37.640239 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:06:37.647596 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:06:37.663664 kernel: loop0: detected capacity change from 0 to 205544 Jan 30 13:06:37.664731 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:06:37.673345 udevadm[1303]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:06:37.679115 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:06:37.708743 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:06:37.714713 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Jan 30 13:06:37.714737 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Jan 30 13:06:37.720125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:06:37.727713 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:06:37.737778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:06:37.738533 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:06:37.760543 kernel: loop1: detected capacity change from 0 to 138184 Jan 30 13:06:37.882518 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:06:37.894786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:06:37.910879 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jan 30 13:06:37.910902 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jan 30 13:06:37.914788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:06:38.357498 kernel: loop2: detected capacity change from 0 to 141000 Jan 30 13:06:38.787806 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:06:38.797083 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:06:38.812576 kernel: loop3: detected capacity change from 0 to 28304 Jan 30 13:06:38.826462 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Jan 30 13:06:39.098231 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:06:39.114704 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:06:39.160751 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:06:39.204790 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 30 13:06:39.243494 kernel: loop4: detected capacity change from 0 to 205544 Jan 30 13:06:39.270503 kernel: loop5: detected capacity change from 0 to 138184 Jan 30 13:06:39.284593 kernel: hv_vmbus: registering driver hv_balloon Jan 30 13:06:39.289170 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 30 13:06:39.293281 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:06:39.308573 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:06:39.317607 kernel: loop6: detected capacity change from 0 to 141000 Jan 30 13:06:39.337628 kernel: hv_vmbus: registering driver hyperv_fb Jan 30 13:06:39.348970 kernel: loop7: detected capacity change from 0 to 28304 Jan 30 13:06:39.357512 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 30 13:06:39.361487 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 30 13:06:39.366940 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:06:39.372637 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:06:39.384273 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 30 13:06:39.385983 (sd-merge)[1348]: Merged extensions into '/usr'. Jan 30 13:06:39.394551 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:06:39.394569 systemd[1]: Reloading... Jan 30 13:06:39.591787 zram_generator::config[1406]: No configuration found. Jan 30 13:06:39.611525 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1339) Jan 30 13:06:39.630113 systemd-networkd[1330]: lo: Link UP Jan 30 13:06:39.630124 systemd-networkd[1330]: lo: Gained carrier Jan 30 13:06:39.644634 systemd-networkd[1330]: Enumeration completed Jan 30 13:06:39.645066 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:06:39.645070 systemd-networkd[1330]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:06:39.720485 kernel: mlx5_core e4ed:00:02.0 enP58605s1: Link up Jan 30 13:06:39.746493 kernel: hv_netvsc 6045bd10-b80f-6045-bd10-b80f6045bd10 eth0: Data path switched to VF: enP58605s1 Jan 30 13:06:39.749589 systemd-networkd[1330]: enP58605s1: Link UP Jan 30 13:06:39.749906 systemd-networkd[1330]: eth0: Link UP Jan 30 13:06:39.750097 systemd-networkd[1330]: eth0: Gained carrier Jan 30 13:06:39.750560 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:06:39.782224 systemd-networkd[1330]: enP58605s1: Gained carrier Jan 30 13:06:39.816810 systemd-networkd[1330]: eth0: DHCPv4 address 10.200.4.23/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:06:39.895499 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 30 13:06:39.985625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:06:40.067565 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:06:40.071590 systemd[1]: Reloading finished in 676 ms. Jan 30 13:06:40.098901 systemd[1]: Started systemd-networkd.service - Network Configuration. 
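The (sd-merge) lines above show systemd-sysext layering the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' images over /usr. A rough sketch for listing the extension images a host offers; the search directories are the conventional sysext locations and may differ by systemd version, so treat them as an assumption:

    import os

    # Conventional systemd-sysext search directories (assumption; consult
    # systemd-sysext(8) for the authoritative list on a given version).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    images = []
    for d in SEARCH_DIRS:
        if os.path.isdir(d):
            images += sorted(os.path.join(d, name) for name in os.listdir(d))

    print("sysext images found:", images or "none")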
Jan 30 13:06:40.102445 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:06:40.139762 systemd[1]: Starting ensure-sysext.service... Jan 30 13:06:40.146423 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:06:40.152807 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:06:40.157980 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:06:40.167528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:06:40.176327 systemd[1]: Reloading requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:06:40.176342 systemd[1]: Reloading... Jan 30 13:06:40.188688 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:06:40.189454 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:06:40.190859 systemd-tmpfiles[1517]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:06:40.191393 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Jan 30 13:06:40.191500 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Jan 30 13:06:40.213007 systemd-tmpfiles[1517]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:06:40.213022 systemd-tmpfiles[1517]: Skipping /boot Jan 30 13:06:40.225703 systemd-tmpfiles[1517]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:06:40.225838 systemd-tmpfiles[1517]: Skipping /boot Jan 30 13:06:40.286752 zram_generator::config[1554]: No configuration found. Jan 30 13:06:40.408413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:06:40.496812 systemd[1]: Reloading finished in 319 ms. Jan 30 13:06:40.522397 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:06:40.531933 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:06:40.539771 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:06:40.543254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:06:40.559718 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:06:40.583778 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:06:40.587906 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:06:40.595145 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:06:40.604579 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:06:40.614801 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:06:40.624907 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:40.625182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
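The "Duplicate line for path ..." warnings above come from systemd-tmpfiles seeing the same path declared in more than one fragment. A simplified scan in the same spirit (it only looks at /usr/lib/tmpfiles.d and ignores the /etc and /run override precedence the real tool applies):

    import collections, glob

    owners = collections.defaultdict(list)
    for frag in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
        with open(frag) as f:
            for lineno, raw in enumerate(f, 1):
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    owners[fields[1]].append(f"{frag}:{lineno}")

    for path, places in owners.items():
        if len(places) > 1:
            print(f"duplicate entries for {path}: {', '.join(places)}")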
Jan 30 13:06:40.631820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:06:40.639780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:06:40.649635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:06:40.652333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:06:40.652531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:40.654066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:06:40.655436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:06:40.662933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:06:40.664559 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:06:40.669134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:06:40.683521 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:40.683812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:06:40.690487 lvm[1622]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:06:40.690806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:06:40.709272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:06:40.712421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:06:40.712615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:40.713734 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:06:40.717500 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:06:40.717671 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:06:40.721283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:06:40.721448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:06:40.731730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:40.732311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:06:40.744514 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:06:40.756765 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:06:40.762229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:06:40.765654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:06:40.766116 systemd[1]: Reached target time-set.target - System Time Set. 
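The modprobe@dm_mod/efi_pstore/loop units above only ensure the named modules are available. A quick userspace confirmation is to look for the /sys/module entry (loaded modules always appear there; built-ins only when they expose parameters):

    import os

    for mod in ("dm_mod", "efi_pstore", "loop", "fuse", "drm"):
        present = os.path.isdir(f"/sys/module/{mod}")
        print(f"{mod}: {'present' if present else 'not listed'} in /sys/module")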
Jan 30 13:06:40.769225 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:06:40.771028 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:06:40.774921 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:06:40.778825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:06:40.778993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:06:40.782104 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:06:40.782263 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:06:40.785356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:06:40.785553 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:06:40.790619 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:06:40.790793 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:06:40.803430 augenrules[1658]: No rules Jan 30 13:06:40.804873 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:06:40.805541 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:06:40.805740 systemd-resolved[1624]: Positive Trust Anchors: Jan 30 13:06:40.805968 systemd-resolved[1624]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:06:40.806045 systemd-resolved[1624]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:06:40.809321 systemd[1]: Finished ensure-sysext.service. Jan 30 13:06:40.813840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:06:40.824638 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:06:40.827637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:06:40.827710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:06:40.829027 lvm[1671]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:06:40.862560 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:06:40.868753 systemd-resolved[1624]: Using system hostname 'ci-4186.1.0-a-065ab1add7'. Jan 30 13:06:40.870439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:06:40.873539 systemd[1]: Reached target network.target - Network. Jan 30 13:06:40.875634 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:06:41.373767 systemd-networkd[1330]: eth0: Gained IPv6LL Jan 30 13:06:41.377299 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
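The positive trust anchor systemd-resolved prints above is the DNSSEC root DS record. Splitting it into its RFC 4034 fields makes the values readable (key tag 20326 is the 2017 root KSK, algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256):

    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = anchor.split()
    print({
        "owner": owner,                   # "." = DNS root zone
        "key_tag": int(key_tag),          # 20326: the 2017 root KSK
        "algorithm": int(algorithm),      # 8: RSA/SHA-256
        "digest_type": int(digest_type),  # 2: SHA-256
        "digest": digest,
    })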
Jan 30 13:06:41.381018 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:06:41.559920 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:06:41.563574 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:06:41.565636 systemd-networkd[1330]: enP58605s1: Gained IPv6LL Jan 30 13:06:43.519432 ldconfig[1288]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:06:43.533179 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:06:43.540774 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:06:43.567714 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:06:43.571102 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:06:43.574184 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:06:43.577278 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:06:43.580732 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:06:43.583638 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:06:43.586730 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:06:43.589804 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:06:43.589841 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:06:43.592094 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:06:43.595148 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:06:43.599189 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:06:43.611263 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:06:43.614421 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:06:43.617160 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:06:43.619522 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:06:43.622006 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:06:43.622039 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:06:43.644601 systemd[1]: Starting chronyd.service - NTP client/server... Jan 30 13:06:43.650616 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:06:43.660639 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:06:43.676040 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:06:43.688570 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:06:43.694655 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:06:43.697361 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
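The ldconfig complaint above ("/usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes") appears to be benign: ld.so.conf is plain text sitting in a directory ldconfig scans, so it simply gets skipped. The magic-byte test it refers to reduces to:

    # An ELF object starts with the four bytes 0x7f 'E' 'L' 'F'.
    with open("/usr/lib/ld.so.conf", "rb") as f:
        magic = f.read(4)

    print("first bytes:", magic, "-> ELF?", magic == b"\x7fELF")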
Jan 30 13:06:43.697422 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 30 13:06:43.701501 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 30 13:06:43.701902 (chronyd)[1680]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 30 13:06:43.704978 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 30 13:06:43.710644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:06:43.711048 jq[1687]: false Jan 30 13:06:43.715756 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:06:43.721204 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:06:43.725213 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:06:43.732311 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:06:43.737606 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:06:43.750655 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:06:43.754284 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:06:43.754902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:06:43.755625 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:06:43.759233 KVP[1689]: KVP starting; pid is:1689 Jan 30 13:06:43.759621 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:06:43.769888 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:06:43.770515 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:06:43.776911 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:06:43.777158 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 30 13:06:43.777849 chronyd[1706]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 30 13:06:43.793142 KVP[1689]: KVP LIC Version: 3.1 Jan 30 13:06:43.793485 kernel: hv_utils: KVP IC version 4.0 Jan 30 13:06:43.802729 chronyd[1706]: Timezone right/UTC failed leap second check, ignoring Jan 30 13:06:43.802970 chronyd[1706]: Loaded seccomp filter (level 2) Jan 30 13:06:43.804658 extend-filesystems[1688]: Found loop4 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found loop5 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found loop6 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found loop7 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found sda Jan 30 13:06:43.808379 extend-filesystems[1688]: Found sda1 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found sda2 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found sda3 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found usr Jan 30 13:06:43.808379 extend-filesystems[1688]: Found sda4 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found sda6 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found sda7 Jan 30 13:06:43.808379 extend-filesystems[1688]: Found sda9 Jan 30 13:06:43.808379 extend-filesystems[1688]: Checking size of /dev/sda9 Jan 30 13:06:43.807932 systemd[1]: Started chronyd.service - NTP client/server. Jan 30 13:06:43.841394 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:06:43.841655 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:06:43.861683 (ntainerd)[1721]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:06:43.863888 extend-filesystems[1688]: Old size kept for /dev/sda9 Jan 30 13:06:43.863888 extend-filesystems[1688]: Found sr0 Jan 30 13:06:43.867855 jq[1701]: true Jan 30 13:06:43.863934 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:06:43.868394 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:06:43.906052 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:06:43.912784 systemd-logind[1698]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:06:43.914761 systemd-logind[1698]: New seat seat0. Jan 30 13:06:43.918019 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:06:43.926214 jq[1731]: true Jan 30 13:06:43.938028 tar[1707]: linux-amd64/helm Jan 30 13:06:43.947958 dbus-daemon[1683]: [system] SELinux support is enabled Jan 30 13:06:43.948159 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:06:43.955129 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:06:43.955169 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:06:43.961546 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:06:43.961577 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
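extend-filesystems above walks the block devices (loop4..loop7, sda1..sda9) and decides /dev/sda9 needs no resize ("Old size kept"). A rough equivalent of the enumeration step, reading partition sizes from sysfs (the size files count 512-byte sectors):

    import os

    disk = "sda"
    base = f"/sys/block/{disk}"
    if os.path.isdir(base):
        for part in sorted(p for p in os.listdir(base) if p.startswith(disk)):
            with open(os.path.join(base, part, "size")) as f:
                sectors = int(f.read())          # 512-byte sectors
            print(f"/dev/{part}: {sectors * 512 / 1e9:.2f} GB")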
Jan 30 13:06:43.993816 update_engine[1700]: I20250130 13:06:43.989877 1700 main.cc:92] Flatcar Update Engine starting Jan 30 13:06:43.997955 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:06:44.003716 update_engine[1700]: I20250130 13:06:44.003662 1700 update_check_scheduler.cc:74] Next update check in 5m12s Jan 30 13:06:44.008796 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:06:44.105305 coreos-metadata[1682]: Jan 30 13:06:44.104 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:06:44.107524 coreos-metadata[1682]: Jan 30 13:06:44.107 INFO Fetch successful Jan 30 13:06:44.107624 coreos-metadata[1682]: Jan 30 13:06:44.107 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 30 13:06:44.112941 coreos-metadata[1682]: Jan 30 13:06:44.112 INFO Fetch successful Jan 30 13:06:44.118771 coreos-metadata[1682]: Jan 30 13:06:44.115 INFO Fetching http://168.63.129.16/machine/aff1d87e-727a-4372-9c7e-cf1d9c9970c8/fd0159f3%2D6a2a%2D484e%2Dba28%2Dd0f2fd25e277.%5Fci%2D4186.1.0%2Da%2D065ab1add7?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 30 13:06:44.118771 coreos-metadata[1682]: Jan 30 13:06:44.117 INFO Fetch successful Jan 30 13:06:44.118771 coreos-metadata[1682]: Jan 30 13:06:44.118 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:06:44.138486 coreos-metadata[1682]: Jan 30 13:06:44.137 INFO Fetch successful Jan 30 13:06:44.188805 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:06:44.192791 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:06:44.216261 bash[1767]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:06:44.213174 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:06:44.223266 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:06:44.293179 sshd_keygen[1727]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:06:44.325509 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1764) Jan 30 13:06:44.350617 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:06:44.370575 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:06:44.383671 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 30 13:06:44.395661 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:06:44.396561 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:06:44.416605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:06:44.431620 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 30 13:06:44.432976 locksmithd[1745]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:06:44.452368 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:06:44.473875 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:06:44.489156 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:06:44.497135 systemd[1]: Reached target getty.target - Login Prompts. 
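The coreos-metadata fetches above hit two Azure endpoints: the WireServer at 168.63.129.16 and the instance metadata service at 169.254.169.254. The vmSize call can be reproduced directly; the URL is copied from the log, and IMDS requires the "Metadata: true" header:

    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})

    # Only works from inside an Azure VM; times out elsewhere.
    with urllib.request.urlopen(req, timeout=2) as resp:
        print("vmSize:", resp.read().decode())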
Jan 30 13:06:44.921017 tar[1707]: linux-amd64/LICENSE Jan 30 13:06:44.921462 tar[1707]: linux-amd64/README.md Jan 30 13:06:44.933834 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:06:45.248263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:06:45.261901 (kubelet)[1865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:06:45.415305 containerd[1721]: time="2025-01-30T13:06:45.414267000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:06:45.445357 containerd[1721]: time="2025-01-30T13:06:45.445123000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:06:45.447170 containerd[1721]: time="2025-01-30T13:06:45.447128100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:06:45.447391 containerd[1721]: time="2025-01-30T13:06:45.447285700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:06:45.447391 containerd[1721]: time="2025-01-30T13:06:45.447312700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:06:45.447530 containerd[1721]: time="2025-01-30T13:06:45.447515400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:06:45.447568 containerd[1721]: time="2025-01-30T13:06:45.447542800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:06:45.447650 containerd[1721]: time="2025-01-30T13:06:45.447626000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:06:45.447650 containerd[1721]: time="2025-01-30T13:06:45.447643600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:06:45.448789 containerd[1721]: time="2025-01-30T13:06:45.447879600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:06:45.448789 containerd[1721]: time="2025-01-30T13:06:45.447904600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:06:45.448789 containerd[1721]: time="2025-01-30T13:06:45.447923700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:06:45.448789 containerd[1721]: time="2025-01-30T13:06:45.447938800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:06:45.448789 containerd[1721]: time="2025-01-30T13:06:45.448033700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:06:45.448789 containerd[1721]: time="2025-01-30T13:06:45.448256900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:06:45.449209 containerd[1721]: time="2025-01-30T13:06:45.448391800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:06:45.449303 containerd[1721]: time="2025-01-30T13:06:45.449283700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:06:45.449498 containerd[1721]: time="2025-01-30T13:06:45.449458600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:06:45.450636 containerd[1721]: time="2025-01-30T13:06:45.450608100Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:06:45.462757 containerd[1721]: time="2025-01-30T13:06:45.462079900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:06:45.462757 containerd[1721]: time="2025-01-30T13:06:45.462139700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:06:45.462757 containerd[1721]: time="2025-01-30T13:06:45.462160600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:06:45.462757 containerd[1721]: time="2025-01-30T13:06:45.462183000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:06:45.462757 containerd[1721]: time="2025-01-30T13:06:45.462201800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:06:45.462757 containerd[1721]: time="2025-01-30T13:06:45.462351900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:06:45.463004 containerd[1721]: time="2025-01-30T13:06:45.462759900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:06:45.463004 containerd[1721]: time="2025-01-30T13:06:45.462885200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:06:45.463004 containerd[1721]: time="2025-01-30T13:06:45.462907600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:06:45.463004 containerd[1721]: time="2025-01-30T13:06:45.462928400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:06:45.463004 containerd[1721]: time="2025-01-30T13:06:45.462948600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:06:45.463004 containerd[1721]: time="2025-01-30T13:06:45.462976700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:06:45.463004 containerd[1721]: time="2025-01-30T13:06:45.462996900Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463016700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463037300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463056000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463073000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463089800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463116400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463135200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463153000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463171000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463187300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463219 containerd[1721]: time="2025-01-30T13:06:45.463205300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463230700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463251400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463269200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463288600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463305100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463321400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463345600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463370600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463397800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463418000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463432900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463511500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463535700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:06:45.463616 containerd[1721]: time="2025-01-30T13:06:45.463551300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:06:45.464052 containerd[1721]: time="2025-01-30T13:06:45.463627800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:06:45.464052 containerd[1721]: time="2025-01-30T13:06:45.463641700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:06:45.464052 containerd[1721]: time="2025-01-30T13:06:45.463659700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:06:45.464052 containerd[1721]: time="2025-01-30T13:06:45.463674200Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:06:45.464052 containerd[1721]: time="2025-01-30T13:06:45.463688500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:06:45.464217 containerd[1721]: time="2025-01-30T13:06:45.464064300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:06:45.464217 containerd[1721]: time="2025-01-30T13:06:45.464141800Z" level=info msg="Connect containerd service" Jan 30 13:06:45.464217 containerd[1721]: time="2025-01-30T13:06:45.464187800Z" level=info msg="using legacy CRI server" Jan 30 13:06:45.464217 containerd[1721]: time="2025-01-30T13:06:45.464199400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:06:45.464554 containerd[1721]: time="2025-01-30T13:06:45.464346800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:06:45.465801 containerd[1721]: time="2025-01-30T13:06:45.465054700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:06:45.465801 
containerd[1721]: time="2025-01-30T13:06:45.465391900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:06:45.465801 containerd[1721]: time="2025-01-30T13:06:45.465444400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:06:45.465801 containerd[1721]: time="2025-01-30T13:06:45.465504300Z" level=info msg="Start subscribing containerd event" Jan 30 13:06:45.465801 containerd[1721]: time="2025-01-30T13:06:45.465545000Z" level=info msg="Start recovering state" Jan 30 13:06:45.465801 containerd[1721]: time="2025-01-30T13:06:45.465612700Z" level=info msg="Start event monitor" Jan 30 13:06:45.465801 containerd[1721]: time="2025-01-30T13:06:45.465633000Z" level=info msg="Start snapshots syncer" Jan 30 13:06:45.465801 containerd[1721]: time="2025-01-30T13:06:45.465646400Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:06:45.465801 containerd[1721]: time="2025-01-30T13:06:45.465655700Z" level=info msg="Start streaming server" Jan 30 13:06:45.465824 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:06:45.469076 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:06:45.472678 containerd[1721]: time="2025-01-30T13:06:45.466093300Z" level=info msg="containerd successfully booted in 0.053315s" Jan 30 13:06:45.474215 systemd[1]: Startup finished in 855ms (firmware) + 27.965s (loader) + 986ms (kernel) + 13.143s (initrd) + 11.427s (userspace) = 54.377s. Jan 30 13:06:45.522657 agetty[1847]: failed to open credentials directory Jan 30 13:06:45.522657 agetty[1844]: failed to open credentials directory Jan 30 13:06:45.763528 login[1844]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:06:45.765262 login[1847]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:06:45.782084 systemd-logind[1698]: New session 2 of user core. Jan 30 13:06:45.782859 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:06:45.790818 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:06:45.801569 systemd-logind[1698]: New session 1 of user core. Jan 30 13:06:45.810761 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:06:45.819398 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:06:45.843944 (systemd)[1880]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:06:45.897650 kubelet[1865]: E0130 13:06:45.897119 1865 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:06:45.900168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:06:45.900355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:06:46.006140 systemd[1880]: Queued start job for default target default.target. Jan 30 13:06:46.012556 systemd[1880]: Created slice app.slice - User Application Slice. Jan 30 13:06:46.012594 systemd[1880]: Reached target paths.target - Paths. Jan 30 13:06:46.012612 systemd[1880]: Reached target timers.target - Timers. Jan 30 13:06:46.013816 systemd[1880]: Starting dbus.socket - D-Bus User Message Bus Socket... 
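The kubelet failure above is the usual first-boot state: /var/lib/kubelet/config.yaml does not exist until a provisioner such as kubeadm writes it, so the unit exits with an error. A pre-flight check along those lines (path copied from the error message; the check itself is just a sketch, not part of kubelet):

    import os, sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    if not os.path.isfile(KUBELET_CONFIG):
        sys.exit(f"{KUBELET_CONFIG} missing - kubelet will fail until a "
                 "provisioner (e.g. kubeadm init/join) creates it")
    if os.path.getsize(KUBELET_CONFIG) == 0:
        sys.exit(f"{KUBELET_CONFIG} exists but is empty")
    print("kubelet config present")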
Jan 30 13:06:46.029903 systemd[1880]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:06:46.030054 systemd[1880]: Reached target sockets.target - Sockets. Jan 30 13:06:46.030079 systemd[1880]: Reached target basic.target - Basic System. Jan 30 13:06:46.030192 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:06:46.032081 systemd[1880]: Reached target default.target - Main User Target. Jan 30 13:06:46.032136 systemd[1880]: Startup finished in 180ms. Jan 30 13:06:46.039621 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:06:46.040629 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:06:46.421539 waagent[1838]: 2025-01-30T13:06:46.421360Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.421810Z INFO Daemon Daemon OS: flatcar 4186.1.0 Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.422939Z INFO Daemon Daemon Python: 3.11.10 Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.423967Z INFO Daemon Daemon Run daemon Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.424429Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0' Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.425259Z INFO Daemon Daemon Using waagent for provisioning Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.426271Z INFO Daemon Daemon Activate resource disk Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.427075Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.432720Z INFO Daemon Daemon Found device: None Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.433710Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.434477Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.435660Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:06:46.459645 waagent[1838]: 2025-01-30T13:06:46.436533Z INFO Daemon Daemon Running default provisioning handler Jan 30 13:06:46.463089 waagent[1838]: 2025-01-30T13:06:46.463009Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 30 13:06:46.483004 waagent[1838]: 2025-01-30T13:06:46.470919Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 30 13:06:46.483004 waagent[1838]: 2025-01-30T13:06:46.471138Z INFO Daemon Daemon cloud-init is enabled: False Jan 30 13:06:46.483004 waagent[1838]: 2025-01-30T13:06:46.471240Z INFO Daemon Daemon Copying ovf-env.xml Jan 30 13:06:46.553967 waagent[1838]: 2025-01-30T13:06:46.553698Z INFO Daemon Daemon Successfully mounted dvd Jan 30 13:06:46.582210 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
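The "Unable to get cloud-init enabled status" lines above correspond to waagent probing for cloud-init; on Flatcar the unit does not exist, so 'systemctl is-enabled' exits non-zero (status 4 in this log) and the daemon records cloud-init as disabled. The probe reduces to roughly:

    import subprocess

    result = subprocess.run(
        ["systemctl", "is-enabled", "cloud-init-local.service"],
        capture_output=True, text=True)

    print("cloud-init-local enabled:", result.returncode == 0,
          f"(exit status {result.returncode}, output {result.stdout.strip()!r})")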
Jan 30 13:06:46.584376 waagent[1838]: 2025-01-30T13:06:46.584293Z INFO Daemon Daemon Detect protocol endpoint Jan 30 13:06:46.598484 waagent[1838]: 2025-01-30T13:06:46.584700Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:06:46.598484 waagent[1838]: 2025-01-30T13:06:46.586066Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 30 13:06:46.598484 waagent[1838]: 2025-01-30T13:06:46.586974Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 30 13:06:46.598484 waagent[1838]: 2025-01-30T13:06:46.587580Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 30 13:06:46.598484 waagent[1838]: 2025-01-30T13:06:46.588023Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 30 13:06:46.622072 waagent[1838]: 2025-01-30T13:06:46.622005Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 30 13:06:46.629444 waagent[1838]: 2025-01-30T13:06:46.622549Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 30 13:06:46.629444 waagent[1838]: 2025-01-30T13:06:46.623213Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 30 13:06:46.764121 waagent[1838]: 2025-01-30T13:06:46.764014Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 30 13:06:46.769390 waagent[1838]: 2025-01-30T13:06:46.764403Z INFO Daemon Daemon Forcing an update of the goal state. Jan 30 13:06:46.772328 waagent[1838]: 2025-01-30T13:06:46.772274Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:06:46.783681 waagent[1838]: 2025-01-30T13:06:46.783632Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162 Jan 30 13:06:46.800761 waagent[1838]: 2025-01-30T13:06:46.784262Z INFO Daemon Jan 30 13:06:46.800761 waagent[1838]: 2025-01-30T13:06:46.784843Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 97996c46-dc78-4258-85c0-9c8734fce031 eTag: 8930725191124316818 source: Fabric] Jan 30 13:06:46.800761 waagent[1838]: 2025-01-30T13:06:46.785919Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 30 13:06:46.800761 waagent[1838]: 2025-01-30T13:06:46.786953Z INFO Daemon Jan 30 13:06:46.800761 waagent[1838]: 2025-01-30T13:06:46.787302Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:06:46.800761 waagent[1838]: 2025-01-30T13:06:46.792267Z INFO Daemon Daemon Downloading artifacts profile blob Jan 30 13:06:46.870160 waagent[1838]: 2025-01-30T13:06:46.870074Z INFO Daemon Downloaded certificate {'thumbprint': 'DDD993AB62BD70B302E0B78FA22798A32A2EA15B', 'hasPrivateKey': True} Jan 30 13:06:46.875192 waagent[1838]: 2025-01-30T13:06:46.875126Z INFO Daemon Fetch goal state completed Jan 30 13:06:46.884103 waagent[1838]: 2025-01-30T13:06:46.884055Z INFO Daemon Daemon Starting provisioning Jan 30 13:06:46.891044 waagent[1838]: 2025-01-30T13:06:46.884272Z INFO Daemon Daemon Handle ovf-env.xml. Jan 30 13:06:46.891044 waagent[1838]: 2025-01-30T13:06:46.885250Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-065ab1add7] Jan 30 13:06:46.901572 waagent[1838]: 2025-01-30T13:06:46.901485Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-065ab1add7] Jan 30 13:06:46.909216 waagent[1838]: 2025-01-30T13:06:46.901959Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 30 13:06:46.909216 waagent[1838]: 2025-01-30T13:06:46.903037Z INFO Daemon Daemon Primary interface is [eth0] Jan 30 13:06:46.935983 systemd-networkd[1330]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
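The "Test for route to 168.63.129.16" step above checks /proc/net/route, where destinations and masks are stored as little-endian hex words. A small reimplementation of that check (field layout taken from the header waagent prints later in this log):

    import socket, struct

    TARGET = struct.unpack("<I", socket.inet_aton("168.63.129.16"))[0]

    def interface_with_route(target=TARGET):
        with open("/proc/net/route") as f:
            next(f)  # header: Iface Destination Gateway Flags ... Mask ...
            for entry in f:
                fields = entry.split()
                dest, mask = int(fields[1], 16), int(fields[7], 16)
                if target & mask == dest & mask:
                    return fields[0]
        return None

    print("route to 168.63.129.16 via:", interface_with_route())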
Jan 30 13:06:46.935995 systemd-networkd[1330]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:06:46.936050 systemd-networkd[1330]: eth0: DHCP lease lost Jan 30 13:06:46.937622 waagent[1838]: 2025-01-30T13:06:46.937460Z INFO Daemon Daemon Create user account if not exists Jan 30 13:06:46.947892 waagent[1838]: 2025-01-30T13:06:46.937999Z INFO Daemon Daemon User core already exists, skip useradd Jan 30 13:06:46.947892 waagent[1838]: 2025-01-30T13:06:46.938920Z INFO Daemon Daemon Configure sudoer Jan 30 13:06:46.947892 waagent[1838]: 2025-01-30T13:06:46.940036Z INFO Daemon Daemon Configure sshd Jan 30 13:06:46.947892 waagent[1838]: 2025-01-30T13:06:46.942743Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 30 13:06:46.947892 waagent[1838]: 2025-01-30T13:06:46.943156Z INFO Daemon Daemon Deploy ssh public key. Jan 30 13:06:46.955330 systemd-networkd[1330]: eth0: DHCPv6 lease lost Jan 30 13:06:46.988527 systemd-networkd[1330]: eth0: DHCPv4 address 10.200.4.23/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:06:48.075933 waagent[1838]: 2025-01-30T13:06:48.075835Z INFO Daemon Daemon Provisioning complete Jan 30 13:06:48.090001 waagent[1838]: 2025-01-30T13:06:48.089931Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 30 13:06:48.093077 waagent[1838]: 2025-01-30T13:06:48.093013Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 30 13:06:48.097792 waagent[1838]: 2025-01-30T13:06:48.097707Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 30 13:06:48.224366 waagent[1932]: 2025-01-30T13:06:48.224257Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 30 13:06:48.224763 waagent[1932]: 2025-01-30T13:06:48.224430Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0 Jan 30 13:06:48.224763 waagent[1932]: 2025-01-30T13:06:48.224543Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 30 13:06:48.261698 waagent[1932]: 2025-01-30T13:06:48.261593Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 30 13:06:48.261951 waagent[1932]: 2025-01-30T13:06:48.261891Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:06:48.262067 waagent[1932]: 2025-01-30T13:06:48.262012Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:06:48.270682 waagent[1932]: 2025-01-30T13:06:48.270610Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:06:48.276220 waagent[1932]: 2025-01-30T13:06:48.276162Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162 Jan 30 13:06:48.276687 waagent[1932]: 2025-01-30T13:06:48.276634Z INFO ExtHandler Jan 30 13:06:48.276785 waagent[1932]: 2025-01-30T13:06:48.276726Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 42c6083c-1a0b-46fd-ac5f-85921d7dd0fd eTag: 8930725191124316818 source: Fabric] Jan 30 13:06:48.277079 waagent[1932]: 2025-01-30T13:06:48.277030Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 30 13:06:48.277669 waagent[1932]: 2025-01-30T13:06:48.277613Z INFO ExtHandler Jan 30 13:06:48.277732 waagent[1932]: 2025-01-30T13:06:48.277697Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:06:48.281066 waagent[1932]: 2025-01-30T13:06:48.281020Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 30 13:06:48.341004 waagent[1932]: 2025-01-30T13:06:48.340864Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DDD993AB62BD70B302E0B78FA22798A32A2EA15B', 'hasPrivateKey': True} Jan 30 13:06:48.341496 waagent[1932]: 2025-01-30T13:06:48.341425Z INFO ExtHandler Fetch goal state completed Jan 30 13:06:48.355745 waagent[1932]: 2025-01-30T13:06:48.355677Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1932 Jan 30 13:06:48.355908 waagent[1932]: 2025-01-30T13:06:48.355857Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 30 13:06:48.357452 waagent[1932]: 2025-01-30T13:06:48.357392Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 30 13:06:48.357827 waagent[1932]: 2025-01-30T13:06:48.357776Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 30 13:06:48.417455 waagent[1932]: 2025-01-30T13:06:48.417393Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 30 13:06:48.417742 waagent[1932]: 2025-01-30T13:06:48.417684Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 30 13:06:48.425215 waagent[1932]: 2025-01-30T13:06:48.425157Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 30 13:06:48.432156 systemd[1]: Reloading requested from client PID 1945 ('systemctl') (unit waagent.service)... Jan 30 13:06:48.432172 systemd[1]: Reloading... Jan 30 13:06:48.511538 zram_generator::config[1976]: No configuration found. Jan 30 13:06:48.642540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:06:48.726852 systemd[1]: Reloading finished in 294 ms. Jan 30 13:06:48.752557 waagent[1932]: 2025-01-30T13:06:48.752196Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 30 13:06:48.761691 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit waagent.service)... Jan 30 13:06:48.761706 systemd[1]: Reloading... Jan 30 13:06:48.827492 zram_generator::config[2066]: No configuration found. Jan 30 13:06:48.968979 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:06:49.062274 systemd[1]: Reloading finished in 300 ms. Jan 30 13:06:49.088509 waagent[1932]: 2025-01-30T13:06:49.088305Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 30 13:06:49.089499 waagent[1932]: 2025-01-30T13:06:49.088698Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 30 13:06:50.348159 waagent[1932]: 2025-01-30T13:06:50.348056Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 30 13:06:50.348949 waagent[1932]: 2025-01-30T13:06:50.348874Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 30 13:06:50.349838 waagent[1932]: 2025-01-30T13:06:50.349775Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 30 13:06:50.349989 waagent[1932]: 2025-01-30T13:06:50.349930Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:06:50.350602 waagent[1932]: 2025-01-30T13:06:50.350537Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:06:50.350679 waagent[1932]: 2025-01-30T13:06:50.350626Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 13:06:50.351015 waagent[1932]: 2025-01-30T13:06:50.350927Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 30 13:06:50.351608 waagent[1932]: 2025-01-30T13:06:50.351543Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 30 13:06:50.351719 waagent[1932]: 2025-01-30T13:06:50.351617Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 30 13:06:50.351961 waagent[1932]: 2025-01-30T13:06:50.351903Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 13:06:50.351961 waagent[1932]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 13:06:50.351961 waagent[1932]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 13:06:50.351961 waagent[1932]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 13:06:50.351961 waagent[1932]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:06:50.351961 waagent[1932]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:06:50.351961 waagent[1932]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:06:50.352618 waagent[1932]: 2025-01-30T13:06:50.352359Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:06:50.352618 waagent[1932]: 2025-01-30T13:06:50.352512Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:06:50.352807 waagent[1932]: 2025-01-30T13:06:50.352739Z INFO EnvHandler ExtHandler Configure routes Jan 30 13:06:50.352915 waagent[1932]: 2025-01-30T13:06:50.352869Z INFO EnvHandler ExtHandler Gateway:None Jan 30 13:06:50.353010 waagent[1932]: 2025-01-30T13:06:50.352964Z INFO EnvHandler ExtHandler Routes:None Jan 30 13:06:50.353661 waagent[1932]: 2025-01-30T13:06:50.353596Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 13:06:50.353738 waagent[1932]: 2025-01-30T13:06:50.353668Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
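The routing table the MonitorHandler dumps above comes straight from /proc/net/route, where destination, gateway and mask are little-endian hex words. A short helper (a sketch, not part of the agent) decodes the columns logged above; they come out as the default gateway 10.200.4.1, the on-link 10.200.4.0/24 subnet, the Azure wireserver 168.63.129.16 and the instance-metadata address 169.254.169.254:

    import socket
    import struct

    def decode(hexaddr: str) -> str:
        """Convert a little-endian hex address from /proc/net/route to dotted quad."""
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    # Destination/gateway columns copied from the routing table logged above.
    for dest, gw in [("00000000", "0104C80A"),   # default route via 10.200.4.1
                     ("0004C80A", "00000000"),   # 10.200.4.0/24, on-link
                     ("10813FA8", "0104C80A"),   # 168.63.129.16 (wireserver)
                     ("FEA9FEA9", "0104C80A")]:  # 169.254.169.254 (instance metadata)
        print(decode(dest), "via", decode(gw))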
Jan 30 13:06:50.354280 waagent[1932]: 2025-01-30T13:06:50.354228Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 13:06:50.359133 waagent[1932]: 2025-01-30T13:06:50.359081Z INFO ExtHandler ExtHandler Jan 30 13:06:50.359251 waagent[1932]: 2025-01-30T13:06:50.359204Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ef828bb7-59a7-4c58-9b2d-5fa241e47cf1 correlation 7babf014-2103-4883-a653-c2125ddb39d0 created: 2025-01-30T13:05:40.357117Z] Jan 30 13:06:50.360306 waagent[1932]: 2025-01-30T13:06:50.360209Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 30 13:06:50.361578 waagent[1932]: 2025-01-30T13:06:50.361531Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 30 13:06:50.393346 waagent[1932]: 2025-01-30T13:06:50.393194Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C2416E52-BCF0-4A1F-A23C-7ADBA8B53315;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 30 13:06:50.424631 waagent[1932]: 2025-01-30T13:06:50.424538Z INFO MonitorHandler ExtHandler Network interfaces: Jan 30 13:06:50.424631 waagent[1932]: Executing ['ip', '-a', '-o', 'link']: Jan 30 13:06:50.424631 waagent[1932]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 30 13:06:50.424631 waagent[1932]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:10:b8:0f brd ff:ff:ff:ff:ff:ff Jan 30 13:06:50.424631 waagent[1932]: 3: enP58605s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:10:b8:0f brd ff:ff:ff:ff:ff:ff\ altname enP58605p0s2 Jan 30 13:06:50.424631 waagent[1932]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 30 13:06:50.424631 waagent[1932]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 30 13:06:50.424631 waagent[1932]: 2: eth0 inet 10.200.4.23/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 30 13:06:50.424631 waagent[1932]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 30 13:06:50.424631 waagent[1932]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 30 13:06:50.424631 waagent[1932]: 2: eth0 inet6 fe80::6245:bdff:fe10:b80f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:06:50.424631 waagent[1932]: 3: enP58605s1 inet6 fe80::6245:bdff:fe10:b80f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:06:50.486495 waagent[1932]: 2025-01-30T13:06:50.486407Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 30 13:06:50.486495 waagent[1932]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:06:50.486495 waagent[1932]: pkts bytes target prot opt in out source destination Jan 30 13:06:50.486495 waagent[1932]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:06:50.486495 waagent[1932]: pkts bytes target prot opt in out source destination Jan 30 13:06:50.486495 waagent[1932]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:06:50.486495 waagent[1932]: pkts bytes target prot opt in out source destination Jan 30 13:06:50.486495 waagent[1932]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:06:50.486495 waagent[1932]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:06:50.486495 waagent[1932]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:06:50.492833 waagent[1932]: 2025-01-30T13:06:50.492768Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 30 13:06:50.492833 waagent[1932]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:06:50.492833 waagent[1932]: pkts bytes target prot opt in out source destination Jan 30 13:06:50.492833 waagent[1932]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:06:50.492833 waagent[1932]: pkts bytes target prot opt in out source destination Jan 30 13:06:50.492833 waagent[1932]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:06:50.492833 waagent[1932]: pkts bytes target prot opt in out source destination Jan 30 13:06:50.492833 waagent[1932]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:06:50.492833 waagent[1932]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:06:50.492833 waagent[1932]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:06:50.493236 waagent[1932]: 2025-01-30T13:06:50.493082Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 30 13:06:55.939132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:06:55.944696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:06:56.166280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:06:56.177807 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:06:56.672206 kubelet[2166]: E0130 13:06:56.672151 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:06:56.675662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:06:56.675870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:07:06.689266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:07:06.694751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:06.797263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
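The EnvHandler output above lists the three OUTPUT rules the agent installs for the wireserver: accept TCP/53 to 168.63.129.16, accept any root-owned (UID 0) traffic to it, and drop new connections to it from anything else. The log does not show the exact commands used; a rough equivalent, sketched in Python driving iptables, is below. Rule order matters, since the DROP must come after the two ACCEPTs:

    import subprocess

    WIRESERVER = "168.63.129.16"

    # Approximate equivalents of the rules listed in the log; not the agent's own code.
    RULES = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables", "-w", *rule], check=True)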
Jan 30 13:07:06.809835 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:07:07.356229 kubelet[2181]: E0130 13:07:07.356169 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:07:07.358524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:07:07.358735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:07:07.601843 chronyd[1706]: Selected source PHC0 Jan 30 13:07:17.439214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:07:17.444752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:17.651670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:17.656394 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:07:18.001309 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:07:18.002654 systemd[1]: Started sshd@0-10.200.4.23:22-10.200.16.10:51230.service - OpenSSH per-connection server daemon (10.200.16.10:51230). Jan 30 13:07:18.165596 kubelet[2197]: E0130 13:07:18.165544 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:07:18.167745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:07:18.167953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:07:18.891243 sshd[2203]: Accepted publickey for core from 10.200.16.10 port 51230 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:07:18.892934 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:07:18.898500 systemd-logind[1698]: New session 3 of user core. Jan 30 13:07:18.907632 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:07:19.476775 systemd[1]: Started sshd@1-10.200.4.23:22-10.200.16.10:51236.service - OpenSSH per-connection server daemon (10.200.16.10:51236). Jan 30 13:07:20.118631 sshd[2210]: Accepted publickey for core from 10.200.16.10 port 51236 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:07:20.120278 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:07:20.124499 systemd-logind[1698]: New session 4 of user core. Jan 30 13:07:20.135641 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:07:20.592680 sshd[2212]: Connection closed by 10.200.16.10 port 51236 Jan 30 13:07:20.593576 sshd-session[2210]: pam_unix(sshd:session): session closed for user core Jan 30 13:07:20.597788 systemd[1]: sshd@1-10.200.4.23:22-10.200.16.10:51236.service: Deactivated successfully. Jan 30 13:07:20.599874 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:07:20.600703 systemd-logind[1698]: Session 4 logged out. 
Waiting for processes to exit. Jan 30 13:07:20.601821 systemd-logind[1698]: Removed session 4. Jan 30 13:07:20.711772 systemd[1]: Started sshd@2-10.200.4.23:22-10.200.16.10:51238.service - OpenSSH per-connection server daemon (10.200.16.10:51238). Jan 30 13:07:21.350423 sshd[2217]: Accepted publickey for core from 10.200.16.10 port 51238 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:07:21.351997 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:07:21.356365 systemd-logind[1698]: New session 5 of user core. Jan 30 13:07:21.363758 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:07:21.799182 sshd[2219]: Connection closed by 10.200.16.10 port 51238 Jan 30 13:07:21.800145 sshd-session[2217]: pam_unix(sshd:session): session closed for user core Jan 30 13:07:21.802977 systemd[1]: sshd@2-10.200.4.23:22-10.200.16.10:51238.service: Deactivated successfully. Jan 30 13:07:21.804911 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:07:21.806548 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:07:21.807411 systemd-logind[1698]: Removed session 5. Jan 30 13:07:21.918778 systemd[1]: Started sshd@3-10.200.4.23:22-10.200.16.10:51252.service - OpenSSH per-connection server daemon (10.200.16.10:51252). Jan 30 13:07:22.559394 sshd[2224]: Accepted publickey for core from 10.200.16.10 port 51252 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:07:22.562752 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:07:22.568192 systemd-logind[1698]: New session 6 of user core. Jan 30 13:07:22.577846 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:07:23.017022 sshd[2226]: Connection closed by 10.200.16.10 port 51252 Jan 30 13:07:23.017895 sshd-session[2224]: pam_unix(sshd:session): session closed for user core Jan 30 13:07:23.022236 systemd[1]: sshd@3-10.200.4.23:22-10.200.16.10:51252.service: Deactivated successfully. Jan 30 13:07:23.024427 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:07:23.025485 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:07:23.026596 systemd-logind[1698]: Removed session 6. Jan 30 13:07:23.133781 systemd[1]: Started sshd@4-10.200.4.23:22-10.200.16.10:51264.service - OpenSSH per-connection server daemon (10.200.16.10:51264). Jan 30 13:07:23.777662 sshd[2231]: Accepted publickey for core from 10.200.16.10 port 51264 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:07:23.779281 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:07:23.784580 systemd-logind[1698]: New session 7 of user core. Jan 30 13:07:23.789634 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:07:24.325768 sudo[2234]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:07:24.326219 sudo[2234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:07:24.361953 sudo[2234]: pam_unix(sudo:session): session closed for user root Jan 30 13:07:24.467568 sshd[2233]: Connection closed by 10.200.16.10 port 51264 Jan 30 13:07:24.468687 sshd-session[2231]: pam_unix(sshd:session): session closed for user core Jan 30 13:07:24.471742 systemd[1]: sshd@4-10.200.4.23:22-10.200.16.10:51264.service: Deactivated successfully. 
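Each "Accepted publickey" entry above identifies the key only by its SHA256 fingerprint. OpenSSH derives that string as the base64-encoded (padding stripped) SHA-256 digest of the raw public-key blob, so it is easy to check which key in authorized_keys the log is referring to; a sketch, assuming plain "ssh-rsa AAAA... comment" lines and the core user's usual key path:

    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        """Return the SHA256:... fingerprint sshd logs for an OpenSSH public key line."""
        blob = base64.b64decode(pubkey_line.split()[1])   # second field is the key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as f:    # path assumed for illustration
        for line in f:
            if line.strip() and not line.startswith("#"):
                print(ssh_fingerprint(line))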
Jan 30 13:07:24.473733 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:07:24.475126 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:07:24.476206 systemd-logind[1698]: Removed session 7. Jan 30 13:07:24.584792 systemd[1]: Started sshd@5-10.200.4.23:22-10.200.16.10:51268.service - OpenSSH per-connection server daemon (10.200.16.10:51268). Jan 30 13:07:25.227949 sshd[2239]: Accepted publickey for core from 10.200.16.10 port 51268 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:07:25.229689 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:07:25.235200 systemd-logind[1698]: New session 8 of user core. Jan 30 13:07:25.241898 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:07:25.581415 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:07:25.582210 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:07:25.585324 sudo[2243]: pam_unix(sudo:session): session closed for user root Jan 30 13:07:25.590086 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:07:25.590415 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:07:25.608870 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:07:25.634597 augenrules[2265]: No rules Jan 30 13:07:25.635955 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:07:25.636179 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:07:25.637607 sudo[2242]: pam_unix(sudo:session): session closed for user root Jan 30 13:07:25.742279 sshd[2241]: Connection closed by 10.200.16.10 port 51268 Jan 30 13:07:25.743160 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Jan 30 13:07:25.747656 systemd[1]: sshd@5-10.200.4.23:22-10.200.16.10:51268.service: Deactivated successfully. Jan 30 13:07:25.749693 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:07:25.750374 systemd-logind[1698]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:07:25.751309 systemd-logind[1698]: Removed session 8. Jan 30 13:07:25.859667 systemd[1]: Started sshd@6-10.200.4.23:22-10.200.16.10:34794.service - OpenSSH per-connection server daemon (10.200.16.10:34794). Jan 30 13:07:26.503180 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 34794 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:07:26.504819 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:07:26.510332 systemd-logind[1698]: New session 9 of user core. Jan 30 13:07:26.520613 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:07:26.856854 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:07:26.857303 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:07:27.440369 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 30 13:07:28.174824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 13:07:28.179951 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 13:07:28.181390 (dockerd)[2294]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:07:28.182683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:28.873648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:28.878106 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:07:28.914315 kubelet[2303]: E0130 13:07:28.914089 2303 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:07:28.916406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:07:28.916625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:07:28.919588 update_engine[1700]: I20250130 13:07:28.919540 1700 update_attempter.cc:509] Updating boot flags... Jan 30 13:07:28.976502 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2325) Jan 30 13:07:29.117541 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2327) Jan 30 13:07:30.116931 dockerd[2294]: time="2025-01-30T13:07:30.116864520Z" level=info msg="Starting up" Jan 30 13:07:30.522106 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport311670668-merged.mount: Deactivated successfully. Jan 30 13:07:30.585086 dockerd[2294]: time="2025-01-30T13:07:30.585034538Z" level=info msg="Loading containers: start." Jan 30 13:07:30.787514 kernel: Initializing XFRM netlink socket Jan 30 13:07:30.973887 systemd-networkd[1330]: docker0: Link UP Jan 30 13:07:31.020905 dockerd[2294]: time="2025-01-30T13:07:31.020863422Z" level=info msg="Loading containers: done." Jan 30 13:07:31.075697 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3899027265-merged.mount: Deactivated successfully. Jan 30 13:07:31.081522 dockerd[2294]: time="2025-01-30T13:07:31.081453102Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:07:31.081631 dockerd[2294]: time="2025-01-30T13:07:31.081598603Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:07:31.081738 dockerd[2294]: time="2025-01-30T13:07:31.081716304Z" level=info msg="Daemon has completed initialization" Jan 30 13:07:31.126878 dockerd[2294]: time="2025-01-30T13:07:31.126813632Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:07:31.128125 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:07:32.057336 containerd[1721]: time="2025-01-30T13:07:32.057295254Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:07:32.884143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146871211.mount: Deactivated successfully. 
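Once dockerd logs "API listen on /run/docker.sock" (a little further down), the Engine API answers plain HTTP over that Unix socket. A minimal stdlib probe, shown as a sketch; in practice the docker CLI or SDK would be used instead:

    import socket

    # Speak HTTP/1.0 over the Unix socket the daemon reports it is listening on.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)

    print(b"".join(chunks).decode(errors="replace"))  # headers plus JSON version payload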
Jan 30 13:07:34.362908 containerd[1721]: time="2025-01-30T13:07:34.362849572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:34.365729 containerd[1721]: time="2025-01-30T13:07:34.365496592Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729" Jan 30 13:07:34.368805 containerd[1721]: time="2025-01-30T13:07:34.368397314Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:34.374960 containerd[1721]: time="2025-01-30T13:07:34.374923964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:34.375950 containerd[1721]: time="2025-01-30T13:07:34.375917972Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.318584018s" Jan 30 13:07:34.376073 containerd[1721]: time="2025-01-30T13:07:34.376054373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:07:34.377694 containerd[1721]: time="2025-01-30T13:07:34.377664785Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:07:35.877098 containerd[1721]: time="2025-01-30T13:07:35.877041543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:35.880132 containerd[1721]: time="2025-01-30T13:07:35.879944465Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151" Jan 30 13:07:35.884295 containerd[1721]: time="2025-01-30T13:07:35.883123089Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:35.889078 containerd[1721]: time="2025-01-30T13:07:35.889034035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:35.890060 containerd[1721]: time="2025-01-30T13:07:35.890023542Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.512323357s" Jan 30 13:07:35.890218 containerd[1721]: time="2025-01-30T13:07:35.890194943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:07:35.890940 
containerd[1721]: time="2025-01-30T13:07:35.890896149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:07:37.216677 containerd[1721]: time="2025-01-30T13:07:37.216610480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:37.218639 containerd[1721]: time="2025-01-30T13:07:37.218568894Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061" Jan 30 13:07:37.222035 containerd[1721]: time="2025-01-30T13:07:37.221971420Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:37.227755 containerd[1721]: time="2025-01-30T13:07:37.227701464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:37.228689 containerd[1721]: time="2025-01-30T13:07:37.228652672Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.337719622s" Jan 30 13:07:37.228754 containerd[1721]: time="2025-01-30T13:07:37.228693772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:07:37.229419 containerd[1721]: time="2025-01-30T13:07:37.229220576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:07:38.498008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471859210.mount: Deactivated successfully. Jan 30 13:07:38.939402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 13:07:38.948951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:39.071907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:39.076885 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:07:39.656464 kubelet[2683]: E0130 13:07:39.656408 2683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:07:39.658681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:07:39.658905 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
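The kubelet restart loop keeps failing on the same condition: the unit passes --config /var/lib/kubelet/config.yaml, and that file does not exist yet. On a kubeadm-style bootstrap it is normally written by "kubeadm init"/"kubeadm join" rather than by hand. Purely for illustration, a sketch that drops in a minimal KubeletConfiguration so the unit can at least parse its config; the field names are from the kubelet.config.k8s.io/v1beta1 API, and the values echo settings that appear later in this log (cgroupDriver systemd, staticPodPath /etc/kubernetes/manifests, client CA at /etc/kubernetes/pki/ca.crt) rather than anything node-specific:

    from pathlib import Path

    # Minimal kubelet config sketch; kubeadm normally generates this file itself,
    # so writing it manually like this is for illustration only.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # matches the CgroupDriver reported later
    staticPodPath: /etc/kubernetes/manifests  # where control-plane pod manifests land
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
    """

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)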
Jan 30 13:07:39.677190 containerd[1721]: time="2025-01-30T13:07:39.677136637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:39.679365 containerd[1721]: time="2025-01-30T13:07:39.679293358Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 30 13:07:39.683292 containerd[1721]: time="2025-01-30T13:07:39.683225696Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:39.687808 containerd[1721]: time="2025-01-30T13:07:39.687755139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:39.688915 containerd[1721]: time="2025-01-30T13:07:39.688341545Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.459086069s" Jan 30 13:07:39.688915 containerd[1721]: time="2025-01-30T13:07:39.688383045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:07:39.689038 containerd[1721]: time="2025-01-30T13:07:39.688916650Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:07:40.347556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12777535.mount: Deactivated successfully. 
Jan 30 13:07:41.487696 containerd[1721]: time="2025-01-30T13:07:41.487639887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:41.490550 containerd[1721]: time="2025-01-30T13:07:41.490492014Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 13:07:41.494776 containerd[1721]: time="2025-01-30T13:07:41.494721254Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:41.499956 containerd[1721]: time="2025-01-30T13:07:41.499918304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:41.500799 containerd[1721]: time="2025-01-30T13:07:41.500640111Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.81169346s" Jan 30 13:07:41.500799 containerd[1721]: time="2025-01-30T13:07:41.500675611Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:07:41.501545 containerd[1721]: time="2025-01-30T13:07:41.501331618Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:07:42.105174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356931697.mount: Deactivated successfully. 
Jan 30 13:07:42.128580 containerd[1721]: time="2025-01-30T13:07:42.128526828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:42.130907 containerd[1721]: time="2025-01-30T13:07:42.130844550Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 30 13:07:42.134664 containerd[1721]: time="2025-01-30T13:07:42.134612486Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:42.138354 containerd[1721]: time="2025-01-30T13:07:42.138305622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:42.139205 containerd[1721]: time="2025-01-30T13:07:42.139004828Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 637.63771ms" Jan 30 13:07:42.139205 containerd[1721]: time="2025-01-30T13:07:42.139044629Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:07:42.139882 containerd[1721]: time="2025-01-30T13:07:42.139688035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:07:43.340929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697202027.mount: Deactivated successfully. Jan 30 13:07:45.604056 containerd[1721]: time="2025-01-30T13:07:45.603985131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:45.609915 containerd[1721]: time="2025-01-30T13:07:45.609835387Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 30 13:07:45.614743 containerd[1721]: time="2025-01-30T13:07:45.614672434Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:45.619429 containerd[1721]: time="2025-01-30T13:07:45.619363079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:07:45.621530 containerd[1721]: time="2025-01-30T13:07:45.620440589Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.480719054s" Jan 30 13:07:45.621530 containerd[1721]: time="2025-01-30T13:07:45.620501390Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 13:07:48.112130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
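Each "Pulled image ... in ..." message above pairs a byte count with a wall-clock duration, which makes a quick sanity check of registry throughput for this boot straightforward. A back-of-the-envelope calculation over the figures containerd logged (the sizes are the sizes containerd reports for the pulled content, so the rates are only approximate):

    # (image, bytes reported by containerd, seconds taken), copied from the log above.
    pulls = [
        ("kube-apiserver:v1.31.5",          27_973_521, 2.318584018),
        ("kube-controller-manager:v1.31.5", 26_147_725, 1.512323357),
        ("kube-scheduler:v1.31.5",          20_098_653, 1.337719622),
        ("kube-proxy:v1.31.5",              30_230_147, 2.459086069),
        ("coredns:v1.11.1",                 18_182_961, 1.811693460),
        ("pause:3.10",                          320_368, 0.637637710),
        ("etcd:3.5.15-0",                   56_909_194, 3.480719054),
    ]

    for name, size, secs in pulls:
        print(f"{name:35s} {size / secs / 1e6:6.1f} MB/s")

The larger control-plane images all land in roughly the 10 to 17 MB/s range, while the tiny pause image is dominated by per-request overhead rather than transfer time.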
Jan 30 13:07:48.125062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:48.149596 systemd[1]: Reloading requested from client PID 2819 ('systemctl') (unit session-9.scope)... Jan 30 13:07:48.149612 systemd[1]: Reloading... Jan 30 13:07:48.264531 zram_generator::config[2862]: No configuration found. Jan 30 13:07:48.379618 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:07:48.464061 systemd[1]: Reloading finished in 314 ms. Jan 30 13:07:48.751653 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:07:48.751785 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:07:48.752139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:48.761992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:50.482931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:50.489618 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:07:50.527054 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:07:50.527536 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:07:50.527536 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
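The docker.socket warning that reappears on every daemon-reload above is harmless (systemd silently rewrites /var/run/docker.sock to /run/docker.sock), but it can be silenced with a drop-in that overrides ListenStream=. A sketch of doing so from Python; ListenStream is a list option, so it must first be cleared with an empty assignment before the new value is set, and the drop-in path follows the usual /etc/systemd/system/<unit>.d/override.conf convention:

    from pathlib import Path
    import subprocess

    # Drop-in that re-points the socket at /run/docker.sock, as the warning suggests.
    OVERRIDE = """\
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    """

    dropin = Path("/etc/systemd/system/docker.socket.d/override.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(OVERRIDE)
    subprocess.run(["systemctl", "daemon-reload"], check=True)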
Jan 30 13:07:51.403889 kubelet[2926]: I0130 13:07:51.400307 2926 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:07:51.964703 kubelet[2926]: I0130 13:07:51.964657 2926 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:07:51.964703 kubelet[2926]: I0130 13:07:51.964688 2926 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:07:51.965202 kubelet[2926]: I0130 13:07:51.965024 2926 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:07:51.987543 kubelet[2926]: E0130 13:07:51.987499 2926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:51.989871 kubelet[2926]: I0130 13:07:51.989707 2926 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:07:51.999709 kubelet[2926]: E0130 13:07:51.999665 2926 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:07:51.999709 kubelet[2926]: I0130 13:07:51.999707 2926 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:07:52.004918 kubelet[2926]: I0130 13:07:52.004883 2926 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:07:52.006192 kubelet[2926]: I0130 13:07:52.006156 2926 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:07:52.006482 kubelet[2926]: I0130 13:07:52.006426 2926 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:07:52.006771 kubelet[2926]: I0130 13:07:52.006490 2926 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-065ab1add7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:07:52.006952 kubelet[2926]: I0130 13:07:52.006789 2926 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:07:52.006952 kubelet[2926]: I0130 13:07:52.006806 2926 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:07:52.007056 kubelet[2926]: I0130 13:07:52.006967 2926 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:07:52.009234 kubelet[2926]: I0130 13:07:52.008936 2926 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:07:52.009234 kubelet[2926]: I0130 13:07:52.008969 2926 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:07:52.009234 kubelet[2926]: I0130 13:07:52.009016 2926 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:07:52.009234 kubelet[2926]: I0130 13:07:52.009035 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:07:52.014211 kubelet[2926]: W0130 13:07:52.014151 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-065ab1add7&limit=500&resourceVersion=0": dial tcp 10.200.4.23:6443: connect: connection refused Jan 30 13:07:52.014322 kubelet[2926]: E0130 13:07:52.014228 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.4.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-065ab1add7&limit=500&resourceVersion=0\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:52.015209 kubelet[2926]: W0130 13:07:52.014683 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.23:6443: connect: connection refused Jan 30 13:07:52.015209 kubelet[2926]: E0130 13:07:52.014735 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:52.015209 kubelet[2926]: I0130 13:07:52.015036 2926 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:07:52.017085 kubelet[2926]: I0130 13:07:52.016934 2926 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:07:52.017593 kubelet[2926]: W0130 13:07:52.017570 2926 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:07:52.019045 kubelet[2926]: I0130 13:07:52.019017 2926 server.go:1269] "Started kubelet" Jan 30 13:07:52.021850 kubelet[2926]: I0130 13:07:52.021812 2926 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:07:52.022968 kubelet[2926]: I0130 13:07:52.022941 2926 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:07:52.025850 kubelet[2926]: I0130 13:07:52.025308 2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:07:52.025850 kubelet[2926]: I0130 13:07:52.025491 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:07:52.025850 kubelet[2926]: I0130 13:07:52.025556 2926 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:07:52.028672 kubelet[2926]: E0130 13:07:52.025763 2926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-065ab1add7.181f7a51963072fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-065ab1add7,UID:ci-4186.1.0-a-065ab1add7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-065ab1add7,},FirstTimestamp:2025-01-30 13:07:52.018998013 +0000 UTC m=+1.525645210,LastTimestamp:2025-01-30 13:07:52.018998013 +0000 UTC m=+1.525645210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-065ab1add7,}" Jan 30 13:07:52.031712 kubelet[2926]: I0130 13:07:52.030719 2926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:07:52.033499 kubelet[2926]: I0130 13:07:52.033237 2926 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:07:52.033499 kubelet[2926]: E0130 13:07:52.033309 2926 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:07:52.034560 kubelet[2926]: I0130 13:07:52.034243 2926 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:07:52.034560 kubelet[2926]: I0130 13:07:52.034315 2926 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:07:52.035437 kubelet[2926]: I0130 13:07:52.035417 2926 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:07:52.035649 kubelet[2926]: I0130 13:07:52.035628 2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:07:52.036845 kubelet[2926]: E0130 13:07:52.036022 2926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-065ab1add7\" not found" Jan 30 13:07:52.036845 kubelet[2926]: W0130 13:07:52.036168 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.23:6443: connect: connection refused Jan 30 13:07:52.036845 kubelet[2926]: E0130 13:07:52.036217 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:52.036845 kubelet[2926]: E0130 13:07:52.036805 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-065ab1add7?timeout=10s\": dial tcp 10.200.4.23:6443: connect: connection refused" interval="200ms" Jan 30 13:07:52.038496 kubelet[2926]: I0130 13:07:52.038284 2926 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:07:52.077386 kubelet[2926]: I0130 13:07:52.077353 2926 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:07:52.077386 kubelet[2926]: I0130 13:07:52.077377 2926 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:07:52.077616 kubelet[2926]: I0130 13:07:52.077401 2926 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:07:52.083709 kubelet[2926]: I0130 13:07:52.083672 2926 policy_none.go:49] "None policy: Start" Jan 30 13:07:52.084438 kubelet[2926]: I0130 13:07:52.084389 2926 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:07:52.084438 kubelet[2926]: I0130 13:07:52.084419 2926 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:07:52.096835 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:07:52.107893 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:07:52.111142 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:07:52.118700 kubelet[2926]: I0130 13:07:52.118310 2926 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 30 13:07:52.120658 kubelet[2926]: I0130 13:07:52.119790 2926 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:07:52.120658 kubelet[2926]: I0130 13:07:52.120019 2926 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:07:52.120658 kubelet[2926]: I0130 13:07:52.120040 2926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:07:52.125422 kubelet[2926]: I0130 13:07:52.124355 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:07:52.125422 kubelet[2926]: I0130 13:07:52.124535 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:07:52.125422 kubelet[2926]: I0130 13:07:52.124563 2926 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:07:52.125422 kubelet[2926]: I0130 13:07:52.124601 2926 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:07:52.125422 kubelet[2926]: E0130 13:07:52.124644 2926 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 13:07:52.127974 kubelet[2926]: E0130 13:07:52.127947 2926 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-065ab1add7\" not found" Jan 30 13:07:52.129329 kubelet[2926]: W0130 13:07:52.129274 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.23:6443: connect: connection refused Jan 30 13:07:52.129421 kubelet[2926]: E0130 13:07:52.129380 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:52.223229 kubelet[2926]: I0130 13:07:52.223072 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.223881 kubelet[2926]: E0130 13:07:52.223843 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.23:6443/api/v1/nodes\": dial tcp 10.200.4.23:6443: connect: connection refused" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.234948 kubelet[2926]: I0130 13:07:52.234848 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.234948 kubelet[2926]: I0130 13:07:52.234894 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91d7e7952925e281afecd4065cfbf49f-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-065ab1add7\" (UID: \"91d7e7952925e281afecd4065cfbf49f\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.234948 kubelet[2926]: I0130 13:07:52.234920 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.234948 kubelet[2926]: I0130 13:07:52.234944 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.235202 kubelet[2926]: I0130 13:07:52.234963 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.235202 kubelet[2926]: I0130 13:07:52.234986 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec5a3e5ab7bb17b8137209ce4bae8710-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-065ab1add7\" (UID: \"ec5a3e5ab7bb17b8137209ce4bae8710\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.235202 kubelet[2926]: I0130 13:07:52.235004 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91d7e7952925e281afecd4065cfbf49f-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-065ab1add7\" (UID: \"91d7e7952925e281afecd4065cfbf49f\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.235202 kubelet[2926]: I0130 13:07:52.235024 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91d7e7952925e281afecd4065cfbf49f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-065ab1add7\" (UID: \"91d7e7952925e281afecd4065cfbf49f\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.235202 kubelet[2926]: I0130 13:07:52.235046 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.235431 systemd[1]: Created slice kubepods-burstable-pod91d7e7952925e281afecd4065cfbf49f.slice - libcontainer container kubepods-burstable-pod91d7e7952925e281afecd4065cfbf49f.slice. 
Jan 30 13:07:52.238225 kubelet[2926]: E0130 13:07:52.238189 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-065ab1add7?timeout=10s\": dial tcp 10.200.4.23:6443: connect: connection refused" interval="400ms" Jan 30 13:07:52.254889 systemd[1]: Created slice kubepods-burstable-pod724d854281b76a70500e2d0d34525121.slice - libcontainer container kubepods-burstable-pod724d854281b76a70500e2d0d34525121.slice. Jan 30 13:07:52.268697 systemd[1]: Created slice kubepods-burstable-podec5a3e5ab7bb17b8137209ce4bae8710.slice - libcontainer container kubepods-burstable-podec5a3e5ab7bb17b8137209ce4bae8710.slice. Jan 30 13:07:52.426155 kubelet[2926]: I0130 13:07:52.426119 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.426652 kubelet[2926]: E0130 13:07:52.426605 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.23:6443/api/v1/nodes\": dial tcp 10.200.4.23:6443: connect: connection refused" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.554381 containerd[1721]: time="2025-01-30T13:07:52.554243557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-065ab1add7,Uid:91d7e7952925e281afecd4065cfbf49f,Namespace:kube-system,Attempt:0,}" Jan 30 13:07:52.567150 containerd[1721]: time="2025-01-30T13:07:52.567080626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-065ab1add7,Uid:724d854281b76a70500e2d0d34525121,Namespace:kube-system,Attempt:0,}" Jan 30 13:07:52.572128 containerd[1721]: time="2025-01-30T13:07:52.571838351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-065ab1add7,Uid:ec5a3e5ab7bb17b8137209ce4bae8710,Namespace:kube-system,Attempt:0,}" Jan 30 13:07:52.639013 kubelet[2926]: E0130 13:07:52.638960 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-065ab1add7?timeout=10s\": dial tcp 10.200.4.23:6443: connect: connection refused" interval="800ms" Jan 30 13:07:52.830205 kubelet[2926]: I0130 13:07:52.828773 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.830205 kubelet[2926]: E0130 13:07:52.829125 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.23:6443/api/v1/nodes\": dial tcp 10.200.4.23:6443: connect: connection refused" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:52.942302 kubelet[2926]: W0130 13:07:52.942261 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.23:6443: connect: connection refused Jan 30 13:07:52.942459 kubelet[2926]: E0130 13:07:52.942311 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:53.151691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811601209.mount: Deactivated successfully. 
Jan 30 13:07:53.178933 containerd[1721]: time="2025-01-30T13:07:53.178875577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:53.193731 containerd[1721]: time="2025-01-30T13:07:53.193550255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 13:07:53.197477 containerd[1721]: time="2025-01-30T13:07:53.197417876Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:53.202875 containerd[1721]: time="2025-01-30T13:07:53.202818804Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:53.208679 containerd[1721]: time="2025-01-30T13:07:53.208350234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:07:53.211959 containerd[1721]: time="2025-01-30T13:07:53.211920253Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:53.215189 containerd[1721]: time="2025-01-30T13:07:53.215146070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:07:53.215998 containerd[1721]: time="2025-01-30T13:07:53.215947774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 661.576416ms" Jan 30 13:07:53.217777 containerd[1721]: time="2025-01-30T13:07:53.217355582Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:07:53.217863 kubelet[2926]: W0130 13:07:53.217667 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.23:6443: connect: connection refused Jan 30 13:07:53.217863 kubelet[2926]: E0130 13:07:53.217745 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:53.221246 containerd[1721]: time="2025-01-30T13:07:53.221208902Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 649.272951ms" Jan 30 13:07:53.234113 containerd[1721]: 
time="2025-01-30T13:07:53.234067070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 666.871444ms" Jan 30 13:07:53.262544 kubelet[2926]: W0130 13:07:53.262447 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-065ab1add7&limit=500&resourceVersion=0": dial tcp 10.200.4.23:6443: connect: connection refused Jan 30 13:07:53.262691 kubelet[2926]: E0130 13:07:53.262549 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-065ab1add7&limit=500&resourceVersion=0\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:53.439403 kubelet[2926]: E0130 13:07:53.439359 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-065ab1add7?timeout=10s\": dial tcp 10.200.4.23:6443: connect: connection refused" interval="1.6s" Jan 30 13:07:53.518997 kubelet[2926]: W0130 13:07:53.518928 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.23:6443: connect: connection refused Jan 30 13:07:53.519161 kubelet[2926]: E0130 13:07:53.519007 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:53.630998 kubelet[2926]: I0130 13:07:53.630961 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:53.631362 kubelet[2926]: E0130 13:07:53.631329 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.23:6443/api/v1/nodes\": dial tcp 10.200.4.23:6443: connect: connection refused" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:54.058295 containerd[1721]: time="2025-01-30T13:07:54.058173250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:07:54.059492 containerd[1721]: time="2025-01-30T13:07:54.058805654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:07:54.059492 containerd[1721]: time="2025-01-30T13:07:54.058836654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:54.059492 containerd[1721]: time="2025-01-30T13:07:54.058934354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:54.062506 containerd[1721]: time="2025-01-30T13:07:54.056417641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:07:54.062785 containerd[1721]: time="2025-01-30T13:07:54.062725075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:07:54.063039 containerd[1721]: time="2025-01-30T13:07:54.062998376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:54.064617 containerd[1721]: time="2025-01-30T13:07:54.064448584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:54.065120 containerd[1721]: time="2025-01-30T13:07:54.064855986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:07:54.065120 containerd[1721]: time="2025-01-30T13:07:54.064913186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:07:54.065120 containerd[1721]: time="2025-01-30T13:07:54.064930286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:54.065120 containerd[1721]: time="2025-01-30T13:07:54.065026187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:07:54.080082 kubelet[2926]: E0130 13:07:54.080042 2926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.23:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:07:54.095903 systemd[1]: Started cri-containerd-4d2a7a8e5020d125f0cf0bcada61d1a8ea658ac9c1fbfbeec214bd0ef004ecbd.scope - libcontainer container 4d2a7a8e5020d125f0cf0bcada61d1a8ea658ac9c1fbfbeec214bd0ef004ecbd. Jan 30 13:07:54.102794 systemd[1]: Started cri-containerd-286bd1ea92dc9f82acfbee660d5b8288998df629e780757a8385ad556f6ba2b4.scope - libcontainer container 286bd1ea92dc9f82acfbee660d5b8288998df629e780757a8385ad556f6ba2b4. Jan 30 13:07:54.105416 systemd[1]: Started cri-containerd-7aba9656fbda149810461c83c2dbb8b50f0f282a4fbddd493f4fdd679f0e026d.scope - libcontainer container 7aba9656fbda149810461c83c2dbb8b50f0f282a4fbddd493f4fdd679f0e026d. 
Jan 30 13:07:54.185144 containerd[1721]: time="2025-01-30T13:07:54.184947924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-065ab1add7,Uid:ec5a3e5ab7bb17b8137209ce4bae8710,Namespace:kube-system,Attempt:0,} returns sandbox id \"286bd1ea92dc9f82acfbee660d5b8288998df629e780757a8385ad556f6ba2b4\"" Jan 30 13:07:54.191945 containerd[1721]: time="2025-01-30T13:07:54.191189357Z" level=info msg="CreateContainer within sandbox \"286bd1ea92dc9f82acfbee660d5b8288998df629e780757a8385ad556f6ba2b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:07:54.192637 containerd[1721]: time="2025-01-30T13:07:54.192608665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-065ab1add7,Uid:91d7e7952925e281afecd4065cfbf49f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d2a7a8e5020d125f0cf0bcada61d1a8ea658ac9c1fbfbeec214bd0ef004ecbd\"" Jan 30 13:07:54.196426 containerd[1721]: time="2025-01-30T13:07:54.196361085Z" level=info msg="CreateContainer within sandbox \"4d2a7a8e5020d125f0cf0bcada61d1a8ea658ac9c1fbfbeec214bd0ef004ecbd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:07:54.207740 containerd[1721]: time="2025-01-30T13:07:54.207696145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-065ab1add7,Uid:724d854281b76a70500e2d0d34525121,Namespace:kube-system,Attempt:0,} returns sandbox id \"7aba9656fbda149810461c83c2dbb8b50f0f282a4fbddd493f4fdd679f0e026d\"" Jan 30 13:07:54.211416 containerd[1721]: time="2025-01-30T13:07:54.211380965Z" level=info msg="CreateContainer within sandbox \"7aba9656fbda149810461c83c2dbb8b50f0f282a4fbddd493f4fdd679f0e026d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:07:54.233969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount884478778.mount: Deactivated successfully. Jan 30 13:07:54.239556 containerd[1721]: time="2025-01-30T13:07:54.239509914Z" level=info msg="CreateContainer within sandbox \"286bd1ea92dc9f82acfbee660d5b8288998df629e780757a8385ad556f6ba2b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f15ea8527c9f2592c0d7dc2bb417dd398e126d00c9ca348f2df0eebd4ed97ba8\"" Jan 30 13:07:54.240305 containerd[1721]: time="2025-01-30T13:07:54.240273718Z" level=info msg="StartContainer for \"f15ea8527c9f2592c0d7dc2bb417dd398e126d00c9ca348f2df0eebd4ed97ba8\"" Jan 30 13:07:54.258091 containerd[1721]: time="2025-01-30T13:07:54.257928912Z" level=info msg="CreateContainer within sandbox \"4d2a7a8e5020d125f0cf0bcada61d1a8ea658ac9c1fbfbeec214bd0ef004ecbd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7ca369762c34904ede721436224a01334d90ebf78210bf67a89d7f3939a9bd7\"" Jan 30 13:07:54.258643 containerd[1721]: time="2025-01-30T13:07:54.258609116Z" level=info msg="StartContainer for \"a7ca369762c34904ede721436224a01334d90ebf78210bf67a89d7f3939a9bd7\"" Jan 30 13:07:54.276541 systemd[1]: Started cri-containerd-f15ea8527c9f2592c0d7dc2bb417dd398e126d00c9ca348f2df0eebd4ed97ba8.scope - libcontainer container f15ea8527c9f2592c0d7dc2bb417dd398e126d00c9ca348f2df0eebd4ed97ba8. 
Jan 30 13:07:54.302813 containerd[1721]: time="2025-01-30T13:07:54.302671650Z" level=info msg="CreateContainer within sandbox \"7aba9656fbda149810461c83c2dbb8b50f0f282a4fbddd493f4fdd679f0e026d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f210e27ce409e8b03715e20849c0d3b9984f54553ece171c8ff53f89e7592793\"" Jan 30 13:07:54.305300 containerd[1721]: time="2025-01-30T13:07:54.305000162Z" level=info msg="StartContainer for \"f210e27ce409e8b03715e20849c0d3b9984f54553ece171c8ff53f89e7592793\"" Jan 30 13:07:54.314736 systemd[1]: Started cri-containerd-a7ca369762c34904ede721436224a01334d90ebf78210bf67a89d7f3939a9bd7.scope - libcontainer container a7ca369762c34904ede721436224a01334d90ebf78210bf67a89d7f3939a9bd7. Jan 30 13:07:54.368626 containerd[1721]: time="2025-01-30T13:07:54.368577000Z" level=info msg="StartContainer for \"f15ea8527c9f2592c0d7dc2bb417dd398e126d00c9ca348f2df0eebd4ed97ba8\" returns successfully" Jan 30 13:07:54.376266 systemd[1]: Started cri-containerd-f210e27ce409e8b03715e20849c0d3b9984f54553ece171c8ff53f89e7592793.scope - libcontainer container f210e27ce409e8b03715e20849c0d3b9984f54553ece171c8ff53f89e7592793. Jan 30 13:07:54.410221 containerd[1721]: time="2025-01-30T13:07:54.410169221Z" level=info msg="StartContainer for \"a7ca369762c34904ede721436224a01334d90ebf78210bf67a89d7f3939a9bd7\" returns successfully" Jan 30 13:07:54.472627 containerd[1721]: time="2025-01-30T13:07:54.472576953Z" level=info msg="StartContainer for \"f210e27ce409e8b03715e20849c0d3b9984f54553ece171c8ff53f89e7592793\" returns successfully" Jan 30 13:07:55.237193 kubelet[2926]: I0130 13:07:55.236647 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:56.444711 kubelet[2926]: E0130 13:07:56.444656 2926 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-065ab1add7\" not found" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:56.447011 kubelet[2926]: I0130 13:07:56.446243 2926 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:07:57.017702 kubelet[2926]: I0130 13:07:57.017660 2926 apiserver.go:52] "Watching apiserver" Jan 30 13:07:57.034620 kubelet[2926]: I0130 13:07:57.034563 2926 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:07:57.176787 kubelet[2926]: E0130 13:07:57.176737 2926 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-065ab1add7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" Jan 30 13:07:58.231397 kubelet[2926]: W0130 13:07:58.231349 2926 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:07:58.925636 systemd[1]: Reloading requested from client PID 3196 ('systemctl') (unit session-9.scope)... Jan 30 13:07:58.925655 systemd[1]: Reloading... Jan 30 13:07:59.024511 zram_generator::config[3232]: No configuration found. Jan 30 13:07:59.157533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:07:59.255976 systemd[1]: Reloading finished in 329 ms. 
Jan 30 13:07:59.299036 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:59.325105 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:07:59.325381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:59.325458 systemd[1]: kubelet.service: Consumed 1.073s CPU time, 119.0M memory peak, 0B memory swap peak. Jan 30 13:07:59.334876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:07:59.438373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:07:59.449857 (kubelet)[3303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:07:59.486884 kubelet[3303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:07:59.486884 kubelet[3303]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:07:59.486884 kubelet[3303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:07:59.487364 kubelet[3303]: I0130 13:07:59.486985 3303 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:07:59.493921 kubelet[3303]: I0130 13:07:59.493884 3303 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:07:59.493921 kubelet[3303]: I0130 13:07:59.493909 3303 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:08:00.036371 kubelet[3303]: I0130 13:07:59.494124 3303 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:08:00.038676 kubelet[3303]: I0130 13:08:00.038644 3303 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:08:00.042218 kubelet[3303]: I0130 13:08:00.041862 3303 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:08:00.045972 kubelet[3303]: E0130 13:08:00.045923 3303 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:08:00.045972 kubelet[3303]: I0130 13:08:00.045970 3303 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:08:00.050722 kubelet[3303]: I0130 13:08:00.050698 3303 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:08:00.050909 kubelet[3303]: I0130 13:08:00.050827 3303 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:08:00.050998 kubelet[3303]: I0130 13:08:00.050971 3303 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:08:00.051405 kubelet[3303]: I0130 13:08:00.051004 3303 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-065ab1add7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:08:00.051405 kubelet[3303]: I0130 13:08:00.051260 3303 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:08:00.051405 kubelet[3303]: I0130 13:08:00.051275 3303 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:08:00.051405 kubelet[3303]: I0130 13:08:00.051350 3303 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:08:00.051998 kubelet[3303]: I0130 13:08:00.051510 3303 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:08:00.051998 kubelet[3303]: I0130 13:08:00.051527 3303 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:08:00.051998 kubelet[3303]: I0130 13:08:00.051561 3303 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:08:00.051998 kubelet[3303]: I0130 13:08:00.051579 3303 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:08:00.054200 kubelet[3303]: I0130 13:08:00.054178 3303 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:08:00.056361 kubelet[3303]: I0130 13:08:00.056333 3303 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:08:00.063003 kubelet[3303]: I0130 13:08:00.062976 3303 server.go:1269] "Started kubelet" Jan 30 13:08:00.071342 kubelet[3303]: I0130 13:08:00.070884 3303 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:08:00.075495 
kubelet[3303]: I0130 13:08:00.072002 3303 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:08:00.075495 kubelet[3303]: I0130 13:08:00.072606 3303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:08:00.075495 kubelet[3303]: I0130 13:08:00.072899 3303 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:08:00.075817 kubelet[3303]: I0130 13:08:00.075804 3303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:08:00.086498 kubelet[3303]: I0130 13:08:00.085968 3303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:08:00.092851 kubelet[3303]: I0130 13:08:00.092824 3303 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:08:00.094276 kubelet[3303]: I0130 13:08:00.093433 3303 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:08:00.094736 kubelet[3303]: I0130 13:08:00.094709 3303 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:08:00.099230 kubelet[3303]: I0130 13:08:00.099197 3303 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:08:00.099230 kubelet[3303]: I0130 13:08:00.099223 3303 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:08:00.099374 kubelet[3303]: I0130 13:08:00.099314 3303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:08:00.100231 kubelet[3303]: E0130 13:08:00.099560 3303 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:08:00.116551 kubelet[3303]: I0130 13:08:00.116449 3303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:08:00.121923 kubelet[3303]: I0130 13:08:00.121888 3303 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:08:00.121923 kubelet[3303]: I0130 13:08:00.121929 3303 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:08:00.122122 kubelet[3303]: I0130 13:08:00.121960 3303 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:08:00.122122 kubelet[3303]: E0130 13:08:00.122010 3303 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:08:00.177260 kubelet[3303]: I0130 13:08:00.176906 3303 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:08:00.177260 kubelet[3303]: I0130 13:08:00.176930 3303 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:08:00.177260 kubelet[3303]: I0130 13:08:00.176951 3303 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:08:00.177260 kubelet[3303]: I0130 13:08:00.177134 3303 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:08:00.177260 kubelet[3303]: I0130 13:08:00.177150 3303 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:08:00.177260 kubelet[3303]: I0130 13:08:00.177176 3303 policy_none.go:49] "None policy: Start" Jan 30 13:08:00.178861 kubelet[3303]: I0130 13:08:00.178635 3303 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:08:00.178861 kubelet[3303]: I0130 13:08:00.178660 3303 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:08:00.178992 kubelet[3303]: I0130 13:08:00.178933 3303 state_mem.go:75] "Updated machine memory state" Jan 30 13:08:00.183614 kubelet[3303]: I0130 13:08:00.183053 3303 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:08:00.183614 kubelet[3303]: I0130 13:08:00.183237 3303 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:08:00.183614 kubelet[3303]: I0130 13:08:00.183250 3303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:08:00.183614 kubelet[3303]: I0130 13:08:00.183529 3303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:08:00.211102 sudo[3335]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:08:00.211877 sudo[3335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:08:00.230790 kubelet[3303]: W0130 13:08:00.230740 3303 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:08:00.236114 kubelet[3303]: W0130 13:08:00.235716 3303 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:08:00.236114 kubelet[3303]: E0130 13:08:00.235831 3303 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.236614 kubelet[3303]: W0130 13:08:00.236599 3303 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:08:00.292804 kubelet[3303]: I0130 13:08:00.291859 3303 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296108 
kubelet[3303]: I0130 13:08:00.295845 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296108 kubelet[3303]: I0130 13:08:00.295887 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91d7e7952925e281afecd4065cfbf49f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-065ab1add7\" (UID: \"91d7e7952925e281afecd4065cfbf49f\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296108 kubelet[3303]: I0130 13:08:00.295920 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296108 kubelet[3303]: I0130 13:08:00.295943 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296108 kubelet[3303]: I0130 13:08:00.295966 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296446 kubelet[3303]: I0130 13:08:00.295992 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec5a3e5ab7bb17b8137209ce4bae8710-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-065ab1add7\" (UID: \"ec5a3e5ab7bb17b8137209ce4bae8710\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296446 kubelet[3303]: I0130 13:08:00.296015 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91d7e7952925e281afecd4065cfbf49f-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-065ab1add7\" (UID: \"91d7e7952925e281afecd4065cfbf49f\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296446 kubelet[3303]: I0130 13:08:00.296035 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91d7e7952925e281afecd4065cfbf49f-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-065ab1add7\" (UID: \"91d7e7952925e281afecd4065cfbf49f\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.296446 kubelet[3303]: I0130 13:08:00.296056 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/724d854281b76a70500e2d0d34525121-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-065ab1add7\" (UID: \"724d854281b76a70500e2d0d34525121\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.310524 kubelet[3303]: I0130 13:08:00.310055 3303 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.310524 kubelet[3303]: I0130 13:08:00.310158 3303 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-a-065ab1add7" Jan 30 13:08:00.745435 sudo[3335]: pam_unix(sudo:session): session closed for user root Jan 30 13:08:01.053586 kubelet[3303]: I0130 13:08:01.052622 3303 apiserver.go:52] "Watching apiserver" Jan 30 13:08:01.095290 kubelet[3303]: I0130 13:08:01.095243 3303 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:08:01.163661 kubelet[3303]: W0130 13:08:01.163036 3303 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:08:01.163661 kubelet[3303]: E0130 13:08:01.163115 3303 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-065ab1add7\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" Jan 30 13:08:01.193291 kubelet[3303]: I0130 13:08:01.193215 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-065ab1add7" podStartSLOduration=1.193095547 podStartE2EDuration="1.193095547s" podCreationTimestamp="2025-01-30 13:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:08:01.183426765 +0000 UTC m=+1.729737386" watchObservedRunningTime="2025-01-30 13:08:01.193095547 +0000 UTC m=+1.739406068" Jan 30 13:08:01.206555 kubelet[3303]: I0130 13:08:01.206318 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-065ab1add7" podStartSLOduration=1.206295659 podStartE2EDuration="1.206295659s" podCreationTimestamp="2025-01-30 13:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:08:01.193793253 +0000 UTC m=+1.740103774" watchObservedRunningTime="2025-01-30 13:08:01.206295659 +0000 UTC m=+1.752606280" Jan 30 13:08:01.217817 kubelet[3303]: I0130 13:08:01.217173 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-065ab1add7" podStartSLOduration=3.217151751 podStartE2EDuration="3.217151751s" podCreationTimestamp="2025-01-30 13:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:08:01.20640276 +0000 UTC m=+1.752713281" watchObservedRunningTime="2025-01-30 13:08:01.217151751 +0000 UTC m=+1.763462272" Jan 30 13:08:02.085009 sudo[2276]: pam_unix(sudo:session): session closed for user root Jan 30 13:08:02.187555 sshd[2275]: Connection closed by 10.200.16.10 port 34794 Jan 30 13:08:02.188353 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Jan 30 13:08:02.192948 systemd[1]: sshd@6-10.200.4.23:22-10.200.16.10:34794.service: Deactivated successfully. 
Jan 30 13:08:02.195000 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:08:02.195208 systemd[1]: session-9.scope: Consumed 4.000s CPU time, 149.9M memory peak, 0B memory swap peak. Jan 30 13:08:02.195858 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:08:02.196944 systemd-logind[1698]: Removed session 9. Jan 30 13:08:04.572461 kubelet[3303]: I0130 13:08:04.572424 3303 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:08:04.573064 containerd[1721]: time="2025-01-30T13:08:04.572920369Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:08:04.574039 kubelet[3303]: I0130 13:08:04.573731 3303 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:08:05.433232 kubelet[3303]: I0130 13:08:05.432625 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb654b77-71f8-469c-9a95-6b428b612136-lib-modules\") pod \"kube-proxy-zfv2d\" (UID: \"cb654b77-71f8-469c-9a95-6b428b612136\") " pod="kube-system/kube-proxy-zfv2d" Jan 30 13:08:05.433232 kubelet[3303]: I0130 13:08:05.432670 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjss4\" (UniqueName: \"kubernetes.io/projected/cb654b77-71f8-469c-9a95-6b428b612136-kube-api-access-sjss4\") pod \"kube-proxy-zfv2d\" (UID: \"cb654b77-71f8-469c-9a95-6b428b612136\") " pod="kube-system/kube-proxy-zfv2d" Jan 30 13:08:05.433232 kubelet[3303]: I0130 13:08:05.432701 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb654b77-71f8-469c-9a95-6b428b612136-xtables-lock\") pod \"kube-proxy-zfv2d\" (UID: \"cb654b77-71f8-469c-9a95-6b428b612136\") " pod="kube-system/kube-proxy-zfv2d" Jan 30 13:08:05.433232 kubelet[3303]: I0130 13:08:05.432725 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb654b77-71f8-469c-9a95-6b428b612136-kube-proxy\") pod \"kube-proxy-zfv2d\" (UID: \"cb654b77-71f8-469c-9a95-6b428b612136\") " pod="kube-system/kube-proxy-zfv2d" Jan 30 13:08:05.441679 systemd[1]: Created slice kubepods-besteffort-podcb654b77_71f8_469c_9a95_6b428b612136.slice - libcontainer container kubepods-besteffort-podcb654b77_71f8_469c_9a95_6b428b612136.slice. Jan 30 13:08:05.454533 systemd[1]: Created slice kubepods-burstable-pod77b9d498_9654_4009_8d83_7ab065d09c75.slice - libcontainer container kubepods-burstable-pod77b9d498_9654_4009_8d83_7ab065d09c75.slice. 
Jan 30 13:08:05.533482 kubelet[3303]: I0130 13:08:05.533409 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-host-proc-sys-net\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534319 kubelet[3303]: I0130 13:08:05.533717 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77b9d498-9654-4009-8d83-7ab065d09c75-clustermesh-secrets\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534319 kubelet[3303]: I0130 13:08:05.533778 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-cgroup\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534319 kubelet[3303]: I0130 13:08:05.533806 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cni-path\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534319 kubelet[3303]: I0130 13:08:05.533839 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-xtables-lock\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534319 kubelet[3303]: I0130 13:08:05.533912 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-run\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534319 kubelet[3303]: I0130 13:08:05.533955 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-config-path\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534712 kubelet[3303]: I0130 13:08:05.533979 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-hostproc\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534712 kubelet[3303]: I0130 13:08:05.534002 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-host-proc-sys-kernel\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534712 kubelet[3303]: I0130 13:08:05.534027 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-bpf-maps\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534712 kubelet[3303]: I0130 13:08:05.534056 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77b9d498-9654-4009-8d83-7ab065d09c75-hubble-tls\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534712 kubelet[3303]: I0130 13:08:05.534141 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-etc-cni-netd\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.534712 kubelet[3303]: I0130 13:08:05.534169 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-lib-modules\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.535015 kubelet[3303]: I0130 13:08:05.534195 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9qx\" (UniqueName: \"kubernetes.io/projected/77b9d498-9654-4009-8d83-7ab065d09c75-kube-api-access-8t9qx\") pod \"cilium-ns2x5\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " pod="kube-system/cilium-ns2x5" Jan 30 13:08:05.694178 systemd[1]: Created slice kubepods-besteffort-podf2489171_fd17_4c85_b161_7d468aaddc51.slice - libcontainer container kubepods-besteffort-podf2489171_fd17_4c85_b161_7d468aaddc51.slice. Jan 30 13:08:05.735603 kubelet[3303]: I0130 13:08:05.735557 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2489171-fd17-4c85-b161-7d468aaddc51-cilium-config-path\") pod \"cilium-operator-5d85765b45-7vhg4\" (UID: \"f2489171-fd17-4c85-b161-7d468aaddc51\") " pod="kube-system/cilium-operator-5d85765b45-7vhg4" Jan 30 13:08:05.735987 kubelet[3303]: I0130 13:08:05.735626 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7vp6\" (UniqueName: \"kubernetes.io/projected/f2489171-fd17-4c85-b161-7d468aaddc51-kube-api-access-t7vp6\") pod \"cilium-operator-5d85765b45-7vhg4\" (UID: \"f2489171-fd17-4c85-b161-7d468aaddc51\") " pod="kube-system/cilium-operator-5d85765b45-7vhg4" Jan 30 13:08:05.749382 containerd[1721]: time="2025-01-30T13:08:05.749334040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfv2d,Uid:cb654b77-71f8-469c-9a95-6b428b612136,Namespace:kube-system,Attempt:0,}" Jan 30 13:08:05.761085 containerd[1721]: time="2025-01-30T13:08:05.761046329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ns2x5,Uid:77b9d498-9654-4009-8d83-7ab065d09c75,Namespace:kube-system,Attempt:0,}" Jan 30 13:08:05.805977 containerd[1721]: time="2025-01-30T13:08:05.805724470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:08:05.805977 containerd[1721]: time="2025-01-30T13:08:05.805801571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:08:05.805977 containerd[1721]: time="2025-01-30T13:08:05.805824871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:05.805977 containerd[1721]: time="2025-01-30T13:08:05.805922171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:05.833840 systemd[1]: Started cri-containerd-473cb37cace566457b13a206457560a4c4cf5c83f5b666e8e1ec0dcd41f9d664.scope - libcontainer container 473cb37cace566457b13a206457560a4c4cf5c83f5b666e8e1ec0dcd41f9d664. Jan 30 13:08:05.875688 containerd[1721]: time="2025-01-30T13:08:05.875368701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:08:05.875688 containerd[1721]: time="2025-01-30T13:08:05.875549102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:08:05.876055 containerd[1721]: time="2025-01-30T13:08:05.875629203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:05.876267 containerd[1721]: time="2025-01-30T13:08:05.876013206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:05.882853 containerd[1721]: time="2025-01-30T13:08:05.882797158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfv2d,Uid:cb654b77-71f8-469c-9a95-6b428b612136,Namespace:kube-system,Attempt:0,} returns sandbox id \"473cb37cace566457b13a206457560a4c4cf5c83f5b666e8e1ec0dcd41f9d664\"" Jan 30 13:08:05.890492 containerd[1721]: time="2025-01-30T13:08:05.890171214Z" level=info msg="CreateContainer within sandbox \"473cb37cace566457b13a206457560a4c4cf5c83f5b666e8e1ec0dcd41f9d664\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:08:05.901641 systemd[1]: Started cri-containerd-60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539.scope - libcontainer container 60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539. 
Jan 30 13:08:05.924236 containerd[1721]: time="2025-01-30T13:08:05.924194973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ns2x5,Uid:77b9d498-9654-4009-8d83-7ab065d09c75,Namespace:kube-system,Attempt:0,} returns sandbox id \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\"" Jan 30 13:08:05.926377 containerd[1721]: time="2025-01-30T13:08:05.926251089Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:08:05.999199 containerd[1721]: time="2025-01-30T13:08:05.999048444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7vhg4,Uid:f2489171-fd17-4c85-b161-7d468aaddc51,Namespace:kube-system,Attempt:0,}" Jan 30 13:08:07.060499 containerd[1721]: time="2025-01-30T13:08:07.057765417Z" level=info msg="CreateContainer within sandbox \"473cb37cace566457b13a206457560a4c4cf5c83f5b666e8e1ec0dcd41f9d664\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f6c04aaae3379d9a3c0fbd578a741e2b7993ce60576990663cf51fc8e98aca5\"" Jan 30 13:08:07.063495 containerd[1721]: time="2025-01-30T13:08:07.061503746Z" level=info msg="StartContainer for \"6f6c04aaae3379d9a3c0fbd578a741e2b7993ce60576990663cf51fc8e98aca5\"" Jan 30 13:08:07.090421 containerd[1721]: time="2025-01-30T13:08:07.090068164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:08:07.090421 containerd[1721]: time="2025-01-30T13:08:07.090215965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:08:07.090421 containerd[1721]: time="2025-01-30T13:08:07.090241465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:07.091783 containerd[1721]: time="2025-01-30T13:08:07.091589075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:07.110613 systemd[1]: Started cri-containerd-6f6c04aaae3379d9a3c0fbd578a741e2b7993ce60576990663cf51fc8e98aca5.scope - libcontainer container 6f6c04aaae3379d9a3c0fbd578a741e2b7993ce60576990663cf51fc8e98aca5. Jan 30 13:08:07.116696 systemd[1]: Started cri-containerd-2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f.scope - libcontainer container 2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f. 
Jan 30 13:08:07.156774 containerd[1721]: time="2025-01-30T13:08:07.156725972Z" level=info msg="StartContainer for \"6f6c04aaae3379d9a3c0fbd578a741e2b7993ce60576990663cf51fc8e98aca5\" returns successfully" Jan 30 13:08:07.203743 containerd[1721]: time="2025-01-30T13:08:07.203696530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7vhg4,Uid:f2489171-fd17-4c85-b161-7d468aaddc51,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f\"" Jan 30 13:08:09.098449 kubelet[3303]: I0130 13:08:09.098376 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zfv2d" podStartSLOduration=4.098354277 podStartE2EDuration="4.098354277s" podCreationTimestamp="2025-01-30 13:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:08:07.23773649 +0000 UTC m=+7.784047011" watchObservedRunningTime="2025-01-30 13:08:09.098354277 +0000 UTC m=+9.644664798" Jan 30 13:08:12.022082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457126139.mount: Deactivated successfully. Jan 30 13:08:14.141757 containerd[1721]: time="2025-01-30T13:08:14.141701172Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:14.143558 containerd[1721]: time="2025-01-30T13:08:14.143490885Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:08:14.146397 containerd[1721]: time="2025-01-30T13:08:14.146358706Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:14.148614 containerd[1721]: time="2025-01-30T13:08:14.148581323Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.222260834s" Jan 30 13:08:14.148728 containerd[1721]: time="2025-01-30T13:08:14.148616223Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:08:14.150597 containerd[1721]: time="2025-01-30T13:08:14.150350736Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:08:14.152773 containerd[1721]: time="2025-01-30T13:08:14.152302450Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:08:14.197374 containerd[1721]: time="2025-01-30T13:08:14.197323779Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\"" Jan 30 13:08:14.198065 containerd[1721]: time="2025-01-30T13:08:14.198002884Z" level=info msg="StartContainer for \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\"" Jan 30 13:08:14.231645 systemd[1]: Started cri-containerd-556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc.scope - libcontainer container 556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc. Jan 30 13:08:14.259198 containerd[1721]: time="2025-01-30T13:08:14.259153631Z" level=info msg="StartContainer for \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\" returns successfully" Jan 30 13:08:14.270098 systemd[1]: cri-containerd-556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc.scope: Deactivated successfully. Jan 30 13:08:15.178129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc-rootfs.mount: Deactivated successfully. Jan 30 13:08:18.156870 containerd[1721]: time="2025-01-30T13:08:18.156802139Z" level=info msg="shim disconnected" id=556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc namespace=k8s.io Jan 30 13:08:18.156870 containerd[1721]: time="2025-01-30T13:08:18.156864140Z" level=warning msg="cleaning up after shim disconnected" id=556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc namespace=k8s.io Jan 30 13:08:18.156870 containerd[1721]: time="2025-01-30T13:08:18.156876040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:08:18.200496 containerd[1721]: time="2025-01-30T13:08:18.200227657Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:08:18.241985 containerd[1721]: time="2025-01-30T13:08:18.241932462Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\"" Jan 30 13:08:18.243721 containerd[1721]: time="2025-01-30T13:08:18.243680775Z" level=info msg="StartContainer for \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\"" Jan 30 13:08:18.301031 systemd[1]: run-containerd-runc-k8s.io-5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae-runc.Abm0f0.mount: Deactivated successfully. Jan 30 13:08:18.309630 systemd[1]: Started cri-containerd-5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae.scope - libcontainer container 5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae. Jan 30 13:08:18.342303 containerd[1721]: time="2025-01-30T13:08:18.340127680Z" level=info msg="StartContainer for \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\" returns successfully" Jan 30 13:08:18.352295 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:08:18.352614 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:08:18.352705 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:08:18.360824 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:08:18.361078 systemd[1]: cri-containerd-5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae.scope: Deactivated successfully. 
Jan 30 13:08:18.380699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:08:18.384499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae-rootfs.mount: Deactivated successfully. Jan 30 13:08:18.394955 containerd[1721]: time="2025-01-30T13:08:18.394898581Z" level=info msg="shim disconnected" id=5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae namespace=k8s.io Jan 30 13:08:18.395104 containerd[1721]: time="2025-01-30T13:08:18.394965381Z" level=warning msg="cleaning up after shim disconnected" id=5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae namespace=k8s.io Jan 30 13:08:18.395104 containerd[1721]: time="2025-01-30T13:08:18.394980781Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:08:19.204776 containerd[1721]: time="2025-01-30T13:08:19.204723004Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:08:19.278100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685065855.mount: Deactivated successfully. Jan 30 13:08:19.305233 containerd[1721]: time="2025-01-30T13:08:19.305180338Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\"" Jan 30 13:08:19.306090 containerd[1721]: time="2025-01-30T13:08:19.305812043Z" level=info msg="StartContainer for \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\"" Jan 30 13:08:19.348415 systemd[1]: Started cri-containerd-bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897.scope - libcontainer container bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897. Jan 30 13:08:19.397674 systemd[1]: cri-containerd-bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897.scope: Deactivated successfully. 
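The PullImage completion stamped 13:08:14 above reports both the bytes transferred and the wall-clock time for the cilium image, so the effective pull rate can be read straight off the log. A minimal Python sketch using only the figures quoted there (166730503 bytes read over 8.222260834 s):

    # Effective pull rate for quay.io/cilium/cilium:v1.12.5, from the
    # containerd entries above: 166730503 bytes read in 8.222260834 seconds.
    bytes_read = 166_730_503
    elapsed_s = 8.222260834
    print(f"{bytes_read / elapsed_s / (1024 * 1024):.1f} MiB/s")  # ≈ 19.3 MiB/s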
Jan 30 13:08:19.401849 containerd[1721]: time="2025-01-30T13:08:19.401806345Z" level=info msg="StartContainer for \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\" returns successfully" Jan 30 13:08:19.450430 containerd[1721]: time="2025-01-30T13:08:19.450228199Z" level=info msg="shim disconnected" id=bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897 namespace=k8s.io Jan 30 13:08:19.450430 containerd[1721]: time="2025-01-30T13:08:19.450288800Z" level=warning msg="cleaning up after shim disconnected" id=bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897 namespace=k8s.io Jan 30 13:08:19.450430 containerd[1721]: time="2025-01-30T13:08:19.450304400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:08:19.468772 containerd[1721]: time="2025-01-30T13:08:19.468642234Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:08:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:08:19.938619 containerd[1721]: time="2025-01-30T13:08:19.938575571Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:19.941628 containerd[1721]: time="2025-01-30T13:08:19.941569092Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:08:19.946807 containerd[1721]: time="2025-01-30T13:08:19.946751028Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:08:19.948150 containerd[1721]: time="2025-01-30T13:08:19.947996836Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.7976048s" Jan 30 13:08:19.948150 containerd[1721]: time="2025-01-30T13:08:19.948038437Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:08:19.950731 containerd[1721]: time="2025-01-30T13:08:19.950631254Z" level=info msg="CreateContainer within sandbox \"2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:08:19.987755 containerd[1721]: time="2025-01-30T13:08:19.987704210Z" level=info msg="CreateContainer within sandbox \"2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\"" Jan 30 13:08:19.988406 containerd[1721]: time="2025-01-30T13:08:19.988186113Z" level=info msg="StartContainer for \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\"" Jan 30 13:08:20.018654 systemd[1]: Started 
cri-containerd-e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2.scope - libcontainer container e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2. Jan 30 13:08:20.044087 containerd[1721]: time="2025-01-30T13:08:20.044015598Z" level=info msg="StartContainer for \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\" returns successfully" Jan 30 13:08:20.214692 containerd[1721]: time="2025-01-30T13:08:20.212877362Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:08:20.246880 kubelet[3303]: I0130 13:08:20.244944 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7vhg4" podStartSLOduration=2.502115289 podStartE2EDuration="15.244922483s" podCreationTimestamp="2025-01-30 13:08:05 +0000 UTC" firstStartedPulling="2025-01-30 13:08:07.206194449 +0000 UTC m=+7.752505070" lastFinishedPulling="2025-01-30 13:08:19.949001643 +0000 UTC m=+20.495312264" observedRunningTime="2025-01-30 13:08:20.237941034 +0000 UTC m=+20.784251555" watchObservedRunningTime="2025-01-30 13:08:20.244922483 +0000 UTC m=+20.791233104" Jan 30 13:08:20.255131 containerd[1721]: time="2025-01-30T13:08:20.253251340Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\"" Jan 30 13:08:20.256609 containerd[1721]: time="2025-01-30T13:08:20.255544156Z" level=info msg="StartContainer for \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\"" Jan 30 13:08:20.278015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897-rootfs.mount: Deactivated successfully. Jan 30 13:08:20.337437 systemd[1]: run-containerd-runc-k8s.io-ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1-runc.7X7pj4.mount: Deactivated successfully. Jan 30 13:08:20.350684 systemd[1]: Started cri-containerd-ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1.scope - libcontainer container ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1. Jan 30 13:08:20.420642 systemd[1]: cri-containerd-ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1.scope: Deactivated successfully. Jan 30 13:08:20.425045 containerd[1721]: time="2025-01-30T13:08:20.424296919Z" level=info msg="StartContainer for \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\" returns successfully" Jan 30 13:08:20.456330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1-rootfs.mount: Deactivated successfully. 
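The kubelet startup-latency entry just above for cilium-operator-5d85765b45-7vhg4 is self-consistent with podStartSLOduration being the end-to-end duration minus the image-pull window. A quick check, with every value copied from that entry (all timestamps fall inside the 13:08 minute, so only the seconds are kept):

    # Self-consistency check of the cilium-operator startup-latency entry above.
    # Values are seconds within the 13:08 minute, copied from that log line.
    created   = 5.0            # podCreationTimestamp     13:08:05
    pull_from = 7.206194449    # firstStartedPulling      13:08:07.206194449
    pull_to   = 19.949001643   # lastFinishedPulling      13:08:19.949001643
    observed  = 20.244922483   # watchObservedRunningTime 13:08:20.244922483

    e2e = observed - created           # 15.244922483 -> podStartE2EDuration
    slo = e2e - (pull_to - pull_from)  # 2.502115289  -> podStartSLOduration
    print(f"E2E {e2e:.9f}s, SLO {slo:.9f}s")

The coredns entries later in this log, where the pull timestamps are the zero time, show SLO equal to E2E, which fits the same reading.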
Jan 30 13:08:20.911803 containerd[1721]: time="2025-01-30T13:08:20.911148074Z" level=info msg="shim disconnected" id=ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1 namespace=k8s.io Jan 30 13:08:20.911803 containerd[1721]: time="2025-01-30T13:08:20.911223074Z" level=warning msg="cleaning up after shim disconnected" id=ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1 namespace=k8s.io Jan 30 13:08:20.911803 containerd[1721]: time="2025-01-30T13:08:20.911235275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:08:21.218839 containerd[1721]: time="2025-01-30T13:08:21.218731994Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:08:21.262565 containerd[1721]: time="2025-01-30T13:08:21.262515595Z" level=info msg="CreateContainer within sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\"" Jan 30 13:08:21.263723 containerd[1721]: time="2025-01-30T13:08:21.263013799Z" level=info msg="StartContainer for \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\"" Jan 30 13:08:21.298633 systemd[1]: Started cri-containerd-06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec.scope - libcontainer container 06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec. Jan 30 13:08:21.331161 containerd[1721]: time="2025-01-30T13:08:21.331098368Z" level=info msg="StartContainer for \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\" returns successfully" Jan 30 13:08:21.440206 kubelet[3303]: I0130 13:08:21.438510 3303 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:08:21.497538 systemd[1]: Created slice kubepods-burstable-pod2dddf0ce_8a70_4234_a900_c81411c159be.slice - libcontainer container kubepods-burstable-pod2dddf0ce_8a70_4234_a900_c81411c159be.slice. Jan 30 13:08:21.516120 systemd[1]: Created slice kubepods-burstable-podde922fbc_33bf_4624_8fdd_d01a7b82559e.slice - libcontainer container kubepods-burstable-podde922fbc_33bf_4624_8fdd_d01a7b82559e.slice. 
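Between 13:08:14 and 13:08:21 the cilium-ns2x5 pod runs its init containers one at a time in the same sandbox 60a8d32e35f3… (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-running cilium-agent container starts. A stdlib-only sketch for recovering that order from a saved copy of this journal; the node.log filename is hypothetical, and the optional backslash in the pattern allows for the escaped quotes inside containerd's msg field:

    import re

    # Extract (container name, id) pairs from containerd's
    # "CreateContainer ... returns container id" completion entries.
    pattern = re.compile(
        r'ContainerMetadata\{Name:([A-Za-z0-9-]+),Attempt:\d+,\} '
        r'returns container id \\?"([0-9a-f]{64})'
    )

    with open("node.log") as fh:  # hypothetical dump of this journal
        for line in fh:
            match = pattern.search(line)
            if match:
                name, cid = match.groups()
                print(f"{name:24s} {cid[:12]}")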
Jan 30 13:08:21.544140 kubelet[3303]: I0130 13:08:21.542373 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xfs8\" (UniqueName: \"kubernetes.io/projected/2dddf0ce-8a70-4234-a900-c81411c159be-kube-api-access-7xfs8\") pod \"coredns-6f6b679f8f-p2csm\" (UID: \"2dddf0ce-8a70-4234-a900-c81411c159be\") " pod="kube-system/coredns-6f6b679f8f-p2csm" Jan 30 13:08:21.544140 kubelet[3303]: I0130 13:08:21.542421 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grcf5\" (UniqueName: \"kubernetes.io/projected/de922fbc-33bf-4624-8fdd-d01a7b82559e-kube-api-access-grcf5\") pod \"coredns-6f6b679f8f-85gwb\" (UID: \"de922fbc-33bf-4624-8fdd-d01a7b82559e\") " pod="kube-system/coredns-6f6b679f8f-85gwb" Jan 30 13:08:21.544140 kubelet[3303]: I0130 13:08:21.542447 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de922fbc-33bf-4624-8fdd-d01a7b82559e-config-volume\") pod \"coredns-6f6b679f8f-85gwb\" (UID: \"de922fbc-33bf-4624-8fdd-d01a7b82559e\") " pod="kube-system/coredns-6f6b679f8f-85gwb" Jan 30 13:08:21.544442 kubelet[3303]: I0130 13:08:21.544420 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dddf0ce-8a70-4234-a900-c81411c159be-config-volume\") pod \"coredns-6f6b679f8f-p2csm\" (UID: \"2dddf0ce-8a70-4234-a900-c81411c159be\") " pod="kube-system/coredns-6f6b679f8f-p2csm" Jan 30 13:08:21.807545 containerd[1721]: time="2025-01-30T13:08:21.807383550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p2csm,Uid:2dddf0ce-8a70-4234-a900-c81411c159be,Namespace:kube-system,Attempt:0,}" Jan 30 13:08:21.821568 containerd[1721]: time="2025-01-30T13:08:21.821528748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-85gwb,Uid:de922fbc-33bf-4624-8fdd-d01a7b82559e,Namespace:kube-system,Attempt:0,}" Jan 30 13:08:22.285072 systemd[1]: run-containerd-runc-k8s.io-06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec-runc.BUGBAM.mount: Deactivated successfully. 
Jan 30 13:08:23.544559 systemd-networkd[1330]: cilium_host: Link UP Jan 30 13:08:23.545089 systemd-networkd[1330]: cilium_net: Link UP Jan 30 13:08:23.545298 systemd-networkd[1330]: cilium_net: Gained carrier Jan 30 13:08:23.545551 systemd-networkd[1330]: cilium_host: Gained carrier Jan 30 13:08:23.749086 systemd-networkd[1330]: cilium_vxlan: Link UP Jan 30 13:08:23.749100 systemd-networkd[1330]: cilium_vxlan: Gained carrier Jan 30 13:08:23.781706 systemd-networkd[1330]: cilium_host: Gained IPv6LL Jan 30 13:08:23.845655 systemd-networkd[1330]: cilium_net: Gained IPv6LL Jan 30 13:08:24.006987 kernel: NET: Registered PF_ALG protocol family Jan 30 13:08:24.701829 systemd-networkd[1330]: lxc_health: Link UP Jan 30 13:08:24.709221 systemd-networkd[1330]: lxc_health: Gained carrier Jan 30 13:08:24.888302 systemd-networkd[1330]: lxc1c8c753faa1f: Link UP Jan 30 13:08:24.897549 kernel: eth0: renamed from tmp77afb Jan 30 13:08:24.906533 systemd-networkd[1330]: lxc1c8c753faa1f: Gained carrier Jan 30 13:08:24.926128 systemd-networkd[1330]: lxcc0c58f1f86e7: Link UP Jan 30 13:08:24.933498 kernel: eth0: renamed from tmp38bee Jan 30 13:08:24.939285 systemd-networkd[1330]: lxcc0c58f1f86e7: Gained carrier Jan 30 13:08:25.693708 systemd-networkd[1330]: cilium_vxlan: Gained IPv6LL Jan 30 13:08:25.757708 systemd-networkd[1330]: lxc_health: Gained IPv6LL Jan 30 13:08:25.794419 kubelet[3303]: I0130 13:08:25.793875 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ns2x5" podStartSLOduration=12.569779578 podStartE2EDuration="20.793854624s" podCreationTimestamp="2025-01-30 13:08:05 +0000 UTC" firstStartedPulling="2025-01-30 13:08:05.925400683 +0000 UTC m=+6.471711204" lastFinishedPulling="2025-01-30 13:08:14.149475629 +0000 UTC m=+14.695786250" observedRunningTime="2025-01-30 13:08:22.240482135 +0000 UTC m=+22.786792656" watchObservedRunningTime="2025-01-30 13:08:25.793854624 +0000 UTC m=+26.340165245" Jan 30 13:08:26.141769 systemd-networkd[1330]: lxc1c8c753faa1f: Gained IPv6LL Jan 30 13:08:26.397624 systemd-networkd[1330]: lxcc0c58f1f86e7: Gained IPv6LL Jan 30 13:08:28.837082 containerd[1721]: time="2025-01-30T13:08:28.836566963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:08:28.837082 containerd[1721]: time="2025-01-30T13:08:28.836639663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:08:28.837082 containerd[1721]: time="2025-01-30T13:08:28.836656163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:28.837082 containerd[1721]: time="2025-01-30T13:08:28.836752464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:28.849557 containerd[1721]: time="2025-01-30T13:08:28.849410708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:08:28.849773 containerd[1721]: time="2025-01-30T13:08:28.849732309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:08:28.849932 containerd[1721]: time="2025-01-30T13:08:28.849905010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:28.850181 containerd[1721]: time="2025-01-30T13:08:28.850145410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:08:28.880654 systemd[1]: Started cri-containerd-38bee8270d1bee47f271f5af695129f45439756b094917d0fd0a219cfecf958f.scope - libcontainer container 38bee8270d1bee47f271f5af695129f45439756b094917d0fd0a219cfecf958f. Jan 30 13:08:28.899589 systemd[1]: run-containerd-runc-k8s.io-77afb42c7b35e822c5a4e086383370db493c35b271fd91faaa48351ea3642a94-runc.FyKUcx.mount: Deactivated successfully. Jan 30 13:08:28.909636 systemd[1]: Started cri-containerd-77afb42c7b35e822c5a4e086383370db493c35b271fd91faaa48351ea3642a94.scope - libcontainer container 77afb42c7b35e822c5a4e086383370db493c35b271fd91faaa48351ea3642a94. Jan 30 13:08:28.989862 containerd[1721]: time="2025-01-30T13:08:28.989756696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p2csm,Uid:2dddf0ce-8a70-4234-a900-c81411c159be,Namespace:kube-system,Attempt:0,} returns sandbox id \"77afb42c7b35e822c5a4e086383370db493c35b271fd91faaa48351ea3642a94\"" Jan 30 13:08:28.996297 containerd[1721]: time="2025-01-30T13:08:28.996098918Z" level=info msg="CreateContainer within sandbox \"77afb42c7b35e822c5a4e086383370db493c35b271fd91faaa48351ea3642a94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:08:29.001444 containerd[1721]: time="2025-01-30T13:08:29.001394436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-85gwb,Uid:de922fbc-33bf-4624-8fdd-d01a7b82559e,Namespace:kube-system,Attempt:0,} returns sandbox id \"38bee8270d1bee47f271f5af695129f45439756b094917d0fd0a219cfecf958f\"" Jan 30 13:08:29.008877 containerd[1721]: time="2025-01-30T13:08:29.008752062Z" level=info msg="CreateContainer within sandbox \"38bee8270d1bee47f271f5af695129f45439756b094917d0fd0a219cfecf958f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:08:29.070413 containerd[1721]: time="2025-01-30T13:08:29.070361476Z" level=info msg="CreateContainer within sandbox \"77afb42c7b35e822c5a4e086383370db493c35b271fd91faaa48351ea3642a94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c5c82e1906fe4740b4e94bc5f7091034781881893b02e9eb5962b7672aad111\"" Jan 30 13:08:29.072008 containerd[1721]: time="2025-01-30T13:08:29.070997079Z" level=info msg="StartContainer for \"6c5c82e1906fe4740b4e94bc5f7091034781881893b02e9eb5962b7672aad111\"" Jan 30 13:08:29.078837 containerd[1721]: time="2025-01-30T13:08:29.078783206Z" level=info msg="CreateContainer within sandbox \"38bee8270d1bee47f271f5af695129f45439756b094917d0fd0a219cfecf958f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0af3e5c821d086e07df773d5746533e161c7d3170c62e4d4b51b4613b88581e7\"" Jan 30 13:08:29.080324 containerd[1721]: time="2025-01-30T13:08:29.080298311Z" level=info msg="StartContainer for \"0af3e5c821d086e07df773d5746533e161c7d3170c62e4d4b51b4613b88581e7\"" Jan 30 13:08:29.103779 systemd[1]: Started cri-containerd-6c5c82e1906fe4740b4e94bc5f7091034781881893b02e9eb5962b7672aad111.scope - libcontainer container 6c5c82e1906fe4740b4e94bc5f7091034781881893b02e9eb5962b7672aad111. Jan 30 13:08:29.122634 systemd[1]: Started cri-containerd-0af3e5c821d086e07df773d5746533e161c7d3170c62e4d4b51b4613b88581e7.scope - libcontainer container 0af3e5c821d086e07df773d5746533e161c7d3170c62e4d4b51b4613b88581e7. 
Jan 30 13:08:29.154191 containerd[1721]: time="2025-01-30T13:08:29.154067768Z" level=info msg="StartContainer for \"6c5c82e1906fe4740b4e94bc5f7091034781881893b02e9eb5962b7672aad111\" returns successfully" Jan 30 13:08:29.171125 containerd[1721]: time="2025-01-30T13:08:29.171076327Z" level=info msg="StartContainer for \"0af3e5c821d086e07df773d5746533e161c7d3170c62e4d4b51b4613b88581e7\" returns successfully" Jan 30 13:08:29.254177 kubelet[3303]: I0130 13:08:29.253607 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-p2csm" podStartSLOduration=24.253586514 podStartE2EDuration="24.253586514s" podCreationTimestamp="2025-01-30 13:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:08:29.251794907 +0000 UTC m=+29.798105528" watchObservedRunningTime="2025-01-30 13:08:29.253586514 +0000 UTC m=+29.799897035" Jan 30 13:08:29.303739 kubelet[3303]: I0130 13:08:29.302971 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-85gwb" podStartSLOduration=24.302949185 podStartE2EDuration="24.302949185s" podCreationTimestamp="2025-01-30 13:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:08:29.300569377 +0000 UTC m=+29.846879998" watchObservedRunningTime="2025-01-30 13:08:29.302949185 +0000 UTC m=+29.849259706" Jan 30 13:09:59.155702 systemd[1]: Started sshd@7-10.200.4.23:22-10.200.16.10:58884.service - OpenSSH per-connection server daemon (10.200.16.10:58884). Jan 30 13:09:59.799111 sshd[4688]: Accepted publickey for core from 10.200.16.10 port 58884 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:09:59.800682 sshd-session[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:09:59.805460 systemd-logind[1698]: New session 10 of user core. Jan 30 13:09:59.810648 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:10:00.353565 sshd[4690]: Connection closed by 10.200.16.10 port 58884 Jan 30 13:10:00.354390 sshd-session[4688]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:00.358853 systemd[1]: sshd@7-10.200.4.23:22-10.200.16.10:58884.service: Deactivated successfully. Jan 30 13:10:00.361443 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:10:00.362606 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:10:00.363790 systemd-logind[1698]: Removed session 10. Jan 30 13:10:05.474797 systemd[1]: Started sshd@8-10.200.4.23:22-10.200.16.10:58898.service - OpenSSH per-connection server daemon (10.200.16.10:58898). Jan 30 13:10:06.116413 sshd[4704]: Accepted publickey for core from 10.200.16.10 port 58898 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:06.118035 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:06.124390 systemd-logind[1698]: New session 11 of user core. Jan 30 13:10:06.128657 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:10:06.633831 sshd[4706]: Connection closed by 10.200.16.10 port 58898 Jan 30 13:10:06.634686 sshd-session[4704]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:06.638487 systemd[1]: sshd@8-10.200.4.23:22-10.200.16.10:58898.service: Deactivated successfully. 
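At this point the node has logged an "Observed pod startup duration" entry for each of the five kube-system pods it brought up (kube-proxy, cilium-operator, cilium-ns2x5 and the two coredns replicas). A companion sketch to the one above that tabulates those entries from a saved journal (node.log again hypothetical):

    import re

    # Print pod name and end-to-end startup duration for every
    # "Observed pod startup duration" entry in a saved copy of this journal.
    pattern = re.compile(r'pod="([^"]+)".*?podStartE2EDuration="([0-9.]+)s"')

    with open("node.log") as fh:
        for line in fh:
            if "Observed pod startup duration" not in line:
                continue
            match = pattern.search(line)
            if match:
                pod, e2e = match.groups()
                print(f"{pod:45s} {float(e2e):7.3f} s")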
Jan 30 13:10:06.640615 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:10:06.641363 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:10:06.642293 systemd-logind[1698]: Removed session 11. Jan 30 13:10:11.755785 systemd[1]: Started sshd@9-10.200.4.23:22-10.200.16.10:50702.service - OpenSSH per-connection server daemon (10.200.16.10:50702). Jan 30 13:10:12.395917 sshd[4719]: Accepted publickey for core from 10.200.16.10 port 50702 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:12.397322 sshd-session[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:12.402052 systemd-logind[1698]: New session 12 of user core. Jan 30 13:10:12.408879 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:10:12.909150 sshd[4721]: Connection closed by 10.200.16.10 port 50702 Jan 30 13:10:12.909803 sshd-session[4719]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:12.913501 systemd[1]: sshd@9-10.200.4.23:22-10.200.16.10:50702.service: Deactivated successfully. Jan 30 13:10:12.916258 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:10:12.918005 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:10:12.919210 systemd-logind[1698]: Removed session 12. Jan 30 13:10:18.026924 systemd[1]: Started sshd@10-10.200.4.23:22-10.200.16.10:45976.service - OpenSSH per-connection server daemon (10.200.16.10:45976). Jan 30 13:10:18.672338 sshd[4733]: Accepted publickey for core from 10.200.16.10 port 45976 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:18.674041 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:18.681150 systemd-logind[1698]: New session 13 of user core. Jan 30 13:10:18.687648 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:10:19.178955 sshd[4735]: Connection closed by 10.200.16.10 port 45976 Jan 30 13:10:19.179837 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:19.183297 systemd[1]: sshd@10-10.200.4.23:22-10.200.16.10:45976.service: Deactivated successfully. Jan 30 13:10:19.186110 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:10:19.188142 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:10:19.189464 systemd-logind[1698]: Removed session 13. Jan 30 13:10:19.295940 systemd[1]: Started sshd@11-10.200.4.23:22-10.200.16.10:45990.service - OpenSSH per-connection server daemon (10.200.16.10:45990). Jan 30 13:10:19.943242 sshd[4747]: Accepted publickey for core from 10.200.16.10 port 45990 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:19.944915 sshd-session[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:19.949056 systemd-logind[1698]: New session 14 of user core. Jan 30 13:10:19.953941 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:10:20.484629 sshd[4749]: Connection closed by 10.200.16.10 port 45990 Jan 30 13:10:20.486973 sshd-session[4747]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:20.491454 systemd[1]: sshd@11-10.200.4.23:22-10.200.16.10:45990.service: Deactivated successfully. Jan 30 13:10:20.494019 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:10:20.497616 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit. 
Jan 30 13:10:20.499616 systemd-logind[1698]: Removed session 14. Jan 30 13:10:20.599848 systemd[1]: Started sshd@12-10.200.4.23:22-10.200.16.10:45996.service - OpenSSH per-connection server daemon (10.200.16.10:45996). Jan 30 13:10:21.246174 sshd[4758]: Accepted publickey for core from 10.200.16.10 port 45996 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:21.247881 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:21.252519 systemd-logind[1698]: New session 15 of user core. Jan 30 13:10:21.260625 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:10:21.759213 sshd[4760]: Connection closed by 10.200.16.10 port 45996 Jan 30 13:10:21.759772 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:21.763378 systemd[1]: sshd@12-10.200.4.23:22-10.200.16.10:45996.service: Deactivated successfully. Jan 30 13:10:21.765563 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:10:21.767014 systemd-logind[1698]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:10:21.768424 systemd-logind[1698]: Removed session 15. Jan 30 13:10:26.874688 systemd[1]: Started sshd@13-10.200.4.23:22-10.200.16.10:46560.service - OpenSSH per-connection server daemon (10.200.16.10:46560). Jan 30 13:10:27.525842 sshd[4771]: Accepted publickey for core from 10.200.16.10 port 46560 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:27.527637 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:27.533058 systemd-logind[1698]: New session 16 of user core. Jan 30 13:10:27.536634 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:10:28.055642 sshd[4773]: Connection closed by 10.200.16.10 port 46560 Jan 30 13:10:28.056464 sshd-session[4771]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:28.060841 systemd[1]: sshd@13-10.200.4.23:22-10.200.16.10:46560.service: Deactivated successfully. Jan 30 13:10:28.063364 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:10:28.064157 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:10:28.065244 systemd-logind[1698]: Removed session 16. Jan 30 13:10:33.169601 systemd[1]: Started sshd@14-10.200.4.23:22-10.200.16.10:46574.service - OpenSSH per-connection server daemon (10.200.16.10:46574). Jan 30 13:10:33.810592 sshd[4784]: Accepted publickey for core from 10.200.16.10 port 46574 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:33.812326 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:33.817422 systemd-logind[1698]: New session 17 of user core. Jan 30 13:10:33.822642 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:10:34.323666 sshd[4786]: Connection closed by 10.200.16.10 port 46574 Jan 30 13:10:34.324461 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:34.328279 systemd[1]: sshd@14-10.200.4.23:22-10.200.16.10:46574.service: Deactivated successfully. Jan 30 13:10:34.330327 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:10:34.331153 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:10:34.332286 systemd-logind[1698]: Removed session 17. 
Jan 30 13:10:34.437831 systemd[1]: Started sshd@15-10.200.4.23:22-10.200.16.10:46584.service - OpenSSH per-connection server daemon (10.200.16.10:46584). Jan 30 13:10:35.091682 sshd[4796]: Accepted publickey for core from 10.200.16.10 port 46584 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:35.093312 sshd-session[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:35.097934 systemd-logind[1698]: New session 18 of user core. Jan 30 13:10:35.106625 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:10:35.652086 sshd[4798]: Connection closed by 10.200.16.10 port 46584 Jan 30 13:10:35.652933 sshd-session[4796]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:35.656141 systemd[1]: sshd@15-10.200.4.23:22-10.200.16.10:46584.service: Deactivated successfully. Jan 30 13:10:35.658330 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:10:35.660102 systemd-logind[1698]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:10:35.661221 systemd-logind[1698]: Removed session 18. Jan 30 13:10:35.769808 systemd[1]: Started sshd@16-10.200.4.23:22-10.200.16.10:46586.service - OpenSSH per-connection server daemon (10.200.16.10:46586). Jan 30 13:10:36.410095 sshd[4806]: Accepted publickey for core from 10.200.16.10 port 46586 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:36.411600 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:36.416269 systemd-logind[1698]: New session 19 of user core. Jan 30 13:10:36.422620 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:10:38.387084 sshd[4808]: Connection closed by 10.200.16.10 port 46586 Jan 30 13:10:38.387584 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:38.391444 systemd[1]: sshd@16-10.200.4.23:22-10.200.16.10:46586.service: Deactivated successfully. Jan 30 13:10:38.393942 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:10:38.395871 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:10:38.397071 systemd-logind[1698]: Removed session 19. Jan 30 13:10:38.505732 systemd[1]: Started sshd@17-10.200.4.23:22-10.200.16.10:39886.service - OpenSSH per-connection server daemon (10.200.16.10:39886). Jan 30 13:10:39.147017 sshd[4826]: Accepted publickey for core from 10.200.16.10 port 39886 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:39.148603 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:39.153562 systemd-logind[1698]: New session 20 of user core. Jan 30 13:10:39.159644 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:10:39.759131 sshd[4828]: Connection closed by 10.200.16.10 port 39886 Jan 30 13:10:39.759861 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:39.762739 systemd[1]: sshd@17-10.200.4.23:22-10.200.16.10:39886.service: Deactivated successfully. Jan 30 13:10:39.765132 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:10:39.766749 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:10:39.767777 systemd-logind[1698]: Removed session 20. Jan 30 13:10:39.876693 systemd[1]: Started sshd@18-10.200.4.23:22-10.200.16.10:39898.service - OpenSSH per-connection server daemon (10.200.16.10:39898). 
Jan 30 13:10:40.519494 sshd[4837]: Accepted publickey for core from 10.200.16.10 port 39898 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:40.520942 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:40.525748 systemd-logind[1698]: New session 21 of user core. Jan 30 13:10:40.533610 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:10:41.031418 sshd[4839]: Connection closed by 10.200.16.10 port 39898 Jan 30 13:10:41.032262 sshd-session[4837]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:41.035552 systemd[1]: sshd@18-10.200.4.23:22-10.200.16.10:39898.service: Deactivated successfully. Jan 30 13:10:41.037967 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:10:41.039437 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:10:41.040629 systemd-logind[1698]: Removed session 21. Jan 30 13:10:46.152860 systemd[1]: Started sshd@19-10.200.4.23:22-10.200.16.10:56086.service - OpenSSH per-connection server daemon (10.200.16.10:56086). Jan 30 13:10:46.790972 sshd[4853]: Accepted publickey for core from 10.200.16.10 port 56086 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:46.792361 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:46.796978 systemd-logind[1698]: New session 22 of user core. Jan 30 13:10:46.802664 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:10:47.296879 sshd[4855]: Connection closed by 10.200.16.10 port 56086 Jan 30 13:10:47.297727 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:47.302091 systemd[1]: sshd@19-10.200.4.23:22-10.200.16.10:56086.service: Deactivated successfully. Jan 30 13:10:47.304262 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:10:47.305070 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:10:47.306091 systemd-logind[1698]: Removed session 22. Jan 30 13:10:52.418837 systemd[1]: Started sshd@20-10.200.4.23:22-10.200.16.10:56102.service - OpenSSH per-connection server daemon (10.200.16.10:56102). Jan 30 13:10:53.061444 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 56102 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:53.063169 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:53.068199 systemd-logind[1698]: New session 23 of user core. Jan 30 13:10:53.074656 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:10:53.568019 sshd[4868]: Connection closed by 10.200.16.10 port 56102 Jan 30 13:10:53.568802 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:53.571796 systemd[1]: sshd@20-10.200.4.23:22-10.200.16.10:56102.service: Deactivated successfully. Jan 30 13:10:53.574201 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:10:53.575818 systemd-logind[1698]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:10:53.577151 systemd-logind[1698]: Removed session 23. Jan 30 13:10:58.692793 systemd[1]: Started sshd@21-10.200.4.23:22-10.200.16.10:36706.service - OpenSSH per-connection server daemon (10.200.16.10:36706). 
Jan 30 13:10:59.342759 sshd[4878]: Accepted publickey for core from 10.200.16.10 port 36706 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:10:59.344461 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:10:59.350164 systemd-logind[1698]: New session 24 of user core. Jan 30 13:10:59.358645 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:10:59.853081 sshd[4880]: Connection closed by 10.200.16.10 port 36706 Jan 30 13:10:59.853941 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Jan 30 13:10:59.857391 systemd[1]: sshd@21-10.200.4.23:22-10.200.16.10:36706.service: Deactivated successfully. Jan 30 13:10:59.859950 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:10:59.861838 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:10:59.862757 systemd-logind[1698]: Removed session 24. Jan 30 13:10:59.970782 systemd[1]: Started sshd@22-10.200.4.23:22-10.200.16.10:36714.service - OpenSSH per-connection server daemon (10.200.16.10:36714). Jan 30 13:11:00.608493 sshd[4891]: Accepted publickey for core from 10.200.16.10 port 36714 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:11:00.609946 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:11:00.614503 systemd-logind[1698]: New session 25 of user core. Jan 30 13:11:00.624632 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:11:02.279735 systemd[1]: run-containerd-runc-k8s.io-06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec-runc.pYxCYr.mount: Deactivated successfully. Jan 30 13:11:02.282242 containerd[1721]: time="2025-01-30T13:11:02.281448102Z" level=info msg="StopContainer for \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\" with timeout 30 (s)" Jan 30 13:11:02.283053 containerd[1721]: time="2025-01-30T13:11:02.282388907Z" level=info msg="Stop container \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\" with signal terminated" Jan 30 13:11:02.294639 containerd[1721]: time="2025-01-30T13:11:02.294594776Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:11:02.300196 systemd[1]: cri-containerd-e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2.scope: Deactivated successfully. Jan 30 13:11:02.305891 containerd[1721]: time="2025-01-30T13:11:02.305830039Z" level=info msg="StopContainer for \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\" with timeout 2 (s)" Jan 30 13:11:02.306437 containerd[1721]: time="2025-01-30T13:11:02.306364242Z" level=info msg="Stop container \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\" with signal terminated" Jan 30 13:11:02.316894 systemd-networkd[1330]: lxc_health: Link DOWN Jan 30 13:11:02.316903 systemd-networkd[1330]: lxc_health: Lost carrier Jan 30 13:11:02.338655 systemd[1]: cri-containerd-06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec.scope: Deactivated successfully. Jan 30 13:11:02.339137 systemd[1]: cri-containerd-06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec.scope: Consumed 7.429s CPU time. 
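The scope teardown above records that the cilium-agent container consumed 7.429 s of CPU time; set against its wall-clock lifetime in this same log (StartContainer returned at 13:08:21.331161, the scope was deactivated at 13:11:02.338655), that averages out to a few percent of one core. A quick check with those two timestamps:

    from datetime import datetime

    # Average CPU share of the cilium-agent container, using only figures from
    # this journal: 7.429 s of CPU over the container's wall-clock lifetime.
    fmt = "%H:%M:%S.%f"
    start = datetime.strptime("13:08:21.331161", fmt)  # StartContainer returned
    stop = datetime.strptime("13:11:02.338655", fmt)   # scope deactivated
    wall = (stop - start).total_seconds()               # ≈ 161.0 s
    print(f"{100 * 7.429 / wall:.1f}% of one core on average")  # ≈ 4.6%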
Jan 30 13:11:02.350371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2-rootfs.mount: Deactivated successfully. Jan 30 13:11:02.366244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec-rootfs.mount: Deactivated successfully. Jan 30 13:11:02.422184 containerd[1721]: time="2025-01-30T13:11:02.422090594Z" level=info msg="shim disconnected" id=e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2 namespace=k8s.io Jan 30 13:11:02.422184 containerd[1721]: time="2025-01-30T13:11:02.422180995Z" level=warning msg="cleaning up after shim disconnected" id=e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2 namespace=k8s.io Jan 30 13:11:02.422184 containerd[1721]: time="2025-01-30T13:11:02.422196295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:02.423355 containerd[1721]: time="2025-01-30T13:11:02.423123000Z" level=info msg="shim disconnected" id=06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec namespace=k8s.io Jan 30 13:11:02.423355 containerd[1721]: time="2025-01-30T13:11:02.423180200Z" level=warning msg="cleaning up after shim disconnected" id=06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec namespace=k8s.io Jan 30 13:11:02.423355 containerd[1721]: time="2025-01-30T13:11:02.423191700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:02.446977 containerd[1721]: time="2025-01-30T13:11:02.446703033Z" level=info msg="StopContainer for \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\" returns successfully" Jan 30 13:11:02.446977 containerd[1721]: time="2025-01-30T13:11:02.446908334Z" level=info msg="StopContainer for \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\" returns successfully" Jan 30 13:11:02.447620 containerd[1721]: time="2025-01-30T13:11:02.447556438Z" level=info msg="StopPodSandbox for \"2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f\"" Jan 30 13:11:02.447735 containerd[1721]: time="2025-01-30T13:11:02.447566438Z" level=info msg="StopPodSandbox for \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\"" Jan 30 13:11:02.447735 containerd[1721]: time="2025-01-30T13:11:02.447674738Z" level=info msg="Container to stop \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:11:02.447735 containerd[1721]: time="2025-01-30T13:11:02.447712239Z" level=info msg="Container to stop \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:11:02.447735 containerd[1721]: time="2025-01-30T13:11:02.447724139Z" level=info msg="Container to stop \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:11:02.447893 containerd[1721]: time="2025-01-30T13:11:02.447735939Z" level=info msg="Container to stop \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:11:02.447893 containerd[1721]: time="2025-01-30T13:11:02.447747439Z" level=info msg="Container to stop \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 
13:11:02.449610 containerd[1721]: time="2025-01-30T13:11:02.447628638Z" level=info msg="Container to stop \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:11:02.451090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539-shm.mount: Deactivated successfully. Jan 30 13:11:02.458574 systemd[1]: cri-containerd-60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539.scope: Deactivated successfully. Jan 30 13:11:02.462320 systemd[1]: cri-containerd-2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f.scope: Deactivated successfully. Jan 30 13:11:02.498008 containerd[1721]: time="2025-01-30T13:11:02.497786121Z" level=info msg="shim disconnected" id=60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539 namespace=k8s.io Jan 30 13:11:02.498008 containerd[1721]: time="2025-01-30T13:11:02.497848721Z" level=warning msg="cleaning up after shim disconnected" id=60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539 namespace=k8s.io Jan 30 13:11:02.498008 containerd[1721]: time="2025-01-30T13:11:02.497859921Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:02.499118 containerd[1721]: time="2025-01-30T13:11:02.498359124Z" level=info msg="shim disconnected" id=2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f namespace=k8s.io Jan 30 13:11:02.499118 containerd[1721]: time="2025-01-30T13:11:02.498400724Z" level=warning msg="cleaning up after shim disconnected" id=2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f namespace=k8s.io Jan 30 13:11:02.499118 containerd[1721]: time="2025-01-30T13:11:02.498410124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:02.520710 containerd[1721]: time="2025-01-30T13:11:02.520665050Z" level=info msg="TearDown network for sandbox \"2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f\" successfully" Jan 30 13:11:02.520889 containerd[1721]: time="2025-01-30T13:11:02.520870351Z" level=info msg="StopPodSandbox for \"2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f\" returns successfully" Jan 30 13:11:02.521251 containerd[1721]: time="2025-01-30T13:11:02.521218353Z" level=info msg="TearDown network for sandbox \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" successfully" Jan 30 13:11:02.521350 containerd[1721]: time="2025-01-30T13:11:02.521264553Z" level=info msg="StopPodSandbox for \"60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539\" returns successfully" Jan 30 13:11:02.547809 kubelet[3303]: I0130 13:11:02.547666 3303 scope.go:117] "RemoveContainer" containerID="e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2" Jan 30 13:11:02.553004 containerd[1721]: time="2025-01-30T13:11:02.552958832Z" level=info msg="RemoveContainer for \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\"" Jan 30 13:11:02.559349 containerd[1721]: time="2025-01-30T13:11:02.559303167Z" level=info msg="RemoveContainer for \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\" returns successfully" Jan 30 13:11:02.559594 kubelet[3303]: I0130 13:11:02.559572 3303 scope.go:117] "RemoveContainer" containerID="e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2" Jan 30 13:11:02.559798 containerd[1721]: time="2025-01-30T13:11:02.559760670Z" level=error msg="ContainerStatus for 
\"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\": not found" Jan 30 13:11:02.560027 kubelet[3303]: E0130 13:11:02.559997 3303 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\": not found" containerID="e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2" Jan 30 13:11:02.560133 kubelet[3303]: I0130 13:11:02.560034 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2"} err="failed to get container status \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e10210d0e079fbb668b66867dc8a84ae2f3fb34c5c52409b31fa85f979d436e2\": not found" Jan 30 13:11:02.560203 kubelet[3303]: I0130 13:11:02.560136 3303 scope.go:117] "RemoveContainer" containerID="06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec" Jan 30 13:11:02.561205 containerd[1721]: time="2025-01-30T13:11:02.561176978Z" level=info msg="RemoveContainer for \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\"" Jan 30 13:11:02.569593 containerd[1721]: time="2025-01-30T13:11:02.569418124Z" level=info msg="RemoveContainer for \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\" returns successfully" Jan 30 13:11:02.569903 kubelet[3303]: I0130 13:11:02.569855 3303 scope.go:117] "RemoveContainer" containerID="ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1" Jan 30 13:11:02.571268 containerd[1721]: time="2025-01-30T13:11:02.571237935Z" level=info msg="RemoveContainer for \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\"" Jan 30 13:11:02.578337 containerd[1721]: time="2025-01-30T13:11:02.578301374Z" level=info msg="RemoveContainer for \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\" returns successfully" Jan 30 13:11:02.578533 kubelet[3303]: I0130 13:11:02.578493 3303 scope.go:117] "RemoveContainer" containerID="bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897" Jan 30 13:11:02.579598 containerd[1721]: time="2025-01-30T13:11:02.579558482Z" level=info msg="RemoveContainer for \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\"" Jan 30 13:11:02.585904 containerd[1721]: time="2025-01-30T13:11:02.585875417Z" level=info msg="RemoveContainer for \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\" returns successfully" Jan 30 13:11:02.586053 kubelet[3303]: I0130 13:11:02.586027 3303 scope.go:117] "RemoveContainer" containerID="5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae" Jan 30 13:11:02.587049 containerd[1721]: time="2025-01-30T13:11:02.587011024Z" level=info msg="RemoveContainer for \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\"" Jan 30 13:11:02.593459 containerd[1721]: time="2025-01-30T13:11:02.593427260Z" level=info msg="RemoveContainer for \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\" returns successfully" Jan 30 13:11:02.593763 kubelet[3303]: I0130 13:11:02.593611 3303 scope.go:117] "RemoveContainer" containerID="556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc" 
Jan 30 13:11:02.595522 containerd[1721]: time="2025-01-30T13:11:02.595495071Z" level=info msg="RemoveContainer for \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\"" Jan 30 13:11:02.602987 containerd[1721]: time="2025-01-30T13:11:02.602944313Z" level=info msg="RemoveContainer for \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\" returns successfully" Jan 30 13:11:02.603229 kubelet[3303]: I0130 13:11:02.603125 3303 scope.go:117] "RemoveContainer" containerID="06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec" Jan 30 13:11:02.603450 containerd[1721]: time="2025-01-30T13:11:02.603414016Z" level=error msg="ContainerStatus for \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\": not found" Jan 30 13:11:02.603646 kubelet[3303]: E0130 13:11:02.603608 3303 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\": not found" containerID="06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec" Jan 30 13:11:02.603758 kubelet[3303]: I0130 13:11:02.603641 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec"} err="failed to get container status \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\": rpc error: code = NotFound desc = an error occurred when try to find container \"06fb2045ffd9f7dbb6d604cd2935cd43b1ed3198879b2efab79dd4c96d8c5aec\": not found" Jan 30 13:11:02.603758 kubelet[3303]: I0130 13:11:02.603666 3303 scope.go:117] "RemoveContainer" containerID="ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1" Jan 30 13:11:02.603980 containerd[1721]: time="2025-01-30T13:11:02.603908719Z" level=error msg="ContainerStatus for \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\": not found" Jan 30 13:11:02.604065 kubelet[3303]: E0130 13:11:02.604040 3303 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\": not found" containerID="ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1" Jan 30 13:11:02.604124 kubelet[3303]: I0130 13:11:02.604070 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1"} err="failed to get container status \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed436c0adb7d00352bad9205f542a33b3c53901c1e07a3911a6f25f5300166e1\": not found" Jan 30 13:11:02.604124 kubelet[3303]: I0130 13:11:02.604094 3303 scope.go:117] "RemoveContainer" containerID="bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897" Jan 30 13:11:02.604307 containerd[1721]: time="2025-01-30T13:11:02.604251421Z" level=error msg="ContainerStatus for \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\": not found" Jan 30 13:11:02.604417 kubelet[3303]: E0130 13:11:02.604398 3303 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\": not found" containerID="bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897" Jan 30 13:11:02.604572 kubelet[3303]: I0130 13:11:02.604428 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897"} err="failed to get container status \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\": rpc error: code = NotFound desc = an error occurred when try to find container \"bca224d12f0e1d8c306fc513da16a852b78c090460437452e3561956909f9897\": not found" Jan 30 13:11:02.604572 kubelet[3303]: I0130 13:11:02.604447 3303 scope.go:117] "RemoveContainer" containerID="5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae" Jan 30 13:11:02.604757 containerd[1721]: time="2025-01-30T13:11:02.604625623Z" level=error msg="ContainerStatus for \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\": not found" Jan 30 13:11:02.604838 kubelet[3303]: E0130 13:11:02.604808 3303 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\": not found" containerID="5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae" Jan 30 13:11:02.604899 kubelet[3303]: I0130 13:11:02.604838 3303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae"} err="failed to get container status \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"5868851e2690f407a09da608bfbf2ec85e253a1159dc6b9585f88117036529ae\": not found" Jan 30 13:11:02.604899 kubelet[3303]: I0130 13:11:02.604857 3303 scope.go:117] "RemoveContainer" containerID="556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc" Jan 30 13:11:02.605098 containerd[1721]: time="2025-01-30T13:11:02.605027725Z" level=error msg="ContainerStatus for \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\": not found" Jan 30 13:11:02.605167 kubelet[3303]: E0130 13:11:02.605130 3303 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\": not found" containerID="556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc" Jan 30 13:11:02.605167 kubelet[3303]: I0130 13:11:02.605152 3303 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc"} err="failed to get container status \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\": rpc error: code = NotFound desc = an error occurred when try to find container \"556c9ced5282625dee153ebac7473ea284630269a8b163bba1d71d680ea7bacc\": not found" Jan 30 13:11:02.707827 kubelet[3303]: I0130 13:11:02.707766 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-cgroup\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.707827 kubelet[3303]: I0130 13:11:02.707833 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77b9d498-9654-4009-8d83-7ab065d09c75-hubble-tls\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708091 kubelet[3303]: I0130 13:11:02.707863 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-lib-modules\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708091 kubelet[3303]: I0130 13:11:02.707885 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-bpf-maps\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708091 kubelet[3303]: I0130 13:11:02.707907 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-hostproc\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708091 kubelet[3303]: I0130 13:11:02.707932 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2489171-fd17-4c85-b161-7d468aaddc51-cilium-config-path\") pod \"f2489171-fd17-4c85-b161-7d468aaddc51\" (UID: \"f2489171-fd17-4c85-b161-7d468aaddc51\") " Jan 30 13:11:02.708091 kubelet[3303]: I0130 13:11:02.707958 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-run\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708091 kubelet[3303]: I0130 13:11:02.707984 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77b9d498-9654-4009-8d83-7ab065d09c75-clustermesh-secrets\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708401 kubelet[3303]: I0130 13:11:02.708007 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-host-proc-sys-net\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708401 kubelet[3303]: I0130 
13:11:02.708033 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7vp6\" (UniqueName: \"kubernetes.io/projected/f2489171-fd17-4c85-b161-7d468aaddc51-kube-api-access-t7vp6\") pod \"f2489171-fd17-4c85-b161-7d468aaddc51\" (UID: \"f2489171-fd17-4c85-b161-7d468aaddc51\") " Jan 30 13:11:02.708401 kubelet[3303]: I0130 13:11:02.708060 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cni-path\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708401 kubelet[3303]: I0130 13:11:02.708091 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-xtables-lock\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708401 kubelet[3303]: I0130 13:11:02.708117 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-etc-cni-netd\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708401 kubelet[3303]: I0130 13:11:02.708147 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t9qx\" (UniqueName: \"kubernetes.io/projected/77b9d498-9654-4009-8d83-7ab065d09c75-kube-api-access-8t9qx\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708740 kubelet[3303]: I0130 13:11:02.708178 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-config-path\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708740 kubelet[3303]: I0130 13:11:02.708204 3303 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-host-proc-sys-kernel\") pod \"77b9d498-9654-4009-8d83-7ab065d09c75\" (UID: \"77b9d498-9654-4009-8d83-7ab065d09c75\") " Jan 30 13:11:02.708740 kubelet[3303]: I0130 13:11:02.708296 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.708740 kubelet[3303]: I0130 13:11:02.708352 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.709990 kubelet[3303]: I0130 13:11:02.709011 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.709990 kubelet[3303]: I0130 13:11:02.709067 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.709990 kubelet[3303]: I0130 13:11:02.709090 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.709990 kubelet[3303]: I0130 13:11:02.709107 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-hostproc" (OuterVolumeSpecName: "hostproc") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.712724 kubelet[3303]: I0130 13:11:02.712576 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2489171-fd17-4c85-b161-7d468aaddc51-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f2489171-fd17-4c85-b161-7d468aaddc51" (UID: "f2489171-fd17-4c85-b161-7d468aaddc51"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:11:02.712724 kubelet[3303]: I0130 13:11:02.712647 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.713200 kubelet[3303]: I0130 13:11:02.713071 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cni-path" (OuterVolumeSpecName: "cni-path") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.713200 kubelet[3303]: I0130 13:11:02.713118 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.713200 kubelet[3303]: I0130 13:11:02.713140 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:11:02.715753 kubelet[3303]: I0130 13:11:02.715593 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b9d498-9654-4009-8d83-7ab065d09c75-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:11:02.718006 kubelet[3303]: I0130 13:11:02.717969 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b9d498-9654-4009-8d83-7ab065d09c75-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:11:02.718561 kubelet[3303]: I0130 13:11:02.718077 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2489171-fd17-4c85-b161-7d468aaddc51-kube-api-access-t7vp6" (OuterVolumeSpecName: "kube-api-access-t7vp6") pod "f2489171-fd17-4c85-b161-7d468aaddc51" (UID: "f2489171-fd17-4c85-b161-7d468aaddc51"). InnerVolumeSpecName "kube-api-access-t7vp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:11:02.719009 kubelet[3303]: I0130 13:11:02.718982 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b9d498-9654-4009-8d83-7ab065d09c75-kube-api-access-8t9qx" (OuterVolumeSpecName: "kube-api-access-8t9qx") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "kube-api-access-8t9qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:11:02.719148 kubelet[3303]: I0130 13:11:02.719123 3303 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "77b9d498-9654-4009-8d83-7ab065d09c75" (UID: "77b9d498-9654-4009-8d83-7ab065d09c75"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:11:02.809421 kubelet[3303]: I0130 13:11:02.809283 3303 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-run\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809421 kubelet[3303]: I0130 13:11:02.809319 3303 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77b9d498-9654-4009-8d83-7ab065d09c75-clustermesh-secrets\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809421 kubelet[3303]: I0130 13:11:02.809335 3303 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-host-proc-sys-net\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809421 kubelet[3303]: I0130 13:11:02.809347 3303 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cni-path\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809421 kubelet[3303]: I0130 13:11:02.809360 3303 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-xtables-lock\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809421 kubelet[3303]: I0130 13:11:02.809373 3303 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-etc-cni-netd\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809421 kubelet[3303]: I0130 13:11:02.809385 3303 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t7vp6\" (UniqueName: \"kubernetes.io/projected/f2489171-fd17-4c85-b161-7d468aaddc51-kube-api-access-t7vp6\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809421 kubelet[3303]: I0130 13:11:02.809396 3303 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8t9qx\" (UniqueName: \"kubernetes.io/projected/77b9d498-9654-4009-8d83-7ab065d09c75-kube-api-access-8t9qx\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809853 kubelet[3303]: I0130 13:11:02.809408 3303 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-config-path\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809853 kubelet[3303]: I0130 13:11:02.809419 3303 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-host-proc-sys-kernel\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809853 kubelet[3303]: I0130 13:11:02.809432 3303 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-cilium-cgroup\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809853 kubelet[3303]: I0130 13:11:02.809442 3303 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77b9d498-9654-4009-8d83-7ab065d09c75-hubble-tls\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809853 kubelet[3303]: I0130 
13:11:02.809452 3303 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-lib-modules\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809853 kubelet[3303]: I0130 13:11:02.809461 3303 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-bpf-maps\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809853 kubelet[3303]: I0130 13:11:02.809488 3303 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77b9d498-9654-4009-8d83-7ab065d09c75-hostproc\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.809853 kubelet[3303]: I0130 13:11:02.809499 3303 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2489171-fd17-4c85-b161-7d468aaddc51-cilium-config-path\") on node \"ci-4186.1.0-a-065ab1add7\" DevicePath \"\"" Jan 30 13:11:02.854103 systemd[1]: Removed slice kubepods-besteffort-podf2489171_fd17_4c85_b161_7d468aaddc51.slice - libcontainer container kubepods-besteffort-podf2489171_fd17_4c85_b161_7d468aaddc51.slice. Jan 30 13:11:02.861385 systemd[1]: Removed slice kubepods-burstable-pod77b9d498_9654_4009_8d83_7ab065d09c75.slice - libcontainer container kubepods-burstable-pod77b9d498_9654_4009_8d83_7ab065d09c75.slice. Jan 30 13:11:02.861554 systemd[1]: kubepods-burstable-pod77b9d498_9654_4009_8d83_7ab065d09c75.slice: Consumed 7.517s CPU time. Jan 30 13:11:03.269370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f-rootfs.mount: Deactivated successfully. Jan 30 13:11:03.269560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b8dd582e3ad65df9ef3658137b405554d48c3f92e5c8f8c781bd8eb5131e41f-shm.mount: Deactivated successfully. Jan 30 13:11:03.269690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60a8d32e35f335428af6d771bc8768176ae2327eebbc8e242ebb25eb852ad539-rootfs.mount: Deactivated successfully. Jan 30 13:11:03.269787 systemd[1]: var-lib-kubelet-pods-f2489171\x2dfd17\x2d4c85\x2db161\x2d7d468aaddc51-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt7vp6.mount: Deactivated successfully. Jan 30 13:11:03.269905 systemd[1]: var-lib-kubelet-pods-77b9d498\x2d9654\x2d4009\x2d8d83\x2d7ab065d09c75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8t9qx.mount: Deactivated successfully. Jan 30 13:11:03.270013 systemd[1]: var-lib-kubelet-pods-77b9d498\x2d9654\x2d4009\x2d8d83\x2d7ab065d09c75-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:11:03.270130 systemd[1]: var-lib-kubelet-pods-77b9d498\x2d9654\x2d4009\x2d8d83\x2d7ab065d09c75-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 13:11:04.125541 kubelet[3303]: I0130 13:11:04.125486 3303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b9d498-9654-4009-8d83-7ab065d09c75" path="/var/lib/kubelet/pods/77b9d498-9654-4009-8d83-7ab065d09c75/volumes" Jan 30 13:11:04.126406 kubelet[3303]: I0130 13:11:04.126371 3303 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2489171-fd17-4c85-b161-7d468aaddc51" path="/var/lib/kubelet/pods/f2489171-fd17-4c85-b161-7d468aaddc51/volumes" Jan 30 13:11:04.314054 sshd[4895]: Connection closed by 10.200.16.10 port 36714 Jan 30 13:11:04.315035 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Jan 30 13:11:04.318383 systemd[1]: sshd@22-10.200.4.23:22-10.200.16.10:36714.service: Deactivated successfully. Jan 30 13:11:04.320564 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:11:04.322280 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:11:04.323316 systemd-logind[1698]: Removed session 25. Jan 30 13:11:04.434786 systemd[1]: Started sshd@23-10.200.4.23:22-10.200.16.10:36716.service - OpenSSH per-connection server daemon (10.200.16.10:36716). Jan 30 13:11:05.184736 sshd[5054]: Accepted publickey for core from 10.200.16.10 port 36716 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:11:05.186397 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:11:05.192005 systemd-logind[1698]: New session 26 of user core. Jan 30 13:11:05.196650 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:11:05.233112 kubelet[3303]: E0130 13:11:05.233070 3303 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:11:06.092492 kubelet[3303]: E0130 13:11:06.092014 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b9d498-9654-4009-8d83-7ab065d09c75" containerName="mount-cgroup" Jan 30 13:11:06.092492 kubelet[3303]: E0130 13:11:06.092054 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b9d498-9654-4009-8d83-7ab065d09c75" containerName="apply-sysctl-overwrites" Jan 30 13:11:06.092492 kubelet[3303]: E0130 13:11:06.092065 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b9d498-9654-4009-8d83-7ab065d09c75" containerName="mount-bpf-fs" Jan 30 13:11:06.092492 kubelet[3303]: E0130 13:11:06.092073 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f2489171-fd17-4c85-b161-7d468aaddc51" containerName="cilium-operator" Jan 30 13:11:06.092492 kubelet[3303]: E0130 13:11:06.092097 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b9d498-9654-4009-8d83-7ab065d09c75" containerName="clean-cilium-state" Jan 30 13:11:06.092492 kubelet[3303]: E0130 13:11:06.092106 3303 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77b9d498-9654-4009-8d83-7ab065d09c75" containerName="cilium-agent" Jan 30 13:11:06.092492 kubelet[3303]: I0130 13:11:06.092148 3303 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b9d498-9654-4009-8d83-7ab065d09c75" containerName="cilium-agent" Jan 30 13:11:06.092492 kubelet[3303]: I0130 13:11:06.092158 3303 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2489171-fd17-4c85-b161-7d468aaddc51" containerName="cilium-operator" Jan 30 13:11:06.104959 systemd[1]: Created slice 
kubepods-burstable-podc5346c8a_b8ae_4071_ba0c_20fface57a3e.slice - libcontainer container kubepods-burstable-podc5346c8a_b8ae_4071_ba0c_20fface57a3e.slice. Jan 30 13:11:06.190962 sshd[5056]: Connection closed by 10.200.16.10 port 36716 Jan 30 13:11:06.191720 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Jan 30 13:11:06.195126 systemd[1]: sshd@23-10.200.4.23:22-10.200.16.10:36716.service: Deactivated successfully. Jan 30 13:11:06.197338 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:11:06.198952 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:11:06.200280 systemd-logind[1698]: Removed session 26. Jan 30 13:11:06.227615 kubelet[3303]: I0130 13:11:06.227572 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-host-proc-sys-kernel\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.227615 kubelet[3303]: I0130 13:11:06.227631 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-xtables-lock\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.227887 kubelet[3303]: I0130 13:11:06.227656 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-host-proc-sys-net\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.227887 kubelet[3303]: I0130 13:11:06.227692 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-cni-path\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.227887 kubelet[3303]: I0130 13:11:06.227719 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5346c8a-b8ae-4071-ba0c-20fface57a3e-clustermesh-secrets\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.227887 kubelet[3303]: I0130 13:11:06.227752 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c5346c8a-b8ae-4071-ba0c-20fface57a3e-cilium-ipsec-secrets\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.227887 kubelet[3303]: I0130 13:11:06.227779 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-hostproc\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.227887 kubelet[3303]: I0130 13:11:06.227799 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-cilium-cgroup\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.228107 kubelet[3303]: I0130 13:11:06.227824 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5346c8a-b8ae-4071-ba0c-20fface57a3e-cilium-config-path\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.228107 kubelet[3303]: I0130 13:11:06.227851 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-bpf-maps\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.228107 kubelet[3303]: I0130 13:11:06.227876 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-etc-cni-netd\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.228107 kubelet[3303]: I0130 13:11:06.227906 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-cilium-run\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.228107 kubelet[3303]: I0130 13:11:06.227928 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5346c8a-b8ae-4071-ba0c-20fface57a3e-hubble-tls\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.228107 kubelet[3303]: I0130 13:11:06.227950 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5346c8a-b8ae-4071-ba0c-20fface57a3e-lib-modules\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.228260 kubelet[3303]: I0130 13:11:06.227971 3303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl6dh\" (UniqueName: \"kubernetes.io/projected/c5346c8a-b8ae-4071-ba0c-20fface57a3e-kube-api-access-pl6dh\") pod \"cilium-7qgmr\" (UID: \"c5346c8a-b8ae-4071-ba0c-20fface57a3e\") " pod="kube-system/cilium-7qgmr" Jan 30 13:11:06.303738 systemd[1]: Started sshd@24-10.200.4.23:22-10.200.16.10:56688.service - OpenSSH per-connection server daemon (10.200.16.10:56688). Jan 30 13:11:06.410585 containerd[1721]: time="2025-01-30T13:11:06.410360469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7qgmr,Uid:c5346c8a-b8ae-4071-ba0c-20fface57a3e,Namespace:kube-system,Attempt:0,}" Jan 30 13:11:06.449386 containerd[1721]: time="2025-01-30T13:11:06.449140888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:11:06.449386 containerd[1721]: time="2025-01-30T13:11:06.449220788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:11:06.449386 containerd[1721]: time="2025-01-30T13:11:06.449234388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:11:06.449386 containerd[1721]: time="2025-01-30T13:11:06.449379389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:11:06.473640 systemd[1]: Started cri-containerd-d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c.scope - libcontainer container d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c. Jan 30 13:11:06.495432 containerd[1721]: time="2025-01-30T13:11:06.495369348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7qgmr,Uid:c5346c8a-b8ae-4071-ba0c-20fface57a3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\"" Jan 30 13:11:06.499686 containerd[1721]: time="2025-01-30T13:11:06.499651372Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:11:06.532496 containerd[1721]: time="2025-01-30T13:11:06.529457940Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb5fc23ad5f65b9984c14ccbc86ef8ad3804beefe626ee80d89d382d987a0379\"" Jan 30 13:11:06.532496 containerd[1721]: time="2025-01-30T13:11:06.530475946Z" level=info msg="StartContainer for \"bb5fc23ad5f65b9984c14ccbc86ef8ad3804beefe626ee80d89d382d987a0379\"" Jan 30 13:11:06.559674 systemd[1]: Started cri-containerd-bb5fc23ad5f65b9984c14ccbc86ef8ad3804beefe626ee80d89d382d987a0379.scope - libcontainer container bb5fc23ad5f65b9984c14ccbc86ef8ad3804beefe626ee80d89d382d987a0379. Jan 30 13:11:06.591143 containerd[1721]: time="2025-01-30T13:11:06.591088387Z" level=info msg="StartContainer for \"bb5fc23ad5f65b9984c14ccbc86ef8ad3804beefe626ee80d89d382d987a0379\" returns successfully" Jan 30 13:11:06.598111 systemd[1]: cri-containerd-bb5fc23ad5f65b9984c14ccbc86ef8ad3804beefe626ee80d89d382d987a0379.scope: Deactivated successfully. Jan 30 13:11:06.664180 containerd[1721]: time="2025-01-30T13:11:06.664008798Z" level=info msg="shim disconnected" id=bb5fc23ad5f65b9984c14ccbc86ef8ad3804beefe626ee80d89d382d987a0379 namespace=k8s.io Jan 30 13:11:06.664180 containerd[1721]: time="2025-01-30T13:11:06.664075199Z" level=warning msg="cleaning up after shim disconnected" id=bb5fc23ad5f65b9984c14ccbc86ef8ad3804beefe626ee80d89d382d987a0379 namespace=k8s.io Jan 30 13:11:06.664180 containerd[1721]: time="2025-01-30T13:11:06.664086499Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:06.947916 sshd[5066]: Accepted publickey for core from 10.200.16.10 port 56688 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:11:06.949689 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:11:06.956136 systemd-logind[1698]: New session 27 of user core. Jan 30 13:11:06.958729 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 30 13:11:07.400180 sshd[5177]: Connection closed by 10.200.16.10 port 56688 Jan 30 13:11:07.401361 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Jan 30 13:11:07.404940 systemd[1]: sshd@24-10.200.4.23:22-10.200.16.10:56688.service: Deactivated successfully. Jan 30 13:11:07.407609 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 13:11:07.409312 systemd-logind[1698]: Session 27 logged out. Waiting for processes to exit. Jan 30 13:11:07.410625 systemd-logind[1698]: Removed session 27. Jan 30 13:11:07.515718 systemd[1]: Started sshd@25-10.200.4.23:22-10.200.16.10:56704.service - OpenSSH per-connection server daemon (10.200.16.10:56704). Jan 30 13:11:07.577411 containerd[1721]: time="2025-01-30T13:11:07.577197044Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:11:07.607364 containerd[1721]: time="2025-01-30T13:11:07.607323014Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba\"" Jan 30 13:11:07.608039 containerd[1721]: time="2025-01-30T13:11:07.607938718Z" level=info msg="StartContainer for \"9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba\"" Jan 30 13:11:07.641632 systemd[1]: Started cri-containerd-9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba.scope - libcontainer container 9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba. Jan 30 13:11:07.670956 containerd[1721]: time="2025-01-30T13:11:07.670653771Z" level=info msg="StartContainer for \"9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba\" returns successfully" Jan 30 13:11:07.675962 systemd[1]: cri-containerd-9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba.scope: Deactivated successfully. Jan 30 13:11:07.706322 containerd[1721]: time="2025-01-30T13:11:07.706257172Z" level=info msg="shim disconnected" id=9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba namespace=k8s.io Jan 30 13:11:07.706322 containerd[1721]: time="2025-01-30T13:11:07.706322572Z" level=warning msg="cleaning up after shim disconnected" id=9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba namespace=k8s.io Jan 30 13:11:07.706615 containerd[1721]: time="2025-01-30T13:11:07.706333072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:08.124255 kubelet[3303]: E0130 13:11:08.122692 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-p2csm" podUID="2dddf0ce-8a70-4234-a900-c81411c159be" Jan 30 13:11:08.166344 sshd[5186]: Accepted publickey for core from 10.200.16.10 port 56704 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:11:08.168034 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:11:08.172524 systemd-logind[1698]: New session 28 of user core. Jan 30 13:11:08.179634 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 30 13:11:08.336157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fa72a3010ee60e44dc281f085d8e2ce87e0c89709b5a3f728401108aaaf7eba-rootfs.mount: Deactivated successfully. Jan 30 13:11:08.585401 containerd[1721]: time="2025-01-30T13:11:08.582280706Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:11:08.619553 containerd[1721]: time="2025-01-30T13:11:08.619507416Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f\"" Jan 30 13:11:08.620237 containerd[1721]: time="2025-01-30T13:11:08.620033619Z" level=info msg="StartContainer for \"e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f\"" Jan 30 13:11:08.661629 systemd[1]: Started cri-containerd-e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f.scope - libcontainer container e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f. Jan 30 13:11:08.689871 systemd[1]: cri-containerd-e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f.scope: Deactivated successfully. Jan 30 13:11:08.692162 containerd[1721]: time="2025-01-30T13:11:08.692120325Z" level=info msg="StartContainer for \"e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f\" returns successfully" Jan 30 13:11:08.724921 containerd[1721]: time="2025-01-30T13:11:08.724848409Z" level=info msg="shim disconnected" id=e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f namespace=k8s.io Jan 30 13:11:08.724921 containerd[1721]: time="2025-01-30T13:11:08.724915810Z" level=warning msg="cleaning up after shim disconnected" id=e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f namespace=k8s.io Jan 30 13:11:08.724921 containerd[1721]: time="2025-01-30T13:11:08.724927810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:09.336376 systemd[1]: run-containerd-runc-k8s.io-e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f-runc.qMedmb.mount: Deactivated successfully. Jan 30 13:11:09.336514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2cc31b989f3a5c9712698e805b1fd75b4529e8f0bb1f4fa3aff5083fd28e60f-rootfs.mount: Deactivated successfully. Jan 30 13:11:09.585034 containerd[1721]: time="2025-01-30T13:11:09.584814853Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:11:09.620356 containerd[1721]: time="2025-01-30T13:11:09.620232552Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624\"" Jan 30 13:11:09.622720 containerd[1721]: time="2025-01-30T13:11:09.621537959Z" level=info msg="StartContainer for \"46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624\"" Jan 30 13:11:09.663767 systemd[1]: Started cri-containerd-46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624.scope - libcontainer container 46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624. 
Jan 30 13:11:09.689739 systemd[1]: cri-containerd-46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624.scope: Deactivated successfully. Jan 30 13:11:09.695455 containerd[1721]: time="2025-01-30T13:11:09.695333375Z" level=info msg="StartContainer for \"46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624\" returns successfully" Jan 30 13:11:09.718944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624-rootfs.mount: Deactivated successfully. Jan 30 13:11:09.733048 containerd[1721]: time="2025-01-30T13:11:09.732957687Z" level=info msg="shim disconnected" id=46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624 namespace=k8s.io Jan 30 13:11:09.733048 containerd[1721]: time="2025-01-30T13:11:09.733050287Z" level=warning msg="cleaning up after shim disconnected" id=46c24e79fec10e06c4401e71b65d18a290ffed3ae33ae77599c85350e9f39624 namespace=k8s.io Jan 30 13:11:09.733373 containerd[1721]: time="2025-01-30T13:11:09.733073888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:10.125383 kubelet[3303]: E0130 13:11:10.123634 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-p2csm" podUID="2dddf0ce-8a70-4234-a900-c81411c159be" Jan 30 13:11:10.234826 kubelet[3303]: E0130 13:11:10.234777 3303 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:11:10.591750 containerd[1721]: time="2025-01-30T13:11:10.591697423Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:11:10.632715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568333016.mount: Deactivated successfully. Jan 30 13:11:10.644218 containerd[1721]: time="2025-01-30T13:11:10.644070018Z" level=info msg="CreateContainer within sandbox \"d5340da85a77985d41efb1d6e6d647f152b2c5646010a2d999fa50ba4c27162c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9baf4db005d9ea2833faec7b340793ce8f016e557ab6c599dca788104392c470\"" Jan 30 13:11:10.646439 containerd[1721]: time="2025-01-30T13:11:10.645347826Z" level=info msg="StartContainer for \"9baf4db005d9ea2833faec7b340793ce8f016e557ab6c599dca788104392c470\"" Jan 30 13:11:10.680653 systemd[1]: Started cri-containerd-9baf4db005d9ea2833faec7b340793ce8f016e557ab6c599dca788104392c470.scope - libcontainer container 9baf4db005d9ea2833faec7b340793ce8f016e557ab6c599dca788104392c470. 
Jan 30 13:11:10.722896 containerd[1721]: time="2025-01-30T13:11:10.722843062Z" level=info msg="StartContainer for \"9baf4db005d9ea2833faec7b340793ce8f016e557ab6c599dca788104392c470\" returns successfully" Jan 30 13:11:11.179630 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 13:11:12.124242 kubelet[3303]: E0130 13:11:12.122534 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-p2csm" podUID="2dddf0ce-8a70-4234-a900-c81411c159be" Jan 30 13:11:12.739631 systemd[1]: run-containerd-runc-k8s.io-9baf4db005d9ea2833faec7b340793ce8f016e557ab6c599dca788104392c470-runc.scoBXV.mount: Deactivated successfully. Jan 30 13:11:13.452597 kubelet[3303]: I0130 13:11:13.452536 3303 setters.go:600] "Node became not ready" node="ci-4186.1.0-a-065ab1add7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:11:13Z","lastTransitionTime":"2025-01-30T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:11:14.088614 systemd-networkd[1330]: lxc_health: Link UP Jan 30 13:11:14.103751 systemd-networkd[1330]: lxc_health: Gained carrier Jan 30 13:11:14.125574 kubelet[3303]: E0130 13:11:14.125522 3303 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-p2csm" podUID="2dddf0ce-8a70-4234-a900-c81411c159be" Jan 30 13:11:14.442574 kubelet[3303]: I0130 13:11:14.442507 3303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7qgmr" podStartSLOduration=8.442483212 podStartE2EDuration="8.442483212s" podCreationTimestamp="2025-01-30 13:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:11:11.616973898 +0000 UTC m=+192.163284419" watchObservedRunningTime="2025-01-30 13:11:14.442483212 +0000 UTC m=+194.988793833" Jan 30 13:11:15.741757 systemd-networkd[1330]: lxc_health: Gained IPv6LL Jan 30 13:11:19.520646 sshd[5251]: Connection closed by 10.200.16.10 port 56704 Jan 30 13:11:19.521693 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Jan 30 13:11:19.525216 systemd[1]: sshd@25-10.200.4.23:22-10.200.16.10:56704.service: Deactivated successfully. Jan 30 13:11:19.527882 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 13:11:19.529688 systemd-logind[1698]: Session 28 logged out. Waiting for processes to exit. Jan 30 13:11:19.530886 systemd-logind[1698]: Removed session 28.