Dec 13 13:31:31.093308 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024 Dec 13 13:31:31.093348 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:31:31.093363 kernel: BIOS-provided physical RAM map: Dec 13 13:31:31.093374 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 13:31:31.093383 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 13 13:31:31.093393 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Dec 13 13:31:31.093406 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Dec 13 13:31:31.093417 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 13 13:31:31.093431 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 13 13:31:31.093442 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 13 13:31:31.093452 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 13 13:31:31.093463 kernel: printk: bootconsole [earlyser0] enabled Dec 13 13:31:31.093474 kernel: NX (Execute Disable) protection: active Dec 13 13:31:31.093485 kernel: APIC: Static calls initialized Dec 13 13:31:31.093501 kernel: efi: EFI v2.7 by Microsoft Dec 13 13:31:31.093513 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Dec 13 13:31:31.093525 kernel: random: crng init done Dec 13 13:31:31.093538 kernel: secureboot: Secure boot disabled Dec 13 13:31:31.093549 kernel: SMBIOS 3.1.0 present. 
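
The BIOS-e820 entries above describe the firmware memory map the kernel boots with: four usable ranges plus reserved and ACPI regions. As a hedged aside (an annotation, not part of the log), the short Python sketch below parses entries in that format and totals the usable memory; the sample strings are copied verbatim from the lines above, and the helper names are invented for illustration. The total it prints is consistent with the roughly 8 GiB the kernel's later Memory: line reports.

    import re

    # Sample entries copied verbatim from the BIOS-e820 lines logged above.
    E820_LINES = [
        "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
        "BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable",
        "BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable",
        "BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable",
    ]

    RANGE = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable")

    def usable_bytes(lines):
        """Sum the inclusive [start-end] ranges marked 'usable'."""
        total = 0
        for line in lines:
            m = RANGE.search(line)
            if m:
                start, end = (int(x, 16) for x in m.groups())
                total += end - start + 1
        return total

    print(f"{usable_bytes(E820_LINES) / 2**20:.1f} MiB marked usable")
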
Dec 13 13:31:31.093561 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Dec 13 13:31:31.093573 kernel: Hypervisor detected: Microsoft Hyper-V Dec 13 13:31:31.093585 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Dec 13 13:31:31.093597 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Dec 13 13:31:31.093609 kernel: Hyper-V: Nested features: 0x1e0101 Dec 13 13:31:31.093624 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 13 13:31:31.093635 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 13 13:31:31.093648 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 13:31:31.093660 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 13:31:31.093673 kernel: tsc: Marking TSC unstable due to running on Hyper-V Dec 13 13:31:31.093686 kernel: tsc: Detected 2593.905 MHz processor Dec 13 13:31:31.093698 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 13:31:31.093711 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 13:31:31.093724 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Dec 13 13:31:31.093739 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 13:31:31.093752 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 13:31:31.093764 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Dec 13 13:31:31.093776 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Dec 13 13:31:31.093789 kernel: Using GB pages for direct mapping Dec 13 13:31:31.093801 kernel: ACPI: Early table checksum verification disabled Dec 13 13:31:31.093814 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 13 13:31:31.093832 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093848 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093861 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Dec 13 13:31:31.093895 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 13 13:31:31.093909 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093922 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093935 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093952 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093966 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093979 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093992 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.094006 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 13 13:31:31.094019 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Dec 13 13:31:31.094032 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 13 13:31:31.094046 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 13 13:31:31.094062 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Dec 13 13:31:31.094078 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 13:31:31.094092 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Dec 13 13:31:31.094105 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 13:31:31.094118 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 13:31:31.094132 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 13:31:31.094145 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 13:31:31.094158 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 13:31:31.094171 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 13:31:31.094184 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 13:31:31.094200 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 13:31:31.094212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 13:31:31.094226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 13:31:31.094240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 13:31:31.094255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 13:31:31.094268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 13:31:31.094282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 13:31:31.094296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 13:31:31.094314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 13:31:31.094328 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 13:31:31.094342 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 13:31:31.094356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 13:31:31.094370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 13:31:31.094385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 13:31:31.094400 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 13:31:31.094414 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 13:31:31.094428 kernel: Zone ranges: Dec 13 13:31:31.094444 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 13:31:31.094458 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 13:31:31.094471 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 13:31:31.094484 kernel: Movable zone start for each node Dec 13 13:31:31.094498 kernel: Early memory node ranges Dec 13 13:31:31.094511 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 13:31:31.094524 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 13:31:31.094537 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 13:31:31.094550 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 13:31:31.094566 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 13:31:31.094578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 13:31:31.094591 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 13:31:31.094604 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Dec 13 13:31:31.094616 kernel: ACPI: 
PM-Timer IO Port: 0x408 Dec 13 13:31:31.094629 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 13:31:31.094642 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 13:31:31.094655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 13:31:31.094668 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 13:31:31.094684 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 13:31:31.094697 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 13:31:31.094711 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 13:31:31.094724 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 13:31:31.094737 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 13:31:31.094751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 13:31:31.094764 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 13:31:31.094778 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 13:31:31.094790 kernel: pcpu-alloc: [0] 0 1 Dec 13 13:31:31.094806 kernel: Hyper-V: PV spinlocks enabled Dec 13 13:31:31.094819 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 13:31:31.094835 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:31:31.094849 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 13:31:31.094863 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 13:31:31.094896 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:31:31.094907 kernel: Fallback order for Node 0: 0 Dec 13 13:31:31.094919 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 13:31:31.094934 kernel: Policy zone: Normal Dec 13 13:31:31.094956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:31:31.094970 kernel: software IO TLB: area num 2. Dec 13 13:31:31.094987 kernel: Memory: 8067572K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 319632K reserved, 0K cma-reserved) Dec 13 13:31:31.095001 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:31:31.095015 kernel: ftrace: allocating 37874 entries in 148 pages Dec 13 13:31:31.095029 kernel: ftrace: allocated 148 pages with 3 groups Dec 13 13:31:31.095043 kernel: Dynamic Preempt: voluntary Dec 13 13:31:31.095056 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:31:31.095075 kernel: rcu: RCU event tracing is enabled. Dec 13 13:31:31.095089 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:31:31.095105 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:31:31.095119 kernel: Rude variant of Tasks RCU enabled. Dec 13 13:31:31.095133 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:31:31.095147 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 13:31:31.095161 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:31:31.095176 kernel: Using NULL legacy PIC Dec 13 13:31:31.095192 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 13:31:31.095205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:31:31.095220 kernel: Console: colour dummy device 80x25 Dec 13 13:31:31.095233 kernel: printk: console [tty1] enabled Dec 13 13:31:31.095246 kernel: printk: console [ttyS0] enabled Dec 13 13:31:31.095261 kernel: printk: bootconsole [earlyser0] disabled Dec 13 13:31:31.095275 kernel: ACPI: Core revision 20230628 Dec 13 13:31:31.095289 kernel: Failed to register legacy timer interrupt Dec 13 13:31:31.095302 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 13:31:31.095319 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 13:31:31.095332 kernel: Hyper-V: Using IPI hypercalls Dec 13 13:31:31.095347 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 13 13:31:31.095361 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 13 13:31:31.095375 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 13 13:31:31.095389 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 13 13:31:31.095403 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 13 13:31:31.095417 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 13 13:31:31.095430 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Dec 13 13:31:31.095447 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 13:31:31.095460 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 13:31:31.095474 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 13:31:31.095488 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 13:31:31.095502 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 13:31:31.095516 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 13:31:31.095531 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 13 13:31:31.095544 kernel: RETBleed: Vulnerable Dec 13 13:31:31.095556 kernel: Speculative Store Bypass: Vulnerable Dec 13 13:31:31.095570 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 13:31:31.095584 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 13:31:31.095594 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 13:31:31.095606 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 13:31:31.095619 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 13:31:31.095634 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 13:31:31.095646 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 13:31:31.095662 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 13:31:31.095675 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 13:31:31.095689 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 13:31:31.095703 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 13:31:31.095718 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 13:31:31.095736 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 13:31:31.095751 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 13:31:31.095765 kernel: Freeing SMP alternatives memory: 32K Dec 13 13:31:31.095780 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:31:31.095795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:31:31.095808 kernel: landlock: Up and running. Dec 13 13:31:31.095822 kernel: SELinux: Initializing. Dec 13 13:31:31.095837 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:31:31.095852 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:31:31.095866 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 13:31:31.096567 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:31:31.096585 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:31:31.096594 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:31:31.096604 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 13:31:31.096614 kernel: signal: max sigframe size: 3632 Dec 13 13:31:31.096622 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:31:31.096634 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:31:31.096642 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 13:31:31.096651 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:31:31.096661 kernel: smpboot: x86: Booting SMP configuration: Dec 13 13:31:31.096671 kernel: .... node #0, CPUs: #1 Dec 13 13:31:31.096683 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 13:31:31.096692 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
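
The x86/fpu lines above enumerate six XSAVE feature bits and then report the combined mask 0xe7; the individual bits OR together to exactly that value. A small check, included only as an annotation on the log:

    # OR the individual XSAVE feature masks listed above; the result should
    # reproduce the kernel's combined "Enabled xstate features 0xe7".
    feature_bits = [0x001, 0x002, 0x004, 0x020, 0x040, 0x080]
    mask = 0
    for bit in feature_bits:
        mask |= bit
    assert mask == 0xE7
    print(hex(mask))    # 0xe7
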
Dec 13 13:31:31.096703 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:31:31.096711 kernel: smpboot: Max logical packages: 1 Dec 13 13:31:31.096719 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Dec 13 13:31:31.096727 kernel: devtmpfs: initialized Dec 13 13:31:31.096735 kernel: x86/mm: Memory block size: 128MB Dec 13 13:31:31.096743 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 13:31:31.096754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:31:31.096762 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:31:31.096770 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:31:31.096778 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:31:31.096786 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:31:31.096794 kernel: audit: type=2000 audit(1734096689.028:1): state=initialized audit_enabled=0 res=1 Dec 13 13:31:31.096802 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:31:31.096812 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 13:31:31.096823 kernel: cpuidle: using governor menu Dec 13 13:31:31.096831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:31:31.096842 kernel: dca service started, version 1.12.1 Dec 13 13:31:31.096850 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Dec 13 13:31:31.096859 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 13:31:31.096869 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:31:31.096917 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:31:31.096925 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:31:31.096936 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:31:31.096947 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:31:31.096957 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:31:31.096967 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:31:31.096978 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:31:31.096986 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:31:31.096995 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 13:31:31.097005 kernel: ACPI: Interpreter enabled Dec 13 13:31:31.097013 kernel: ACPI: PM: (supports S0 S5) Dec 13 13:31:31.097024 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 13:31:31.097035 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 13:31:31.097044 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 13:31:31.097055 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 13:31:31.097063 kernel: iommu: Default domain type: Translated Dec 13 13:31:31.097073 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 13:31:31.097082 kernel: efivars: Registered efivars operations Dec 13 13:31:31.097090 kernel: PCI: Using ACPI for IRQ routing Dec 13 13:31:31.097101 kernel: PCI: System does not support PCI Dec 13 13:31:31.097109 kernel: vgaarb: loaded Dec 13 13:31:31.097118 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 13:31:31.097130 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:31:31.097138 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:31:31.097150 kernel: 
pnp: PnP ACPI init Dec 13 13:31:31.097158 kernel: pnp: PnP ACPI: found 3 devices Dec 13 13:31:31.097167 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 13:31:31.097177 kernel: NET: Registered PF_INET protocol family Dec 13 13:31:31.097185 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 13:31:31.097194 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 13:31:31.097204 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:31:31.097212 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:31:31.097223 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 13:31:31.097231 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 13:31:31.097241 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 13:31:31.097251 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 13:31:31.097259 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:31:31.097270 kernel: NET: Registered PF_XDP protocol family Dec 13 13:31:31.097277 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:31:31.097290 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 13:31:31.097299 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Dec 13 13:31:31.097309 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:31:31.097318 kernel: Initialise system trusted keyrings Dec 13 13:31:31.097326 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 13:31:31.097337 kernel: Key type asymmetric registered Dec 13 13:31:31.097345 kernel: Asymmetric key parser 'x509' registered Dec 13 13:31:31.097353 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:31:31.097363 kernel: io scheduler mq-deadline registered Dec 13 13:31:31.097374 kernel: io scheduler kyber registered Dec 13 13:31:31.097385 kernel: io scheduler bfq registered Dec 13 13:31:31.097394 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:31:31.097404 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:31:31.097412 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:31:31.097423 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 13:31:31.097432 kernel: i8042: PNP: No PS/2 controller found. 
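
A few entries above, the calibration line reports 5187.81 BogoMIPS (lpj=2593905) and the SMP bring-up reports a total of 10375.62 BogoMIPS for two CPUs; both follow directly from the 2593.905 MHz TSC detected earlier in this log. A one-line arithmetic check, purely illustrative and not kernel code:

    tsc_mhz = 2593.905          # "tsc: Detected 2593.905 MHz processor"
    per_cpu = 2 * tsc_mhz       # 5187.81 BogoMIPS (lpj=2593905)
    print(round(per_cpu, 2), round(2 * per_cpu, 2))   # 5187.81 10375.62
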
Dec 13 13:31:31.097581 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 13:31:31.097679 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T13:31:30 UTC (1734096690) Dec 13 13:31:31.097764 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 13:31:31.097778 kernel: intel_pstate: CPU model not supported Dec 13 13:31:31.097787 kernel: efifb: probing for efifb Dec 13 13:31:31.097796 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 13:31:31.097806 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 13:31:31.097817 kernel: efifb: scrolling: redraw Dec 13 13:31:31.097825 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 13:31:31.097834 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:31:31.097846 kernel: fb0: EFI VGA frame buffer device Dec 13 13:31:31.097854 kernel: pstore: Using crash dump compression: deflate Dec 13 13:31:31.097866 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 13:31:31.097884 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:31:31.097894 kernel: Segment Routing with IPv6 Dec 13 13:31:31.097902 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:31:31.097912 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:31:31.097921 kernel: Key type dns_resolver registered Dec 13 13:31:31.097929 kernel: IPI shorthand broadcast: enabled Dec 13 13:31:31.097943 kernel: sched_clock: Marking stable (911151000, 49321300)->(1207662700, -247190400) Dec 13 13:31:31.097951 kernel: registered taskstats version 1 Dec 13 13:31:31.097962 kernel: Loading compiled-in X.509 certificates Dec 13 13:31:31.097971 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:31:31.097978 kernel: Key type .fscrypt registered Dec 13 13:31:31.097989 kernel: Key type fscrypt-provisioning registered Dec 13 13:31:31.097997 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 13:31:31.098007 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:31:31.098016 kernel: ima: No architecture policies found Dec 13 13:31:31.098027 kernel: clk: Disabling unused clocks Dec 13 13:31:31.098037 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:31:31.098046 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:31:31.098055 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:31:31.098065 kernel: Run /init as init process Dec 13 13:31:31.098073 kernel: with arguments: Dec 13 13:31:31.098082 kernel: /init Dec 13 13:31:31.098091 kernel: with environment: Dec 13 13:31:31.098099 kernel: HOME=/ Dec 13 13:31:31.098111 kernel: TERM=linux Dec 13 13:31:31.098119 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:31:31.098131 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:31:31.098142 systemd[1]: Detected virtualization microsoft. Dec 13 13:31:31.098152 systemd[1]: Detected architecture x86-64. Dec 13 13:31:31.098162 systemd[1]: Running in initrd. Dec 13 13:31:31.098170 systemd[1]: No hostname configured, using default hostname. Dec 13 13:31:31.098181 systemd[1]: Hostname set to . Dec 13 13:31:31.098193 systemd[1]: Initializing machine ID from random generator. 
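
The rtc_cmos entry above records both a human-readable timestamp and the matching epoch value, and the two agree. A throwaway check (annotation only, not part of the log):

    from datetime import datetime, timezone

    # "setting system clock to 2024-12-13T13:31:30 UTC (1734096690)"
    print(datetime.fromtimestamp(1734096690, tz=timezone.utc).isoformat())
    # -> 2024-12-13T13:31:30+00:00
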
Dec 13 13:31:31.098203 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:31:31.098213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:31:31.098225 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:31:31.098234 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:31:31.098245 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:31:31.098254 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:31:31.098267 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:31:31.098278 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:31:31.098287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:31:31.098299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:31:31.098307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:31:31.098319 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:31:31.098327 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:31:31.098341 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:31:31.098349 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:31:31.098361 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:31:31.098369 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:31:31.098379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:31:31.098390 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:31:31.098398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:31:31.098410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:31:31.098418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:31:31.098433 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:31:31.098441 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:31:31.098450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:31:31.098461 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:31:31.098470 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:31:31.098482 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:31:31.098490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:31:31.098517 systemd-journald[177]: Collecting audit messages is disabled. Dec 13 13:31:31.098542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:31:31.098554 systemd-journald[177]: Journal started Dec 13 13:31:31.098577 systemd-journald[177]: Runtime Journal (/run/log/journal/b1e903112dfd496d9472a6235ff23cb9) is 8.0M, max 158.8M, 150.8M free. Dec 13 13:31:31.111894 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:31:31.115107 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Dec 13 13:31:31.122104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:31:31.126160 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:31:31.133388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:31.136509 systemd-modules-load[178]: Inserted module 'overlay' Dec 13 13:31:31.152175 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:31:31.163062 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:31:31.180212 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:31:31.196888 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:31:31.197089 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:31.206681 kernel: Bridge firewalling registered Dec 13 13:31:31.206775 systemd-modules-load[178]: Inserted module 'br_netfilter' Dec 13 13:31:31.209305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:31:31.216303 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:31:31.222664 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:31:31.233040 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:31:31.241036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:31:31.248860 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:31:31.257830 dracut-cmdline[204]: dracut-dracut-053 Dec 13 13:31:31.264177 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:31:31.283357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:31:31.298077 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:31:31.300662 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:31:31.351229 systemd-resolved[246]: Positive Trust Anchors: Dec 13 13:31:31.351243 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:31:31.351298 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:31:31.377911 systemd-resolved[246]: Defaulting to hostname 'linux'. 
Dec 13 13:31:31.379188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:31:31.384423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:31:31.395886 kernel: SCSI subsystem initialized Dec 13 13:31:31.406893 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:31:31.418893 kernel: iscsi: registered transport (tcp) Dec 13 13:31:31.440362 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:31:31.440427 kernel: QLogic iSCSI HBA Driver Dec 13 13:31:31.476052 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:31:31.485123 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:31:31.512167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:31:31.512232 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:31:31.512885 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:31:31.554895 kernel: raid6: avx512x4 gen() 18811 MB/s Dec 13 13:31:31.574892 kernel: raid6: avx512x2 gen() 18659 MB/s Dec 13 13:31:31.593886 kernel: raid6: avx512x1 gen() 18688 MB/s Dec 13 13:31:31.612884 kernel: raid6: avx2x4 gen() 18708 MB/s Dec 13 13:31:31.631891 kernel: raid6: avx2x2 gen() 18612 MB/s Dec 13 13:31:31.652127 kernel: raid6: avx2x1 gen() 14036 MB/s Dec 13 13:31:31.652177 kernel: raid6: using algorithm avx512x4 gen() 18811 MB/s Dec 13 13:31:31.673194 kernel: raid6: .... xor() 6903 MB/s, rmw enabled Dec 13 13:31:31.673236 kernel: raid6: using avx512x2 recovery algorithm Dec 13 13:31:31.695898 kernel: xor: automatically using best checksumming function avx Dec 13 13:31:31.837905 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:31:31.847046 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:31:31.858052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:31:31.871629 systemd-udevd[396]: Using default interface naming scheme 'v255'. Dec 13 13:31:31.876072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:31:31.896032 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:31:31.910219 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Dec 13 13:31:31.936858 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:31:31.950209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:31:31.989235 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:31:32.003032 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:31:32.025405 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:31:32.034780 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:31:32.040080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:31:32.040503 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:31:32.058900 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:31:32.059260 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:31:32.082704 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:31:32.113897 kernel: AVX2 version of gcm_enc/dec engaged. 
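
The raid6 lines above show the kernel micro-benchmarking several gen() implementations and keeping the fastest (avx512x4 at 18811 MB/s on this VM), and the xor line applies the same idea to checksumming. Purely as an illustration of that select-by-benchmark pattern, with made-up candidate functions and helper names, a sketch in Python:

    import time

    def pick_fastest(candidates, reps=20):
        """Time each callable and return the name of the fastest one."""
        best_name, best_time = None, float("inf")
        for name, fn in candidates.items():
            start = time.perf_counter()
            for _ in range(reps):
                fn()
            elapsed = time.perf_counter() - start
            if elapsed < best_time:
                best_name, best_time = name, elapsed
        return best_name

    # Hypothetical candidates computing the same result different ways.
    impls = {
        "builtin_sum": lambda: sum(range(100_000)),
        "generator_sum": lambda: sum(x for x in range(100_000)),
    }
    print(pick_fastest(impls))
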
Dec 13 13:31:32.113941 kernel: AES CTR mode by8 optimization enabled Dec 13 13:31:32.121890 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 13:31:32.122745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:31:32.124610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:32.133150 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:31:32.139963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:31:32.140239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:32.148361 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:31:32.158206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:31:32.183599 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 13:31:32.183678 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 13:31:32.185955 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 13:31:32.189798 kernel: scsi host1: storvsc_host_t Dec 13 13:31:32.193904 kernel: scsi host0: storvsc_host_t Dec 13 13:31:32.196892 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:31:32.196920 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 13:31:32.213157 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 13:31:32.213194 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 13:31:32.214062 kernel: PTP clock support registered Dec 13 13:31:32.217205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:32.227355 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:31:32.240677 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 13:31:32.246888 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 13:31:32.265474 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 13:31:32.265512 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 13:31:32.265530 kernel: hv_vmbus: registering driver hv_utils Dec 13 13:31:32.265904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:32.277010 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 13:31:32.240292 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 13:31:32.248033 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 13:31:32.248059 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 13:31:32.248073 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 13:31:32.248232 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 13:31:32.248247 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 13:31:32.248263 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 13:31:32.251492 systemd-journald[177]: Time jumped backwards, rotating. Dec 13 13:31:32.233111 systemd-resolved[246]: Clock change detected. Flushing caches. 
Dec 13 13:31:32.268675 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 13:31:32.283018 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 13:31:32.283199 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 13:31:32.283437 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 13:31:32.283692 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 13:31:32.283855 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:31:32.283872 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 13:31:32.395093 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: VF slot 1 added Dec 13 13:31:32.406797 kernel: hv_vmbus: registering driver hv_pci Dec 13 13:31:32.406841 kernel: hv_pci 28c0d482-88ab-410a-8958-fa717da5ccff: PCI VMBus probing: Using version 0x10004 Dec 13 13:31:32.450839 kernel: hv_pci 28c0d482-88ab-410a-8958-fa717da5ccff: PCI host bridge to bus 88ab:00 Dec 13 13:31:32.451262 kernel: pci_bus 88ab:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 13:31:32.451598 kernel: pci_bus 88ab:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 13:31:32.451760 kernel: pci 88ab:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 13:31:32.451954 kernel: pci 88ab:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 13:31:32.452127 kernel: pci 88ab:00:02.0: enabling Extended Tags Dec 13 13:31:32.452301 kernel: pci 88ab:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 88ab:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 13:31:32.452504 kernel: pci_bus 88ab:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 13:31:32.452655 kernel: pci 88ab:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 13:31:32.613152 kernel: mlx5_core 88ab:00:02.0: enabling device (0000 -> 0002) Dec 13 13:31:32.845297 kernel: mlx5_core 88ab:00:02.0: firmware version: 14.30.5000 Dec 13 13:31:32.845539 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: VF registering: eth1 Dec 13 13:31:32.845696 kernel: mlx5_core 88ab:00:02.0 eth1: joined to eth0 Dec 13 13:31:32.845869 kernel: mlx5_core 88ab:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Dec 13 13:31:32.853406 kernel: mlx5_core 88ab:00:02.0 enP34987s1: renamed from eth1 Dec 13 13:31:32.856881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 13:31:32.945452 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (445) Dec 13 13:31:32.953406 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (458) Dec 13 13:31:32.970332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 13:31:32.982431 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 13:31:32.990722 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 13:31:32.993314 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 13:31:33.011601 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
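
The sd lines above size the boot disk at 63737856 512-byte logical blocks, which the kernel prints as 32.6 GB / 30.4 GiB; the arithmetic checks out. A two-line verification, added only as an annotation:

    blocks, block_size = 63_737_856, 512
    capacity = blocks * block_size
    print(f"{capacity / 1e9:.1f} GB, {capacity / 2**30:.1f} GiB")  # 32.6 GB, 30.4 GiB
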
Dec 13 13:31:33.032402 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:31:33.040401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:31:34.047495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:31:34.049352 disk-uuid[600]: The operation has completed successfully. Dec 13 13:31:34.133031 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:31:34.133144 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:31:34.147557 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:31:34.153630 sh[686]: Success Dec 13 13:31:34.187325 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 13:31:34.443091 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:31:34.458503 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:31:34.461207 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:31:34.498190 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:31:34.498269 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:34.502043 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:31:34.504993 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:31:34.507927 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:31:34.928219 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:31:34.931889 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:31:34.943901 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:31:34.949538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:31:34.971841 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:34.971893 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:34.971911 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:31:34.993770 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:31:35.007273 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:35.006839 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:31:35.017149 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:31:35.028604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:31:35.044998 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:31:35.054503 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:31:35.074735 systemd-networkd[870]: lo: Link UP Dec 13 13:31:35.074745 systemd-networkd[870]: lo: Gained carrier Dec 13 13:31:35.076874 systemd-networkd[870]: Enumeration completed Dec 13 13:31:35.077354 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:31:35.081854 systemd[1]: Reached target network.target - Network. Dec 13 13:31:35.090963 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 13:31:35.090972 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:31:35.153405 kernel: mlx5_core 88ab:00:02.0 enP34987s1: Link up Dec 13 13:31:35.192539 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: Data path switched to VF: enP34987s1 Dec 13 13:31:35.192654 systemd-networkd[870]: enP34987s1: Link UP Dec 13 13:31:35.192825 systemd-networkd[870]: eth0: Link UP Dec 13 13:31:35.193055 systemd-networkd[870]: eth0: Gained carrier Dec 13 13:31:35.193069 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:31:35.198597 systemd-networkd[870]: enP34987s1: Gained carrier Dec 13 13:31:35.236449 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 13:31:36.076286 ignition[845]: Ignition 2.20.0 Dec 13 13:31:36.076302 ignition[845]: Stage: fetch-offline Dec 13 13:31:36.078027 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:31:36.076349 ignition[845]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:36.076359 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:36.076497 ignition[845]: parsed url from cmdline: "" Dec 13 13:31:36.076502 ignition[845]: no config URL provided Dec 13 13:31:36.076509 ignition[845]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:31:36.076519 ignition[845]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:31:36.094119 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 13:31:36.076525 ignition[845]: failed to fetch config: resource requires networking Dec 13 13:31:36.076891 ignition[845]: Ignition finished successfully Dec 13 13:31:36.107749 ignition[879]: Ignition 2.20.0 Dec 13 13:31:36.107757 ignition[879]: Stage: fetch Dec 13 13:31:36.107938 ignition[879]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:36.107948 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:36.108040 ignition[879]: parsed url from cmdline: "" Dec 13 13:31:36.108043 ignition[879]: no config URL provided Dec 13 13:31:36.108049 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:31:36.108058 ignition[879]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:31:36.108084 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 13:31:36.200084 ignition[879]: GET result: OK Dec 13 13:31:36.200193 ignition[879]: config has been read from IMDS userdata Dec 13 13:31:36.200230 ignition[879]: parsing config with SHA512: e012ece9ff90613d3f668751ef1e64a6fd4b81dee540c9d734cdb454f4b00ef665888aae951194f5854836647153ca7955b1ca6140c4bef15a1458d02e2b9f33 Dec 13 13:31:36.205789 unknown[879]: fetched base config from "system" Dec 13 13:31:36.205801 unknown[879]: fetched base config from "system" Dec 13 13:31:36.206155 ignition[879]: fetch: fetch complete Dec 13 13:31:36.205808 unknown[879]: fetched user config from "azure" Dec 13 13:31:36.206161 ignition[879]: fetch: fetch passed Dec 13 13:31:36.207937 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:31:36.206207 ignition[879]: Ignition finished successfully Dec 13 13:31:36.221099 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
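
In the fetch stage above, Ignition finds no local config and pulls user data from the Azure instance metadata service at 169.254.169.254. The sketch below is not Ignition's code, just a minimal reproduction of the same request from inside a VM: the endpoint URL is copied from the log, while the Metadata: true header and the base64 decoding of the response follow Azure's IMDS conventions.

    import base64
    import urllib.request

    # Endpoint copied from the GET logged by ignition[879] above.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data = base64.b64decode(resp.read())   # IMDS returns base64 text

    print(f"fetched {len(user_data)} bytes of user data from IMDS")
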
Dec 13 13:31:36.233922 ignition[886]: Ignition 2.20.0 Dec 13 13:31:36.233933 ignition[886]: Stage: kargs Dec 13 13:31:36.234160 ignition[886]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:36.234173 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:36.235065 ignition[886]: kargs: kargs passed Dec 13 13:31:36.235106 ignition[886]: Ignition finished successfully Dec 13 13:31:36.244320 systemd-networkd[870]: enP34987s1: Gained IPv6LL Dec 13 13:31:36.244610 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:31:36.257533 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:31:36.272347 ignition[892]: Ignition 2.20.0 Dec 13 13:31:36.272356 ignition[892]: Stage: disks Dec 13 13:31:36.272593 ignition[892]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:36.272607 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:36.281309 ignition[892]: disks: disks passed Dec 13 13:31:36.281351 ignition[892]: Ignition finished successfully Dec 13 13:31:36.286116 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:31:36.288899 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:31:36.294027 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:31:36.297465 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:31:36.300172 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:31:36.306166 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:31:36.320644 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:31:36.380551 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 13:31:36.384726 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:31:36.396561 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:31:36.496737 kernel: EXT4-fs (sda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:31:36.497420 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:31:36.500370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:31:36.504984 systemd-networkd[870]: eth0: Gained IPv6LL Dec 13 13:31:36.556487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:31:36.565986 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:31:36.578592 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 13:31:36.585672 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:31:36.585717 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:31:36.593408 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (911) Dec 13 13:31:36.604053 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:36.604108 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:36.604141 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:31:36.609711 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:31:36.621501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:31:36.628866 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:31:36.634035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:31:37.369930 coreos-metadata[913]: Dec 13 13:31:37.369 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 13:31:37.377155 coreos-metadata[913]: Dec 13 13:31:37.377 INFO Fetch successful Dec 13 13:31:37.380372 coreos-metadata[913]: Dec 13 13:31:37.377 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 13:31:37.391650 coreos-metadata[913]: Dec 13 13:31:37.391 INFO Fetch successful Dec 13 13:31:37.408002 coreos-metadata[913]: Dec 13 13:31:37.407 INFO wrote hostname ci-4186.0.0-a-a6ca590029 to /sysroot/etc/hostname Dec 13 13:31:37.411272 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:31:37.420469 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:31:37.457125 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:31:37.463327 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:31:37.468981 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:31:38.381871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:31:38.390569 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:31:38.396578 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:31:38.408145 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:31:38.414822 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:38.434213 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:31:38.445230 ignition[1030]: INFO : Ignition 2.20.0 Dec 13 13:31:38.445230 ignition[1030]: INFO : Stage: mount Dec 13 13:31:38.452341 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:38.452341 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:38.452341 ignition[1030]: INFO : mount: mount passed Dec 13 13:31:38.452341 ignition[1030]: INFO : Ignition finished successfully Dec 13 13:31:38.447154 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:31:38.463025 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:31:38.477564 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:31:38.494137 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1041) Dec 13 13:31:38.494212 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:38.497518 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:38.500695 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:31:38.506762 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:31:38.508361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:31:38.534388 ignition[1058]: INFO : Ignition 2.20.0 Dec 13 13:31:38.534388 ignition[1058]: INFO : Stage: files Dec 13 13:31:38.538748 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:38.538748 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:38.538748 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:31:38.564883 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:31:38.564883 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:31:38.688681 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:31:38.693217 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:31:38.697162 unknown[1058]: wrote ssh authorized keys file for user: core Dec 13 13:31:38.700004 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:31:38.712248 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:31:38.717433 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:31:38.766702 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:31:38.891092 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:31:38.897792 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:31:38.897792 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 13:31:39.424852 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 13:31:39.544505 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 
13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 13:31:40.006270 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 13:31:40.269889 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:40.269889 ignition[1058]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:31:40.294947 ignition[1058]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:31:40.301639 ignition[1058]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:31:40.301639 ignition[1058]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 13:31:40.301639 ignition[1058]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:31:40.316716 ignition[1058]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:31:40.316716 ignition[1058]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:31:40.316716 ignition[1058]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:31:40.316716 ignition[1058]: INFO : files: files passed Dec 13 13:31:40.316716 ignition[1058]: INFO : Ignition finished successfully Dec 13 13:31:40.303585 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:31:40.335695 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:31:40.345203 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:31:40.351753 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:31:40.353241 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
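The files stage finishes by writing /sysroot/etc/.ignition-result.json. Its exact schema is not shown in the log, so the sketch below only loads and pretty-prints whatever JSON the file holds once the system is running (the /sysroot prefix applies only inside the initrd):

    import json

    # Path taken from the Ignition log entry above; after switch-root it is /etc/.ignition-result.json.
    with open("/etc/.ignition-result.json") as f:
        result = json.load(f)

    print(json.dumps(result, indent=2, sort_keys=True))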
Dec 13 13:31:40.379776 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:31:40.379776 initrd-setup-root-after-ignition[1086]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:31:40.392320 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:31:40.383706 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:31:40.388368 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:31:40.409611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:31:40.439962 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:31:40.440089 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:31:40.447022 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:31:40.455652 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:31:40.456651 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:31:40.466745 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:31:40.481611 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:31:40.493619 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:31:40.507169 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:31:40.510358 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:31:40.516686 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:31:40.524225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:31:40.524417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:31:40.533307 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:31:40.536249 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:31:40.543435 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:31:40.544540 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:31:40.544998 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:31:40.545440 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:31:40.546099 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:31:40.546513 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:31:40.546885 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:31:40.547502 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:31:40.547923 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:31:40.548061 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:31:40.548831 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:31:40.549368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:31:40.549774 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 13 13:31:40.579974 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:31:40.583577 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:31:40.583733 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:31:40.597986 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:31:40.600712 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:31:40.606448 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:31:40.606593 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:31:40.611053 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 13:31:40.611184 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:31:40.628459 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:31:40.672648 ignition[1110]: INFO : Ignition 2.20.0 Dec 13 13:31:40.672648 ignition[1110]: INFO : Stage: umount Dec 13 13:31:40.672648 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:40.672648 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:40.672648 ignition[1110]: INFO : umount: umount passed Dec 13 13:31:40.672648 ignition[1110]: INFO : Ignition finished successfully Dec 13 13:31:40.640871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:31:40.645929 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:31:40.646105 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:31:40.655250 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:31:40.655479 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:31:40.668569 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:31:40.668661 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:31:40.672983 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:31:40.673067 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:31:40.688997 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:31:40.689071 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:31:40.694567 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:31:40.694622 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:31:40.697236 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:31:40.697283 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:31:40.701575 systemd[1]: Stopped target network.target - Network. Dec 13 13:31:40.701971 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:31:40.702015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:31:40.702473 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:31:40.703327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:31:40.758088 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:31:40.765306 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:31:40.768931 systemd[1]: Stopped target sockets.target - Socket Units. 
Dec 13 13:31:40.772623 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:31:40.772678 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:31:40.777420 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:31:40.777460 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:31:40.780395 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:31:40.780457 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:31:40.783228 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:31:40.783275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:31:40.802425 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:31:40.810005 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:31:40.819347 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:31:40.822436 systemd-networkd[870]: eth0: DHCPv6 lease lost Dec 13 13:31:40.824601 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:31:40.824697 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:31:40.830271 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:31:40.830348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:31:40.847550 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:31:40.850046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:31:40.850116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:31:40.862183 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:31:40.865286 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:31:40.865424 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:31:40.884401 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:31:40.886153 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:31:40.891398 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:31:40.891445 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:31:40.903030 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:31:40.903091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:31:40.909898 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:31:40.910026 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:31:40.916692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:31:40.916765 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:31:40.921694 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:31:40.941746 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: Data path switched from VF: enP34987s1 Dec 13 13:31:40.921734 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:31:40.930374 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:31:40.930444 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:31:40.942036 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Dec 13 13:31:40.942112 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:31:40.948137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:31:40.948181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:40.967546 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:31:40.970300 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:31:40.970363 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:31:40.977103 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:31:40.979175 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:31:40.983571 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:31:40.983620 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:31:40.991653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:31:40.991705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:41.006646 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:31:41.006764 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:31:41.014942 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:31:41.015030 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:31:41.394867 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:31:41.395026 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:31:41.400249 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:31:41.405670 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:31:41.405741 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:31:41.417659 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:31:41.847339 systemd[1]: Switching root. 
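Everything from this point on is the initrd journal being stopped and replayed after the switch to the real root; on the booted system the same entries can be pulled back out of the journal, as in this minimal sketch (assuming journalctl is available on PATH):

    import subprocess

    # Current-boot journal entries for the Ignition files stage (unit name taken from the log above).
    out = subprocess.run(
        ["journalctl", "-b", "-u", "ignition-files.service", "--no-pager"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)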
Dec 13 13:31:41.936185 systemd-journald[177]: Journal stopped Dec 13 13:31:31.093308 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024 Dec 13 13:31:31.093348 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:31:31.093363 kernel: BIOS-provided physical RAM map: Dec 13 13:31:31.093374 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 13:31:31.093383 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 13 13:31:31.093393 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Dec 13 13:31:31.093406 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Dec 13 13:31:31.093417 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 13 13:31:31.093431 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 13 13:31:31.093442 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 13 13:31:31.093452 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 13 13:31:31.093463 kernel: printk: bootconsole [earlyser0] enabled Dec 13 13:31:31.093474 kernel: NX (Execute Disable) protection: active Dec 13 13:31:31.093485 kernel: APIC: Static calls initialized Dec 13 13:31:31.093501 kernel: efi: EFI v2.7 by Microsoft Dec 13 13:31:31.093513 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 Dec 13 13:31:31.093525 kernel: random: crng init done Dec 13 13:31:31.093538 kernel: secureboot: Secure boot disabled Dec 13 13:31:31.093549 kernel: SMBIOS 3.1.0 present. 
Dec 13 13:31:31.093561 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Dec 13 13:31:31.093573 kernel: Hypervisor detected: Microsoft Hyper-V Dec 13 13:31:31.093585 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Dec 13 13:31:31.093597 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Dec 13 13:31:31.093609 kernel: Hyper-V: Nested features: 0x1e0101 Dec 13 13:31:31.093624 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 13 13:31:31.093635 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 13 13:31:31.093648 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 13:31:31.093660 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 13:31:31.093673 kernel: tsc: Marking TSC unstable due to running on Hyper-V Dec 13 13:31:31.093686 kernel: tsc: Detected 2593.905 MHz processor Dec 13 13:31:31.093698 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 13:31:31.093711 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 13:31:31.093724 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Dec 13 13:31:31.093739 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 13:31:31.093752 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 13:31:31.093764 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Dec 13 13:31:31.093776 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Dec 13 13:31:31.093789 kernel: Using GB pages for direct mapping Dec 13 13:31:31.093801 kernel: ACPI: Early table checksum verification disabled Dec 13 13:31:31.093814 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 13 13:31:31.093832 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093848 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093861 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Dec 13 13:31:31.093895 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 13 13:31:31.093909 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093922 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093935 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093952 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093966 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093979 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.093992 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:31:31.094006 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 13 13:31:31.094019 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Dec 13 13:31:31.094032 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 13 13:31:31.094046 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 13 13:31:31.094062 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] Dec 13 13:31:31.094078 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 13:31:31.094092 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Dec 13 13:31:31.094105 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 13:31:31.094118 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 13:31:31.094132 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 13:31:31.094145 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 13:31:31.094158 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 13:31:31.094171 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 13:31:31.094184 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 13:31:31.094200 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 13:31:31.094212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 13:31:31.094226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 13:31:31.094240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 13:31:31.094255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 13:31:31.094268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 13:31:31.094282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 13:31:31.094296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 13:31:31.094314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 13:31:31.094328 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 13:31:31.094342 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 13:31:31.094356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 13:31:31.094370 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 13:31:31.094385 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 13:31:31.094400 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 13:31:31.094414 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 13:31:31.094428 kernel: Zone ranges: Dec 13 13:31:31.094444 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 13:31:31.094458 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 13:31:31.094471 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 13:31:31.094484 kernel: Movable zone start for each node Dec 13 13:31:31.094498 kernel: Early memory node ranges Dec 13 13:31:31.094511 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 13:31:31.094524 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 13:31:31.094537 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 13:31:31.094550 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 13:31:31.094566 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 13:31:31.094578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 13:31:31.094591 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 13:31:31.094604 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Dec 13 13:31:31.094616 kernel: ACPI: 
PM-Timer IO Port: 0x408 Dec 13 13:31:31.094629 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 13:31:31.094642 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 13:31:31.094655 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 13:31:31.094668 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 13:31:31.094684 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 13:31:31.094697 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 13:31:31.094711 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 13:31:31.094724 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 13:31:31.094737 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 13:31:31.094751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 13:31:31.094764 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 13:31:31.094778 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 13:31:31.094790 kernel: pcpu-alloc: [0] 0 1 Dec 13 13:31:31.094806 kernel: Hyper-V: PV spinlocks enabled Dec 13 13:31:31.094819 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 13:31:31.094835 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:31:31.094849 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 13:31:31.094863 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 13:31:31.094896 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:31:31.094907 kernel: Fallback order for Node 0: 0 Dec 13 13:31:31.094919 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 13:31:31.094934 kernel: Policy zone: Normal Dec 13 13:31:31.094956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:31:31.094970 kernel: software IO TLB: area num 2. Dec 13 13:31:31.094987 kernel: Memory: 8067572K/8387460K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 319632K reserved, 0K cma-reserved) Dec 13 13:31:31.095001 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:31:31.095015 kernel: ftrace: allocating 37874 entries in 148 pages Dec 13 13:31:31.095029 kernel: ftrace: allocated 148 pages with 3 groups Dec 13 13:31:31.095043 kernel: Dynamic Preempt: voluntary Dec 13 13:31:31.095056 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:31:31.095075 kernel: rcu: RCU event tracing is enabled. Dec 13 13:31:31.095089 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:31:31.095105 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:31:31.095119 kernel: Rude variant of Tasks RCU enabled. Dec 13 13:31:31.095133 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:31:31.095147 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 13:31:31.095161 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:31:31.095176 kernel: Using NULL legacy PIC Dec 13 13:31:31.095192 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 13:31:31.095205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:31:31.095220 kernel: Console: colour dummy device 80x25 Dec 13 13:31:31.095233 kernel: printk: console [tty1] enabled Dec 13 13:31:31.095246 kernel: printk: console [ttyS0] enabled Dec 13 13:31:31.095261 kernel: printk: bootconsole [earlyser0] disabled Dec 13 13:31:31.095275 kernel: ACPI: Core revision 20230628 Dec 13 13:31:31.095289 kernel: Failed to register legacy timer interrupt Dec 13 13:31:31.095302 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 13:31:31.095319 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 13:31:31.095332 kernel: Hyper-V: Using IPI hypercalls Dec 13 13:31:31.095347 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 13 13:31:31.095361 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 13 13:31:31.095375 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 13 13:31:31.095389 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 13 13:31:31.095403 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 13 13:31:31.095417 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 13 13:31:31.095430 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Dec 13 13:31:31.095447 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 13:31:31.095460 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 13:31:31.095474 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 13:31:31.095488 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 13:31:31.095502 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 13:31:31.095516 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 13:31:31.095531 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 13 13:31:31.095544 kernel: RETBleed: Vulnerable Dec 13 13:31:31.095556 kernel: Speculative Store Bypass: Vulnerable Dec 13 13:31:31.095570 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 13:31:31.095584 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 13:31:31.095594 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 13:31:31.095606 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 13:31:31.095619 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 13:31:31.095634 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 13:31:31.095646 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 13:31:31.095662 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 13:31:31.095675 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 13:31:31.095689 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 13:31:31.095703 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 13:31:31.095718 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 13:31:31.095736 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 13:31:31.095751 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 13:31:31.095765 kernel: Freeing SMP alternatives memory: 32K Dec 13 13:31:31.095780 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:31:31.095795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:31:31.095808 kernel: landlock: Up and running. Dec 13 13:31:31.095822 kernel: SELinux: Initializing. Dec 13 13:31:31.095837 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:31:31.095852 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:31:31.095866 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 13:31:31.096567 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:31:31.096585 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:31:31.096594 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:31:31.096604 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 13:31:31.096614 kernel: signal: max sigframe size: 3632 Dec 13 13:31:31.096622 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:31:31.096634 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:31:31.096642 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 13:31:31.096651 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:31:31.096661 kernel: smpboot: x86: Booting SMP configuration: Dec 13 13:31:31.096671 kernel: .... node #0, CPUs: #1 Dec 13 13:31:31.096683 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 13:31:31.096692 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 13:31:31.096703 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:31:31.096711 kernel: smpboot: Max logical packages: 1 Dec 13 13:31:31.096719 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Dec 13 13:31:31.096727 kernel: devtmpfs: initialized Dec 13 13:31:31.096735 kernel: x86/mm: Memory block size: 128MB Dec 13 13:31:31.096743 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 13:31:31.096754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:31:31.096762 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:31:31.096770 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:31:31.096778 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:31:31.096786 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:31:31.096794 kernel: audit: type=2000 audit(1734096689.028:1): state=initialized audit_enabled=0 res=1 Dec 13 13:31:31.096802 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:31:31.096812 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 13:31:31.096823 kernel: cpuidle: using governor menu Dec 13 13:31:31.096831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:31:31.096842 kernel: dca service started, version 1.12.1 Dec 13 13:31:31.096850 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Dec 13 13:31:31.096859 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 13:31:31.096869 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:31:31.096917 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:31:31.096925 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:31:31.096936 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:31:31.096947 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:31:31.096957 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:31:31.096967 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:31:31.096978 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:31:31.096986 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:31:31.096995 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 13:31:31.097005 kernel: ACPI: Interpreter enabled Dec 13 13:31:31.097013 kernel: ACPI: PM: (supports S0 S5) Dec 13 13:31:31.097024 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 13:31:31.097035 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 13:31:31.097044 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 13:31:31.097055 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 13:31:31.097063 kernel: iommu: Default domain type: Translated Dec 13 13:31:31.097073 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 13:31:31.097082 kernel: efivars: Registered efivars operations Dec 13 13:31:31.097090 kernel: PCI: Using ACPI for IRQ routing Dec 13 13:31:31.097101 kernel: PCI: System does not support PCI Dec 13 13:31:31.097109 kernel: vgaarb: loaded Dec 13 13:31:31.097118 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 13:31:31.097130 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:31:31.097138 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:31:31.097150 kernel: 
pnp: PnP ACPI init Dec 13 13:31:31.097158 kernel: pnp: PnP ACPI: found 3 devices Dec 13 13:31:31.097167 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 13:31:31.097177 kernel: NET: Registered PF_INET protocol family Dec 13 13:31:31.097185 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 13:31:31.097194 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 13:31:31.097204 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:31:31.097212 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:31:31.097223 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 13:31:31.097231 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 13:31:31.097241 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 13:31:31.097251 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 13:31:31.097259 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:31:31.097270 kernel: NET: Registered PF_XDP protocol family Dec 13 13:31:31.097277 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:31:31.097290 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 13:31:31.097299 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Dec 13 13:31:31.097309 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:31:31.097318 kernel: Initialise system trusted keyrings Dec 13 13:31:31.097326 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 13:31:31.097337 kernel: Key type asymmetric registered Dec 13 13:31:31.097345 kernel: Asymmetric key parser 'x509' registered Dec 13 13:31:31.097353 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:31:31.097363 kernel: io scheduler mq-deadline registered Dec 13 13:31:31.097374 kernel: io scheduler kyber registered Dec 13 13:31:31.097385 kernel: io scheduler bfq registered Dec 13 13:31:31.097394 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:31:31.097404 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:31:31.097412 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:31:31.097423 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 13:31:31.097432 kernel: i8042: PNP: No PS/2 controller found. 
Dec 13 13:31:31.097581 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 13:31:31.097679 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T13:31:30 UTC (1734096690) Dec 13 13:31:31.097764 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 13:31:31.097778 kernel: intel_pstate: CPU model not supported Dec 13 13:31:31.097787 kernel: efifb: probing for efifb Dec 13 13:31:31.097796 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 13:31:31.097806 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 13:31:31.097817 kernel: efifb: scrolling: redraw Dec 13 13:31:31.097825 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 13:31:31.097834 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:31:31.097846 kernel: fb0: EFI VGA frame buffer device Dec 13 13:31:31.097854 kernel: pstore: Using crash dump compression: deflate Dec 13 13:31:31.097866 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 13:31:31.097884 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:31:31.097894 kernel: Segment Routing with IPv6 Dec 13 13:31:31.097902 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:31:31.097912 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:31:31.097921 kernel: Key type dns_resolver registered Dec 13 13:31:31.097929 kernel: IPI shorthand broadcast: enabled Dec 13 13:31:31.097943 kernel: sched_clock: Marking stable (911151000, 49321300)->(1207662700, -247190400) Dec 13 13:31:31.097951 kernel: registered taskstats version 1 Dec 13 13:31:31.097962 kernel: Loading compiled-in X.509 certificates Dec 13 13:31:31.097971 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:31:31.097978 kernel: Key type .fscrypt registered Dec 13 13:31:31.097989 kernel: Key type fscrypt-provisioning registered Dec 13 13:31:31.097997 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 13:31:31.098007 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:31:31.098016 kernel: ima: No architecture policies found Dec 13 13:31:31.098027 kernel: clk: Disabling unused clocks Dec 13 13:31:31.098037 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:31:31.098046 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:31:31.098055 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:31:31.098065 kernel: Run /init as init process Dec 13 13:31:31.098073 kernel: with arguments: Dec 13 13:31:31.098082 kernel: /init Dec 13 13:31:31.098091 kernel: with environment: Dec 13 13:31:31.098099 kernel: HOME=/ Dec 13 13:31:31.098111 kernel: TERM=linux Dec 13 13:31:31.098119 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:31:31.098131 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:31:31.098142 systemd[1]: Detected virtualization microsoft. Dec 13 13:31:31.098152 systemd[1]: Detected architecture x86-64. Dec 13 13:31:31.098162 systemd[1]: Running in initrd. Dec 13 13:31:31.098170 systemd[1]: No hostname configured, using default hostname. Dec 13 13:31:31.098181 systemd[1]: Hostname set to . Dec 13 13:31:31.098193 systemd[1]: Initializing machine ID from random generator. 
Dec 13 13:31:31.098203 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:31:31.098213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:31:31.098225 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:31:31.098234 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:31:31.098245 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:31:31.098254 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:31:31.098267 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:31:31.098278 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:31:31.098287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:31:31.098299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:31:31.098307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:31:31.098319 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:31:31.098327 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:31:31.098341 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:31:31.098349 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:31:31.098361 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:31:31.098369 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:31:31.098379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:31:31.098390 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:31:31.098398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:31:31.098410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:31:31.098418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:31:31.098433 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:31:31.098441 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:31:31.098450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:31:31.098461 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:31:31.098470 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:31:31.098482 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:31:31.098490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:31:31.098517 systemd-journald[177]: Collecting audit messages is disabled. Dec 13 13:31:31.098542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:31:31.098554 systemd-journald[177]: Journal started Dec 13 13:31:31.098577 systemd-journald[177]: Runtime Journal (/run/log/journal/b1e903112dfd496d9472a6235ff23cb9) is 8.0M, max 158.8M, 150.8M free. Dec 13 13:31:31.111894 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:31:31.115107 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Dec 13 13:31:31.122104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:31:31.126160 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:31:31.133388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:31.136509 systemd-modules-load[178]: Inserted module 'overlay' Dec 13 13:31:31.152175 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:31:31.163062 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:31:31.180212 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:31:31.196888 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:31:31.197089 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:31.206681 kernel: Bridge firewalling registered Dec 13 13:31:31.206775 systemd-modules-load[178]: Inserted module 'br_netfilter' Dec 13 13:31:31.209305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:31:31.216303 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:31:31.222664 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:31:31.233040 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:31:31.241036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:31:31.248860 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:31:31.257830 dracut-cmdline[204]: dracut-dracut-053 Dec 13 13:31:31.264177 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:31:31.283357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:31:31.298077 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:31:31.300662 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:31:31.351229 systemd-resolved[246]: Positive Trust Anchors: Dec 13 13:31:31.351243 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:31:31.351298 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:31:31.377911 systemd-resolved[246]: Defaulting to hostname 'linux'. 
Dec 13 13:31:31.379188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:31:31.384423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:31:31.395886 kernel: SCSI subsystem initialized Dec 13 13:31:31.406893 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:31:31.418893 kernel: iscsi: registered transport (tcp) Dec 13 13:31:31.440362 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:31:31.440427 kernel: QLogic iSCSI HBA Driver Dec 13 13:31:31.476052 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:31:31.485123 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:31:31.512167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:31:31.512232 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:31:31.512885 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:31:31.554895 kernel: raid6: avx512x4 gen() 18811 MB/s Dec 13 13:31:31.574892 kernel: raid6: avx512x2 gen() 18659 MB/s Dec 13 13:31:31.593886 kernel: raid6: avx512x1 gen() 18688 MB/s Dec 13 13:31:31.612884 kernel: raid6: avx2x4 gen() 18708 MB/s Dec 13 13:31:31.631891 kernel: raid6: avx2x2 gen() 18612 MB/s Dec 13 13:31:31.652127 kernel: raid6: avx2x1 gen() 14036 MB/s Dec 13 13:31:31.652177 kernel: raid6: using algorithm avx512x4 gen() 18811 MB/s Dec 13 13:31:31.673194 kernel: raid6: .... xor() 6903 MB/s, rmw enabled Dec 13 13:31:31.673236 kernel: raid6: using avx512x2 recovery algorithm Dec 13 13:31:31.695898 kernel: xor: automatically using best checksumming function avx Dec 13 13:31:31.837905 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:31:31.847046 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:31:31.858052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:31:31.871629 systemd-udevd[396]: Using default interface naming scheme 'v255'. Dec 13 13:31:31.876072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:31:31.896032 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:31:31.910219 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Dec 13 13:31:31.936858 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:31:31.950209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:31:31.989235 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:31:32.003032 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:31:32.025405 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:31:32.034780 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:31:32.040080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:31:32.040503 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:31:32.058900 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:31:32.059260 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:31:32.082704 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:31:32.113897 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 13:31:32.113941 kernel: AES CTR mode by8 optimization enabled Dec 13 13:31:32.121890 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 13:31:32.122745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:31:32.124610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:32.133150 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:31:32.139963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:31:32.140239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:32.148361 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:31:32.158206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:31:32.183599 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 13:31:32.183678 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 13:31:32.185955 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 13:31:32.189798 kernel: scsi host1: storvsc_host_t Dec 13 13:31:32.193904 kernel: scsi host0: storvsc_host_t Dec 13 13:31:32.196892 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:31:32.196920 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 13:31:32.213157 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 13:31:32.213194 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 13:31:32.214062 kernel: PTP clock support registered Dec 13 13:31:32.217205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:32.227355 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:31:32.240677 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 13:31:32.246888 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 13:31:32.265474 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 13:31:32.265512 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 13:31:32.265530 kernel: hv_vmbus: registering driver hv_utils Dec 13 13:31:32.265904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:32.277010 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 13:31:32.240292 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 13:31:32.248033 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 13:31:32.248059 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 13:31:32.248073 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 13:31:32.248232 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 13:31:32.248247 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 13:31:32.248263 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 13:31:32.251492 systemd-journald[177]: Time jumped backwards, rotating. Dec 13 13:31:32.233111 systemd-resolved[246]: Clock change detected. Flushing caches. 
Dec 13 13:31:32.268675 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 13:31:32.283018 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 13:31:32.283199 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 13:31:32.283437 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 13:31:32.283692 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 13:31:32.283855 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:31:32.283872 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 13:31:32.395093 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: VF slot 1 added Dec 13 13:31:32.406797 kernel: hv_vmbus: registering driver hv_pci Dec 13 13:31:32.406841 kernel: hv_pci 28c0d482-88ab-410a-8958-fa717da5ccff: PCI VMBus probing: Using version 0x10004 Dec 13 13:31:32.450839 kernel: hv_pci 28c0d482-88ab-410a-8958-fa717da5ccff: PCI host bridge to bus 88ab:00 Dec 13 13:31:32.451262 kernel: pci_bus 88ab:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 13:31:32.451598 kernel: pci_bus 88ab:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 13:31:32.451760 kernel: pci 88ab:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 13:31:32.451954 kernel: pci 88ab:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 13:31:32.452127 kernel: pci 88ab:00:02.0: enabling Extended Tags Dec 13 13:31:32.452301 kernel: pci 88ab:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 88ab:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 13:31:32.452504 kernel: pci_bus 88ab:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 13:31:32.452655 kernel: pci 88ab:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 13:31:32.613152 kernel: mlx5_core 88ab:00:02.0: enabling device (0000 -> 0002) Dec 13 13:31:32.845297 kernel: mlx5_core 88ab:00:02.0: firmware version: 14.30.5000 Dec 13 13:31:32.845539 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: VF registering: eth1 Dec 13 13:31:32.845696 kernel: mlx5_core 88ab:00:02.0 eth1: joined to eth0 Dec 13 13:31:32.845869 kernel: mlx5_core 88ab:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Dec 13 13:31:32.853406 kernel: mlx5_core 88ab:00:02.0 enP34987s1: renamed from eth1 Dec 13 13:31:32.856881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 13:31:32.945452 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (445) Dec 13 13:31:32.953406 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (458) Dec 13 13:31:32.970332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 13:31:32.982431 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 13:31:32.990722 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 13:31:32.993314 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 13:31:33.011601 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 13 13:31:33.032402 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:31:33.040401 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:31:34.047495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:31:34.049352 disk-uuid[600]: The operation has completed successfully. Dec 13 13:31:34.133031 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:31:34.133144 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:31:34.147557 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:31:34.153630 sh[686]: Success Dec 13 13:31:34.187325 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 13:31:34.443091 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:31:34.458503 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:31:34.461207 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:31:34.498190 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:31:34.498269 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:34.502043 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:31:34.504993 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:31:34.507927 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:31:34.928219 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:31:34.931889 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:31:34.943901 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:31:34.949538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:31:34.971841 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:34.971893 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:34.971911 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:31:34.993770 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:31:35.007273 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:35.006839 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:31:35.017149 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:31:35.028604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:31:35.044998 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:31:35.054503 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:31:35.074735 systemd-networkd[870]: lo: Link UP Dec 13 13:31:35.074745 systemd-networkd[870]: lo: Gained carrier Dec 13 13:31:35.076874 systemd-networkd[870]: Enumeration completed Dec 13 13:31:35.077354 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:31:35.081854 systemd[1]: Reached target network.target - Network. Dec 13 13:31:35.090963 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 13:31:35.090972 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:31:35.153405 kernel: mlx5_core 88ab:00:02.0 enP34987s1: Link up Dec 13 13:31:35.192539 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: Data path switched to VF: enP34987s1 Dec 13 13:31:35.192654 systemd-networkd[870]: enP34987s1: Link UP Dec 13 13:31:35.192825 systemd-networkd[870]: eth0: Link UP Dec 13 13:31:35.193055 systemd-networkd[870]: eth0: Gained carrier Dec 13 13:31:35.193069 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:31:35.198597 systemd-networkd[870]: enP34987s1: Gained carrier Dec 13 13:31:35.236449 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 13:31:36.076286 ignition[845]: Ignition 2.20.0 Dec 13 13:31:36.076302 ignition[845]: Stage: fetch-offline Dec 13 13:31:36.078027 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:31:36.076349 ignition[845]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:36.076359 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:36.076497 ignition[845]: parsed url from cmdline: "" Dec 13 13:31:36.076502 ignition[845]: no config URL provided Dec 13 13:31:36.076509 ignition[845]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:31:36.076519 ignition[845]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:31:36.094119 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 13:31:36.076525 ignition[845]: failed to fetch config: resource requires networking Dec 13 13:31:36.076891 ignition[845]: Ignition finished successfully Dec 13 13:31:36.107749 ignition[879]: Ignition 2.20.0 Dec 13 13:31:36.107757 ignition[879]: Stage: fetch Dec 13 13:31:36.107938 ignition[879]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:36.107948 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:36.108040 ignition[879]: parsed url from cmdline: "" Dec 13 13:31:36.108043 ignition[879]: no config URL provided Dec 13 13:31:36.108049 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:31:36.108058 ignition[879]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:31:36.108084 ignition[879]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 13:31:36.200084 ignition[879]: GET result: OK Dec 13 13:31:36.200193 ignition[879]: config has been read from IMDS userdata Dec 13 13:31:36.200230 ignition[879]: parsing config with SHA512: e012ece9ff90613d3f668751ef1e64a6fd4b81dee540c9d734cdb454f4b00ef665888aae951194f5854836647153ca7955b1ca6140c4bef15a1458d02e2b9f33 Dec 13 13:31:36.205789 unknown[879]: fetched base config from "system" Dec 13 13:31:36.205801 unknown[879]: fetched base config from "system" Dec 13 13:31:36.206155 ignition[879]: fetch: fetch complete Dec 13 13:31:36.205808 unknown[879]: fetched user config from "azure" Dec 13 13:31:36.206161 ignition[879]: fetch: fetch passed Dec 13 13:31:36.207937 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:31:36.206207 ignition[879]: Ignition finished successfully Dec 13 13:31:36.221099 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
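The fetch stage above pulls the Ignition config from the Azure IMDS userData endpoint and then logs a SHA512 over what it parsed. The sketch below is a hedged equivalent using only the Python standard library, not Ignition's implementation: the Metadata: true header is what IMDS requires, and the base64 decode reflects the assumption that IMDS serves userData base64-encoded, which is how Ignition's Azure provider treats it.

# Illustrative sketch only: fetch Azure IMDS userData, decode it, and report its
# SHA512, mirroring the "parsing config with SHA512: ..." line in the log above.
import base64
import hashlib
import urllib.request

IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/userData"
            "?api-version=2021-01-01&format=text")

def fetch_userdata() -> bytes:
    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        encoded = resp.read()
    return base64.b64decode(encoded)  # assumption: userData arrives base64-encoded

if __name__ == "__main__":
    config = fetch_userdata()
    print("SHA512:", hashlib.sha512(config).hexdigest())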
Dec 13 13:31:36.233922 ignition[886]: Ignition 2.20.0 Dec 13 13:31:36.233933 ignition[886]: Stage: kargs Dec 13 13:31:36.234160 ignition[886]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:36.234173 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:36.235065 ignition[886]: kargs: kargs passed Dec 13 13:31:36.235106 ignition[886]: Ignition finished successfully Dec 13 13:31:36.244320 systemd-networkd[870]: enP34987s1: Gained IPv6LL Dec 13 13:31:36.244610 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:31:36.257533 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:31:36.272347 ignition[892]: Ignition 2.20.0 Dec 13 13:31:36.272356 ignition[892]: Stage: disks Dec 13 13:31:36.272593 ignition[892]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:36.272607 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:36.281309 ignition[892]: disks: disks passed Dec 13 13:31:36.281351 ignition[892]: Ignition finished successfully Dec 13 13:31:36.286116 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:31:36.288899 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:31:36.294027 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:31:36.297465 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:31:36.300172 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:31:36.306166 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:31:36.320644 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:31:36.380551 systemd-fsck[900]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 13:31:36.384726 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:31:36.396561 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:31:36.496737 kernel: EXT4-fs (sda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:31:36.497420 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:31:36.500370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:31:36.504984 systemd-networkd[870]: eth0: Gained IPv6LL Dec 13 13:31:36.556487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:31:36.565986 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:31:36.578592 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 13:31:36.585672 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:31:36.585717 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:31:36.593408 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (911) Dec 13 13:31:36.604053 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:36.604108 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:36.604141 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:31:36.609711 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:31:36.621501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:31:36.628866 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:31:36.634035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:31:37.369930 coreos-metadata[913]: Dec 13 13:31:37.369 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 13:31:37.377155 coreos-metadata[913]: Dec 13 13:31:37.377 INFO Fetch successful Dec 13 13:31:37.380372 coreos-metadata[913]: Dec 13 13:31:37.377 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 13:31:37.391650 coreos-metadata[913]: Dec 13 13:31:37.391 INFO Fetch successful Dec 13 13:31:37.408002 coreos-metadata[913]: Dec 13 13:31:37.407 INFO wrote hostname ci-4186.0.0-a-a6ca590029 to /sysroot/etc/hostname Dec 13 13:31:37.411272 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:31:37.420469 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:31:37.457125 initrd-setup-root[948]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:31:37.463327 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:31:37.468981 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:31:38.381871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:31:38.390569 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:31:38.396578 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:31:38.408145 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:31:38.414822 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:38.434213 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:31:38.445230 ignition[1030]: INFO : Ignition 2.20.0 Dec 13 13:31:38.445230 ignition[1030]: INFO : Stage: mount Dec 13 13:31:38.452341 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:38.452341 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:38.452341 ignition[1030]: INFO : mount: mount passed Dec 13 13:31:38.452341 ignition[1030]: INFO : Ignition finished successfully Dec 13 13:31:38.447154 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:31:38.463025 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:31:38.477564 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:31:38.494137 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1041) Dec 13 13:31:38.494212 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:31:38.497518 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:31:38.500695 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:31:38.506762 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:31:38.508361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:31:38.534388 ignition[1058]: INFO : Ignition 2.20.0 Dec 13 13:31:38.534388 ignition[1058]: INFO : Stage: files Dec 13 13:31:38.538748 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:38.538748 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:38.538748 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:31:38.564883 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:31:38.564883 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:31:38.688681 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:31:38.693217 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:31:38.697162 unknown[1058]: wrote ssh authorized keys file for user: core Dec 13 13:31:38.700004 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:31:38.712248 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:31:38.717433 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:31:38.766702 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:31:38.891092 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:31:38.897792 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:31:38.897792 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 13:31:39.424852 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 13:31:39.544505 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 
13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:39.549649 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 13:31:40.006270 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 13:31:40.269889 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 13:31:40.269889 ignition[1058]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:31:40.294947 ignition[1058]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:31:40.301639 ignition[1058]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:31:40.301639 ignition[1058]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 13:31:40.301639 ignition[1058]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:31:40.316716 ignition[1058]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:31:40.316716 ignition[1058]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:31:40.316716 ignition[1058]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:31:40.316716 ignition[1058]: INFO : files: files passed Dec 13 13:31:40.316716 ignition[1058]: INFO : Ignition finished successfully Dec 13 13:31:40.303585 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:31:40.335695 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:31:40.345203 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:31:40.351753 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:31:40.353241 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
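The files stage above adds SSH keys for the core user, downloads the helm and cilium archives and the kubernetes sysext image, installs and enables prepare-helm.service, and links /etc/extensions/kubernetes.raw to the downloaded image. The config that drove it is not reproduced in the log; the Python sketch below only illustrates the general shape of an Ignition v3-style config that would produce operations like these, with the SSH key and the unit body as placeholders and only two of the logged downloads included.

# Hypothetical illustration of the shape of an Ignition v3-style config that would
# drive a "files" stage like the one logged above. Field names follow the Ignition
# spec as documented; the key, unit body, and selection of entries are placeholders.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=placeholder\n\n[Install]\nWantedBy=multi-user.target\n"}
        ]
    },
}

print(json.dumps(config, indent=2))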
Dec 13 13:31:40.379776 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:31:40.379776 initrd-setup-root-after-ignition[1086]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:31:40.392320 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:31:40.383706 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:31:40.388368 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:31:40.409611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:31:40.439962 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:31:40.440089 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:31:40.447022 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:31:40.455652 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:31:40.456651 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:31:40.466745 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:31:40.481611 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:31:40.493619 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:31:40.507169 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:31:40.510358 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:31:40.516686 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:31:40.524225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:31:40.524417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:31:40.533307 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:31:40.536249 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:31:40.543435 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:31:40.544540 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:31:40.544998 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:31:40.545440 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:31:40.546099 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:31:40.546513 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:31:40.546885 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:31:40.547502 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:31:40.547923 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:31:40.548061 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:31:40.548831 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:31:40.549368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:31:40.549774 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 13 13:31:40.579974 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:31:40.583577 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:31:40.583733 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:31:40.597986 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:31:40.600712 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:31:40.606448 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:31:40.606593 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:31:40.611053 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 13:31:40.611184 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:31:40.628459 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:31:40.672648 ignition[1110]: INFO : Ignition 2.20.0 Dec 13 13:31:40.672648 ignition[1110]: INFO : Stage: umount Dec 13 13:31:40.672648 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:31:40.672648 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:31:40.672648 ignition[1110]: INFO : umount: umount passed Dec 13 13:31:40.672648 ignition[1110]: INFO : Ignition finished successfully Dec 13 13:31:40.640871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:31:40.645929 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:31:40.646105 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:31:40.655250 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:31:40.655479 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:31:40.668569 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:31:40.668661 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:31:40.672983 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:31:40.673067 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:31:40.688997 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:31:40.689071 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:31:40.694567 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:31:40.694622 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:31:40.697236 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:31:40.697283 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:31:40.701575 systemd[1]: Stopped target network.target - Network. Dec 13 13:31:40.701971 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:31:40.702015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:31:40.702473 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:31:40.703327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:31:40.758088 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:31:40.765306 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:31:40.768931 systemd[1]: Stopped target sockets.target - Socket Units. 
Dec 13 13:31:40.772623 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:31:40.772678 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:31:40.777420 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:31:40.777460 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:31:40.780395 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:31:40.780457 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:31:40.783228 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:31:40.783275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:31:40.802425 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:31:40.810005 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:31:40.819347 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:31:40.822436 systemd-networkd[870]: eth0: DHCPv6 lease lost Dec 13 13:31:40.824601 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:31:40.824697 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:31:40.830271 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:31:40.830348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:31:40.847550 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:31:40.850046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:31:40.850116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:31:40.862183 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:31:40.865286 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:31:40.865424 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:31:40.884401 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:31:40.886153 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:31:40.891398 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:31:40.891445 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:31:40.903030 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:31:40.903091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:31:40.909898 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:31:40.910026 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:31:40.916692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:31:40.916765 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:31:40.921694 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:31:40.941746 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: Data path switched from VF: enP34987s1 Dec 13 13:31:40.921734 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:31:40.930374 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:31:40.930444 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:31:40.942036 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Dec 13 13:31:40.942112 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:31:40.948137 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:31:40.948181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:31:40.967546 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:31:40.970300 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:31:40.970363 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:31:40.977103 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:31:40.979175 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:31:40.983571 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:31:40.983620 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:31:40.991653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:31:40.991705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:41.006646 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:31:41.006764 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:31:41.014942 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:31:41.015030 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:31:41.394867 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:31:41.395026 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:31:41.400249 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:31:41.405670 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:31:41.405741 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:31:41.417659 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:31:41.847339 systemd[1]: Switching root. Dec 13 13:31:41.936185 systemd-journald[177]: Journal stopped Dec 13 13:31:47.019553 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Dec 13 13:31:47.019593 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:31:47.019611 kernel: SELinux: policy capability open_perms=1 Dec 13 13:31:47.019624 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:31:47.019638 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:31:47.019651 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:31:47.019668 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:31:47.019685 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:31:47.019699 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:31:47.019713 kernel: audit: type=1403 audit(1734096703.294:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:31:47.019729 systemd[1]: Successfully loaded SELinux policy in 219.726ms. Dec 13 13:31:47.019745 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.460ms. 
Dec 13 13:31:47.019762 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:31:47.019778 systemd[1]: Detected virtualization microsoft. Dec 13 13:31:47.019797 systemd[1]: Detected architecture x86-64. Dec 13 13:31:47.019813 systemd[1]: Detected first boot. Dec 13 13:31:47.019829 systemd[1]: Hostname set to . Dec 13 13:31:47.019845 systemd[1]: Initializing machine ID from random generator. Dec 13 13:31:47.019861 zram_generator::config[1153]: No configuration found. Dec 13 13:31:47.019881 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:31:47.019896 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:31:47.019912 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:31:47.019927 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:31:47.019944 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:31:47.019960 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:31:47.019976 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:31:47.019995 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:31:47.020012 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:31:47.020029 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:31:47.020045 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:31:47.020061 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:31:47.020077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:31:47.020093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:31:47.020109 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:31:47.020128 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:31:47.020144 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:31:47.020161 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:31:47.020177 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:31:47.020192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:31:47.020209 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:31:47.020230 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:31:47.020246 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:31:47.020266 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:31:47.020283 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:31:47.020299 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:31:47.020315 systemd[1]: Reached target slices.target - Slice Units. 
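Because this is the first boot, systemd reports initializing the machine ID from the random generator. As a rough illustration only (systemd-machine-id-setup itself also honors other sources, such as an ID handed in by the platform or the kernel command line), a machine ID in the /etc/machine-id format is 128 random bits rendered as 32 lowercase hexadecimal characters:

# Rough illustration, not a replacement for systemd-machine-id-setup: generate a
# value shaped like /etc/machine-id (32 lowercase hex characters from 128 random bits).
import secrets

def new_machine_id() -> str:
    return secrets.token_hex(16)  # 16 random bytes -> 32 hex characters

if __name__ == "__main__":
    print(new_machine_id())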
Dec 13 13:31:47.020332 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:31:47.020348 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:31:47.020367 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:31:47.020391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:31:47.020413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:31:47.020430 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:31:47.020447 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:31:47.020464 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:31:47.020484 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:31:47.020500 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:31:47.020518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:31:47.020534 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:31:47.020551 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:31:47.020568 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:31:47.020585 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:31:47.020602 systemd[1]: Reached target machines.target - Containers. Dec 13 13:31:47.020622 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:31:47.020639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:31:47.020656 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:31:47.020673 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:31:47.020690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:31:47.020707 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:31:47.020724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:31:47.020742 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:31:47.020759 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:31:47.020779 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:31:47.020796 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:31:47.020813 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:31:47.020830 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:31:47.020846 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:31:47.020863 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:31:47.020880 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:31:47.020896 kernel: loop: module loaded Dec 13 13:31:47.020914 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Dec 13 13:31:47.020930 kernel: fuse: init (API version 7.39) Dec 13 13:31:47.020946 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:31:47.020963 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:31:47.021003 systemd-journald[1252]: Collecting audit messages is disabled. Dec 13 13:31:47.021041 systemd-journald[1252]: Journal started Dec 13 13:31:47.021075 systemd-journald[1252]: Runtime Journal (/run/log/journal/c7a212cf17d94bdc845e040cdfc88a6b) is 8.0M, max 158.8M, 150.8M free. Dec 13 13:31:46.304982 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:31:47.033491 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:31:47.033540 systemd[1]: Stopped verity-setup.service. Dec 13 13:31:46.449692 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 13:31:46.450072 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:31:47.045393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:31:47.052396 kernel: ACPI: bus type drm_connector registered Dec 13 13:31:47.060775 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:31:47.061468 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:31:47.064175 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:31:47.067246 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:31:47.069806 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:31:47.072871 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:31:47.075850 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:31:47.078509 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:31:47.081909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:31:47.085403 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:31:47.085713 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:31:47.089228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:31:47.089542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:31:47.093155 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:31:47.093582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:31:47.096842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:31:47.097006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:31:47.101082 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:31:47.101265 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:31:47.104535 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:31:47.104709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:31:47.108253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:31:47.111767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:31:47.115951 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 13 13:31:47.134505 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:31:47.150357 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:31:47.164340 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:31:47.168097 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:31:47.168235 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:31:47.174774 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:31:47.188593 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:31:47.192873 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:31:47.195821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:31:47.206528 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:31:47.210701 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:31:47.213989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:31:47.215405 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:31:47.218359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:31:47.222499 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:31:47.230588 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:31:47.242812 systemd-journald[1252]: Time spent on flushing to /var/log/journal/c7a212cf17d94bdc845e040cdfc88a6b is 98.894ms for 958 entries. Dec 13 13:31:47.242812 systemd-journald[1252]: System Journal (/var/log/journal/c7a212cf17d94bdc845e040cdfc88a6b) is 11.8M, max 2.6G, 2.6G free. Dec 13 13:31:47.395375 systemd-journald[1252]: Received client request to flush runtime journal. Dec 13 13:31:47.395469 systemd-journald[1252]: /var/log/journal/c7a212cf17d94bdc845e040cdfc88a6b/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Dec 13 13:31:47.395512 systemd-journald[1252]: Rotating system journal. Dec 13 13:31:47.395547 kernel: loop0: detected capacity change from 0 to 141000 Dec 13 13:31:47.243497 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:31:47.255156 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:31:47.259073 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:31:47.262836 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:31:47.277885 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:31:47.288371 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:31:47.297077 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:31:47.309566 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Dec 13 13:31:47.318540 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:31:47.342166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:31:47.358804 udevadm[1302]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 13:31:47.391805 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Dec 13 13:31:47.391833 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Dec 13 13:31:47.397914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:31:47.401951 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:31:47.411622 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:31:47.462596 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:31:47.463555 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:31:47.697882 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:31:47.705558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:31:47.724583 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Dec 13 13:31:47.724607 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Dec 13 13:31:47.731332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:31:47.937445 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:31:47.990407 kernel: loop1: detected capacity change from 0 to 138184 Dec 13 13:31:48.452414 kernel: loop2: detected capacity change from 0 to 211296 Dec 13 13:31:48.494422 kernel: loop3: detected capacity change from 0 to 28304 Dec 13 13:31:48.598734 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:31:48.610753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:31:48.631363 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Dec 13 13:31:48.893415 kernel: loop4: detected capacity change from 0 to 141000 Dec 13 13:31:48.910406 kernel: loop5: detected capacity change from 0 to 138184 Dec 13 13:31:48.925406 kernel: loop6: detected capacity change from 0 to 211296 Dec 13 13:31:48.931845 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:31:48.957426 kernel: loop7: detected capacity change from 0 to 28304 Dec 13 13:31:48.945893 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:31:48.977627 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 13 13:31:48.980550 (sd-merge)[1322]: Merged extensions into '/usr'. Dec 13 13:31:49.022963 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:31:49.023687 systemd[1]: Reloading requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:31:49.023824 systemd[1]: Reloading... 
Dec 13 13:31:49.148487 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1344) Dec 13 13:31:49.148597 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 13:31:49.157433 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 13:31:49.163400 kernel: hv_vmbus: registering driver hv_balloon Dec 13 13:31:49.170425 zram_generator::config[1377]: No configuration found. Dec 13 13:31:49.182417 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1344) Dec 13 13:31:49.214403 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 13:31:49.214500 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 13:31:49.221062 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 13:31:49.232796 kernel: Console: switching to colour dummy device 80x25 Dec 13 13:31:49.232880 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:31:49.494427 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1339) Dec 13 13:31:49.632444 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Dec 13 13:31:49.659602 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:31:49.759445 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 13:31:49.763125 systemd[1]: Reloading finished in 737 ms. Dec 13 13:31:49.796136 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:31:49.837662 systemd[1]: Starting ensure-sysext.service... Dec 13 13:31:49.842698 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:31:49.858669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:31:49.865690 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:31:49.871663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:31:49.897925 systemd[1]: Reloading requested from client PID 1505 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:31:49.897951 systemd[1]: Reloading... Dec 13 13:31:49.923821 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:31:49.924237 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:31:49.932099 systemd-tmpfiles[1507]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:31:49.932570 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Dec 13 13:31:49.934140 systemd-tmpfiles[1507]: ACLs are not supported, ignoring. Dec 13 13:31:49.992219 systemd-tmpfiles[1507]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:31:49.992236 systemd-tmpfiles[1507]: Skipping /boot Dec 13 13:31:50.002406 zram_generator::config[1546]: No configuration found. Dec 13 13:31:50.038092 systemd-tmpfiles[1507]: Detected autofs mount point /boot during canonicalization of boot. 
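The reload pass flags docker.socket line 6 for pointing ListenStream= at the legacy /var/run/docker.sock path. A minimal sketch of a drop-in that would silence the warning, assuming an otherwise stock docker.socket (the drop-in name is illustrative; clearing the list before re-adding is standard systemd list semantics):

mkdir -p /etc/systemd/system/docker.socket.d
tee /etc/systemd/system/docker.socket.d/10-runpath.conf >/dev/null <<'EOF'
[Socket]
# Clear the inherited ListenStream list, then point it below /run.
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload
systemctl restart docker.socket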
Dec 13 13:31:50.038112 systemd-tmpfiles[1507]: Skipping /boot Dec 13 13:31:50.175254 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:31:50.216175 systemd-networkd[1333]: lo: Link UP Dec 13 13:31:50.216188 systemd-networkd[1333]: lo: Gained carrier Dec 13 13:31:50.219987 systemd-networkd[1333]: Enumeration completed Dec 13 13:31:50.220541 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:31:50.220602 systemd-networkd[1333]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:31:50.278415 kernel: mlx5_core 88ab:00:02.0 enP34987s1: Link up Dec 13 13:31:50.279049 systemd[1]: Reloading finished in 380 ms. Dec 13 13:31:50.301408 kernel: hv_netvsc 000d3ab7-fe4d-000d-3ab7-fe4d000d3ab7 eth0: Data path switched to VF: enP34987s1 Dec 13 13:31:50.302686 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:31:50.303720 systemd-networkd[1333]: enP34987s1: Link UP Dec 13 13:31:50.303833 systemd-networkd[1333]: eth0: Link UP Dec 13 13:31:50.303837 systemd-networkd[1333]: eth0: Gained carrier Dec 13 13:31:50.303857 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:31:50.306224 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:31:50.309579 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:31:50.309762 systemd-networkd[1333]: enP34987s1: Gained carrier Dec 13 13:31:50.317832 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:31:50.322647 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:31:50.332936 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:31:50.347664 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:31:50.352084 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:31:50.358359 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:31:50.367281 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:31:50.377215 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:31:50.385169 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:31:50.396891 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:31:50.404542 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:31:50.404812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:31:50.408484 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:31:50.420710 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:31:50.430634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
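systemd-networkd notes that eth0 only matched the catch-all zz-default.network "based on potentially unpredictable interface name", and hv_netvsc reports the data path moving to the VF enP34987s1. A hedged sketch of a more specific match by MAC and driver instead of by name (file name and DHCP choice are illustrative; the MAC is the one this log later reports for eth0, and matching on Driver= keeps the rule off the mlx5 VF, which carries the same MAC):

# Illustrative /etc/systemd/network/10-azure-eth0.network; it sorts before
# /usr/lib/systemd/network/zz-default.network and therefore wins the match.
tee /etc/systemd/network/10-azure-eth0.network >/dev/null <<'EOF'
[Match]
MACAddress=00:0d:3a:b7:fe:4d
Driver=hv_netvsc
[Network]
DHCP=yes
EOF
networkctl reload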
Dec 13 13:31:50.434540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:31:50.434700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:31:50.438177 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:31:50.439181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:31:50.450235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:31:50.450822 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:31:50.454787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:31:50.454960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:31:50.469970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:31:50.471772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:31:50.471994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:31:50.472140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:31:50.472298 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:31:50.473486 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:31:50.476155 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:31:50.480506 systemd-networkd[1333]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 13:31:50.483034 lvm[1614]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:31:50.499976 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:31:50.500340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:31:50.509003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:31:50.520646 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:31:50.527572 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:31:50.534626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:31:50.542901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:31:50.543004 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:31:50.546438 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:31:50.547422 systemd[1]: Finished ensure-sysext.service. 
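The modprobe@dm_mod, modprobe@efi_pstore and modprobe@loop units above are instances of systemd's modprobe@.service template, which simply loads the kernel module named by the instance suffix. For reference, the same mechanism from the command line (standard tooling, nothing host-specific):

systemctl cat modprobe@.service        # the template behind these instances
systemctl start modprobe@loop.service  # roughly equivalent to 'modprobe loop'
lsmod | grep -E '^(loop|dm_mod|efi_pstore)'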
Dec 13 13:31:50.549943 systemd-resolved[1618]: Positive Trust Anchors: Dec 13 13:31:50.549984 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:31:50.550244 systemd-resolved[1618]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:31:50.550286 systemd-resolved[1618]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:31:50.553358 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:31:50.556722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:31:50.556892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:31:50.558193 systemd-resolved[1618]: Using system hostname 'ci-4186.0.0-a-a6ca590029'. Dec 13 13:31:50.559877 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:31:50.560047 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:31:50.562738 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:31:50.565863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:31:50.566018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:31:50.569202 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:31:50.569355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:31:50.578189 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:31:50.581139 systemd[1]: Reached target network.target - Network. Dec 13 13:31:50.583535 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:31:50.590718 augenrules[1658]: No rules Dec 13 13:31:50.593565 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:31:50.596900 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:31:50.600563 lvm[1656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:31:50.596959 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:31:50.597446 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:31:50.597642 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:31:50.627193 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:31:51.107307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:31:51.111422 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
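systemd-resolved prints its built-in DNSSEC positive trust anchor (the root KSK, key tag 20326) and the negative trust anchors for private and reverse zones, then settles on the system hostname. The same state can be inspected at runtime with stock resolvectl commands (nothing host-specific assumed):

resolvectl status          # per-link DNS servers, DNSSEC mode, search domains
resolvectl query localhost # exercise the stub resolver end to end
resolvectl statistics      # cache hits and DNSSEC verdict counters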
Dec 13 13:31:51.667625 systemd-networkd[1333]: eth0: Gained IPv6LL Dec 13 13:31:51.671106 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:31:51.675422 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:31:52.115522 systemd-networkd[1333]: enP34987s1: Gained IPv6LL Dec 13 13:31:54.583362 ldconfig[1284]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:31:54.595861 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:31:54.606656 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:31:54.618284 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:31:54.621597 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:31:54.624568 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:31:54.628013 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:31:54.631440 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:31:54.634240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:31:54.637536 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:31:54.640816 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:31:54.640864 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:31:54.643542 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:31:54.646843 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:31:54.651029 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:31:54.666289 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:31:54.669878 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:31:54.672741 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:31:54.675174 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:31:54.677598 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:31:54.677625 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:31:54.684473 systemd[1]: Starting chronyd.service - NTP client/server... Dec 13 13:31:54.690497 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:31:54.697630 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:31:54.703624 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:31:54.708585 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:31:54.714424 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:31:54.715545 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:31:54.715584 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). 
Dec 13 13:31:54.717214 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 13 13:31:54.720559 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 13 13:31:54.722512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:31:54.733615 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:31:54.739110 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:31:54.744498 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:31:54.751192 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:31:54.763615 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:31:54.775081 KVP[1678]: KVP starting; pid is:1678 Dec 13 13:31:54.773571 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:31:54.776788 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:31:54.779615 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:31:54.784608 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:31:54.790398 jq[1676]: false Dec 13 13:31:54.790507 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:31:54.804524 jq[1692]: true Dec 13 13:31:54.804878 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:31:54.806440 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:31:54.836413 kernel: hv_utils: KVP IC version 4.0 Dec 13 13:31:54.834641 KVP[1678]: KVP LIC Version: 3.1 Dec 13 13:31:54.838823 (chronyd)[1672]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Dec 13 13:31:54.856452 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:31:54.856700 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
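The (chronyd) line notes that chronyd.service references an $OPTIONS environment variable that is unset, so it expands to an empty string; this is harmless. A hedged sketch of the usual way to supply it without editing the shipped unit, via a drop-in (the drop-in name and the example flag are assumptions, not taken from this host):

mkdir -p /etc/systemd/system/chronyd.service.d
tee /etc/systemd/system/chronyd.service.d/10-options.conf >/dev/null <<'EOF'
[Service]
# Example only: restrict chronyd to IPv4.
Environment=OPTIONS=-4
EOF
systemctl daemon-reload
systemctl restart chronyd.service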
Dec 13 13:31:54.865823 (ntainerd)[1699]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:31:54.869087 chronyd[1712]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Dec 13 13:31:54.872443 extend-filesystems[1677]: Found loop4 Dec 13 13:31:54.875142 extend-filesystems[1677]: Found loop5 Dec 13 13:31:54.878640 extend-filesystems[1677]: Found loop6 Dec 13 13:31:54.882878 extend-filesystems[1677]: Found loop7 Dec 13 13:31:54.882878 extend-filesystems[1677]: Found sda Dec 13 13:31:54.882878 extend-filesystems[1677]: Found sda1 Dec 13 13:31:54.882878 extend-filesystems[1677]: Found sda2 Dec 13 13:31:54.882878 extend-filesystems[1677]: Found sda3 Dec 13 13:31:54.882878 extend-filesystems[1677]: Found usr Dec 13 13:31:54.882878 extend-filesystems[1677]: Found sda4 Dec 13 13:31:54.882878 extend-filesystems[1677]: Found sda6 Dec 13 13:31:54.882878 extend-filesystems[1677]: Found sda7 Dec 13 13:31:54.882878 extend-filesystems[1677]: Found sda9 Dec 13 13:31:54.882878 extend-filesystems[1677]: Checking size of /dev/sda9 Dec 13 13:31:54.959951 chronyd[1712]: Timezone right/UTC failed leap second check, ignoring Dec 13 13:31:54.969151 update_engine[1690]: I20241213 13:31:54.923769 1690 main.cc:92] Flatcar Update Engine starting Dec 13 13:31:54.890821 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:31:54.960226 chronyd[1712]: Loaded seccomp filter (level 2) Dec 13 13:31:54.891075 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:31:54.961759 systemd[1]: Started chronyd.service - NTP client/server. Dec 13 13:31:54.983715 jq[1698]: true Dec 13 13:31:54.996328 extend-filesystems[1677]: Old size kept for /dev/sda9 Dec 13 13:31:54.996328 extend-filesystems[1677]: Found sr0 Dec 13 13:31:54.988909 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:31:54.990490 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:31:55.027982 tar[1697]: linux-amd64/helm Dec 13 13:31:54.997891 systemd-logind[1688]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:31:54.999979 systemd-logind[1688]: New seat seat0. Dec 13 13:31:55.000944 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:31:55.019765 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:31:55.073773 dbus-daemon[1675]: [system] SELinux support is enabled Dec 13 13:31:55.073995 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:31:55.086587 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:31:55.086825 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:31:55.091339 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:31:55.091359 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:31:55.108765 dbus-daemon[1675]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 13:31:55.113600 systemd[1]: Started update-engine.service - Update Engine. 
Dec 13 13:31:55.135775 update_engine[1690]: I20241213 13:31:55.127859 1690 update_check_scheduler.cc:74] Next update check in 5m1s Dec 13 13:31:55.130908 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:31:55.211095 bash[1745]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:31:55.204741 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:31:55.223109 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 13:31:55.267748 coreos-metadata[1674]: Dec 13 13:31:55.267 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 13:31:55.277739 coreos-metadata[1674]: Dec 13 13:31:55.277 INFO Fetch successful Dec 13 13:31:55.277739 coreos-metadata[1674]: Dec 13 13:31:55.277 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 13:31:55.287703 coreos-metadata[1674]: Dec 13 13:31:55.284 INFO Fetch successful Dec 13 13:31:55.287703 coreos-metadata[1674]: Dec 13 13:31:55.287 INFO Fetching http://168.63.129.16/machine/72646eef-d92a-4a19-8d5e-005ea22e29e7/4ecc69c0%2Df6db%2D4891%2D9a13%2Da227586d7d0d.%5Fci%2D4186.0.0%2Da%2Da6ca590029?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 13:31:55.290925 coreos-metadata[1674]: Dec 13 13:31:55.290 INFO Fetch successful Dec 13 13:31:55.290925 coreos-metadata[1674]: Dec 13 13:31:55.290 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 13:31:55.308947 coreos-metadata[1674]: Dec 13 13:31:55.307 INFO Fetch successful Dec 13 13:31:55.353238 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1741) Dec 13 13:31:55.430781 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:31:55.459717 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:31:55.541827 locksmithd[1751]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:31:55.842663 sshd_keygen[1715]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:31:55.877565 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:31:55.886850 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:31:55.891650 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 13 13:31:55.912656 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:31:55.913600 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:31:55.932723 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:31:55.966459 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:31:55.980451 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:31:55.993847 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:31:55.998618 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:31:56.008514 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 13 13:31:56.055251 tar[1697]: linux-amd64/LICENSE Dec 13 13:31:56.055251 tar[1697]: linux-amd64/README.md Dec 13 13:31:56.068551 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
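coreos-metadata first talks to the Azure wire server at 168.63.129.16 for the goal state and then asks IMDS at 169.254.169.254 for the vmSize. Both calls can be reproduced by hand; the only IMDS requirement is the Metadata:true header (the curl invocations below mirror the URLs in the log and are standard Azure metadata usage):

# Instance size, same endpoint and api-version coreos-metadata used above.
curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"

# Wire server version probe, as in the first fetch above.
curl -s "http://168.63.129.16/?comp=versions"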
Dec 13 13:31:56.351724 containerd[1699]: time="2024-12-13T13:31:56.351631100Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:31:56.382888 containerd[1699]: time="2024-12-13T13:31:56.382826600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385309 containerd[1699]: time="2024-12-13T13:31:56.384863200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385309 containerd[1699]: time="2024-12-13T13:31:56.384903300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:31:56.385309 containerd[1699]: time="2024-12-13T13:31:56.384926900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:31:56.385309 containerd[1699]: time="2024-12-13T13:31:56.385097900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:31:56.385309 containerd[1699]: time="2024-12-13T13:31:56.385117200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385309 containerd[1699]: time="2024-12-13T13:31:56.385185600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385309 containerd[1699]: time="2024-12-13T13:31:56.385200800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385617 containerd[1699]: time="2024-12-13T13:31:56.385468600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385617 containerd[1699]: time="2024-12-13T13:31:56.385494800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385617 containerd[1699]: time="2024-12-13T13:31:56.385515600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385617 containerd[1699]: time="2024-12-13T13:31:56.385531300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385758 containerd[1699]: time="2024-12-13T13:31:56.385645600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:56.385909 containerd[1699]: time="2024-12-13T13:31:56.385882400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:31:56.386054 containerd[1699]: time="2024-12-13T13:31:56.386029100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:31:56.386054 containerd[1699]: time="2024-12-13T13:31:56.386049300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:31:56.386170 containerd[1699]: time="2024-12-13T13:31:56.386150400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:31:56.386230 containerd[1699]: time="2024-12-13T13:31:56.386211300Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.403522600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.403598900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.403623200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.403646300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.403666000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.403848400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.404149400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.404273500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:31:56.404284 containerd[1699]: time="2024-12-13T13:31:56.404297400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404317900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404335500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404351200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404367100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404404200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404424500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404449500Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404468000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404485400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404512700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404533400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404550200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404570600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.404736 containerd[1699]: time="2024-12-13T13:31:56.404588000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404605000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404622100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404641100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404659800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404684700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404702400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404720300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404736600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404755700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404795100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404816500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404834600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404891200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:31:56.405165 containerd[1699]: time="2024-12-13T13:31:56.404918000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:31:56.405619 containerd[1699]: time="2024-12-13T13:31:56.404932900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:31:56.405619 containerd[1699]: time="2024-12-13T13:31:56.404953200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:31:56.405619 containerd[1699]: time="2024-12-13T13:31:56.404968100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405619 containerd[1699]: time="2024-12-13T13:31:56.404984900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:31:56.405619 containerd[1699]: time="2024-12-13T13:31:56.404998700Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:31:56.405619 containerd[1699]: time="2024-12-13T13:31:56.405016300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:31:56.405824 containerd[1699]: time="2024-12-13T13:31:56.405561100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:31:56.405824 containerd[1699]: time="2024-12-13T13:31:56.405632200Z" level=info msg="Connect containerd service" Dec 13 13:31:56.405824 containerd[1699]: time="2024-12-13T13:31:56.405682300Z" level=info msg="using legacy CRI server" Dec 13 13:31:56.405824 containerd[1699]: time="2024-12-13T13:31:56.405693300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:31:56.406108 containerd[1699]: time="2024-12-13T13:31:56.405863600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:31:56.407245 containerd[1699]: time="2024-12-13T13:31:56.406694000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:31:56.407245 containerd[1699]: time="2024-12-13T13:31:56.407042100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:31:56.407245 containerd[1699]: time="2024-12-13T13:31:56.407104100Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:31:56.407245 containerd[1699]: time="2024-12-13T13:31:56.407168200Z" level=info msg="Start subscribing containerd event" Dec 13 13:31:56.407245 containerd[1699]: time="2024-12-13T13:31:56.407219900Z" level=info msg="Start recovering state" Dec 13 13:31:56.407879 containerd[1699]: time="2024-12-13T13:31:56.407562300Z" level=info msg="Start event monitor" Dec 13 13:31:56.407879 containerd[1699]: time="2024-12-13T13:31:56.407586700Z" level=info msg="Start snapshots syncer" Dec 13 13:31:56.407879 containerd[1699]: time="2024-12-13T13:31:56.407601000Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:31:56.407879 containerd[1699]: time="2024-12-13T13:31:56.407610500Z" level=info msg="Start streaming server" Dec 13 13:31:56.407879 containerd[1699]: time="2024-12-13T13:31:56.407684400Z" level=info msg="containerd successfully booted in 0.057085s" Dec 13 13:31:56.407804 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:31:56.516838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:31:56.520850 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:31:56.524024 systemd[1]: Startup finished in 871ms (firmware) + 32.245s (loader) + 1.057s (kernel) + 12.427s (initrd) + 13.448s (userspace) = 1min 50ms. 
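The CRI plugin dump above shows what containerd booted with: the overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, and an expected warning that /etc/cni/net.d holds no CNI config yet. As a reference point, a hedged sketch of how the runc cgroup setting is written in a containerd 1.7 config.toml (Flatcar layers its containerd configuration through the sysext, so the stock path below is illustrative):

# Print the full default config to compare against the dump above.
containerd config default | head -n 60

# TOML fragment corresponding to "Options:map[SystemdCgroup:true]" in the log
# (normally placed in /etc/containerd/config.toml or an equivalent drop-in):
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#     runtime_type = "io.containerd.runc.v2"
#     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#       SystemdCgroup = true
# The CNI warning clears once a conflist is installed under /etc/cni/net.d.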
Dec 13 13:31:56.532766 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:31:56.557188 agetty[1842]: failed to open credentials directory Dec 13 13:31:56.558082 agetty[1840]: failed to open credentials directory Dec 13 13:31:56.755993 login[1840]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:31:56.757390 login[1842]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:31:56.771143 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:31:56.783712 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:31:56.788731 systemd-logind[1688]: New session 1 of user core. Dec 13 13:31:56.794595 systemd-logind[1688]: New session 2 of user core. Dec 13 13:31:56.800849 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:31:56.809235 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:31:56.844872 (systemd)[1867]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:31:57.025497 systemd[1867]: Queued start job for default target default.target. Dec 13 13:31:57.034896 systemd[1867]: Created slice app.slice - User Application Slice. Dec 13 13:31:57.034941 systemd[1867]: Reached target paths.target - Paths. Dec 13 13:31:57.034966 systemd[1867]: Reached target timers.target - Timers. Dec 13 13:31:57.038024 systemd[1867]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:31:57.051324 systemd[1867]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:31:57.051476 systemd[1867]: Reached target sockets.target - Sockets. Dec 13 13:31:57.051496 systemd[1867]: Reached target basic.target - Basic System. Dec 13 13:31:57.051539 systemd[1867]: Reached target default.target - Main User Target. Dec 13 13:31:57.051574 systemd[1867]: Startup finished in 196ms. Dec 13 13:31:57.051784 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:31:57.058577 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:31:57.059802 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:31:57.341036 kubelet[1856]: E1213 13:31:57.340885 1856 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:31:57.343703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:31:57.343910 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
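kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a node like this the file is normally written later by whatever bootstraps the cluster (typically kubeadm), after which the unit restarts cleanly. Purely to illustrate the format kubelet is looking for, a minimal hypothetical KubeletConfiguration: every value below is an assumption, not something recovered from this host.

# Hypothetical minimal /var/lib/kubelet/config.yaml; kubeadm generates the real one.
mkdir -p /var/lib/kubelet
tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches SystemdCgroup=true in the containerd dump above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF
systemctl restart kubelet.service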
Dec 13 13:31:57.998717 waagent[1843]: 2024-12-13T13:31:57.998607Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Dec 13 13:31:58.002155 waagent[1843]: 2024-12-13T13:31:58.002085Z INFO Daemon Daemon OS: flatcar 4186.0.0 Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.003341Z INFO Daemon Daemon Python: 3.11.10 Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.003999Z INFO Daemon Daemon Run daemon Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.004487Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.0.0' Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.005226Z INFO Daemon Daemon Using waagent for provisioning Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.006256Z INFO Daemon Daemon Activate resource disk Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.007073Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.012603Z INFO Daemon Daemon Found device: None Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.013521Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.014002Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.015519Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 13:31:58.038206 waagent[1843]: 2024-12-13T13:31:58.016092Z INFO Daemon Daemon Running default provisioning handler Dec 13 13:31:58.041944 waagent[1843]: 2024-12-13T13:31:58.041869Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 13 13:31:58.048973 waagent[1843]: 2024-12-13T13:31:58.048913Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 13:31:58.055332 waagent[1843]: 2024-12-13T13:31:58.055266Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 13:31:58.059727 waagent[1843]: 2024-12-13T13:31:58.056336Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 13:31:58.143568 waagent[1843]: 2024-12-13T13:31:58.143455Z INFO Daemon Daemon Successfully mounted dvd Dec 13 13:31:58.175035 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 13:31:58.188609 waagent[1843]: 2024-12-13T13:31:58.175761Z INFO Daemon Daemon Detect protocol endpoint Dec 13 13:31:58.188609 waagent[1843]: 2024-12-13T13:31:58.177062Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 13:31:58.188609 waagent[1843]: 2024-12-13T13:31:58.178218Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 13:31:58.188609 waagent[1843]: 2024-12-13T13:31:58.179195Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 13:31:58.188609 waagent[1843]: 2024-12-13T13:31:58.180217Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 13:31:58.188609 waagent[1843]: 2024-12-13T13:31:58.181014Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 13:31:58.207012 waagent[1843]: 2024-12-13T13:31:58.206951Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 13:31:58.210593 waagent[1843]: 2024-12-13T13:31:58.210559Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 13:31:58.215774 waagent[1843]: 2024-12-13T13:31:58.211650Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 13:31:58.407402 waagent[1843]: 2024-12-13T13:31:58.407223Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 13:31:58.413919 waagent[1843]: 2024-12-13T13:31:58.408856Z INFO Daemon Daemon Forcing an update of the goal state. Dec 13 13:31:58.416758 waagent[1843]: 2024-12-13T13:31:58.416704Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 13:31:58.430571 waagent[1843]: 2024-12-13T13:31:58.430517Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Dec 13 13:31:58.447951 waagent[1843]: 2024-12-13T13:31:58.432284Z INFO Daemon Dec 13 13:31:58.447951 waagent[1843]: 2024-12-13T13:31:58.434190Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 97a8ab80-1e25-4279-95a2-eab2b4008eb3 eTag: 16534694716885720654 source: Fabric] Dec 13 13:31:58.447951 waagent[1843]: 2024-12-13T13:31:58.435862Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 13 13:31:58.447951 waagent[1843]: 2024-12-13T13:31:58.436563Z INFO Daemon Dec 13 13:31:58.447951 waagent[1843]: 2024-12-13T13:31:58.436977Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 13 13:31:58.451020 waagent[1843]: 2024-12-13T13:31:58.450976Z INFO Daemon Daemon Downloading artifacts profile blob Dec 13 13:31:58.531963 waagent[1843]: 2024-12-13T13:31:58.531864Z INFO Daemon Downloaded certificate {'thumbprint': 'BB3F992D63E18AE4ECDC385B048195AB54CFF5CE', 'hasPrivateKey': False} Dec 13 13:31:58.537367 waagent[1843]: 2024-12-13T13:31:58.537304Z INFO Daemon Downloaded certificate {'thumbprint': 'B1C00F7BE96677C25A116D3A5AE36DFB0CDCC147', 'hasPrivateKey': True} Dec 13 13:31:58.544213 waagent[1843]: 2024-12-13T13:31:58.538784Z INFO Daemon Fetch goal state completed Dec 13 13:31:58.547324 waagent[1843]: 2024-12-13T13:31:58.547276Z INFO Daemon Daemon Starting provisioning Dec 13 13:31:58.554588 waagent[1843]: 2024-12-13T13:31:58.548524Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 13:31:58.554588 waagent[1843]: 2024-12-13T13:31:58.549446Z INFO Daemon Daemon Set hostname [ci-4186.0.0-a-a6ca590029] Dec 13 13:31:58.566972 waagent[1843]: 2024-12-13T13:31:58.566892Z INFO Daemon Daemon Publish hostname [ci-4186.0.0-a-a6ca590029] Dec 13 13:31:58.577209 waagent[1843]: 2024-12-13T13:31:58.568511Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 13:31:58.577209 waagent[1843]: 2024-12-13T13:31:58.569360Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 13:31:58.595278 systemd-networkd[1333]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:31:58.595287 systemd-networkd[1333]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 13:31:58.595338 systemd-networkd[1333]: eth0: DHCP lease lost Dec 13 13:31:58.596729 waagent[1843]: 2024-12-13T13:31:58.596622Z INFO Daemon Daemon Create user account if not exists Dec 13 13:31:58.613637 waagent[1843]: 2024-12-13T13:31:58.597974Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 13:31:58.613637 waagent[1843]: 2024-12-13T13:31:58.598815Z INFO Daemon Daemon Configure sudoer Dec 13 13:31:58.613637 waagent[1843]: 2024-12-13T13:31:58.599548Z INFO Daemon Daemon Configure sshd Dec 13 13:31:58.613637 waagent[1843]: 2024-12-13T13:31:58.600330Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 13 13:31:58.613637 waagent[1843]: 2024-12-13T13:31:58.601096Z INFO Daemon Daemon Deploy ssh public key. Dec 13 13:31:58.614455 systemd-networkd[1333]: eth0: DHCPv6 lease lost Dec 13 13:31:58.641476 systemd-networkd[1333]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 13:31:59.705517 waagent[1843]: 2024-12-13T13:31:59.705440Z INFO Daemon Daemon Provisioning complete Dec 13 13:31:59.722970 waagent[1843]: 2024-12-13T13:31:59.722890Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 13:31:59.730566 waagent[1843]: 2024-12-13T13:31:59.724146Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 13:31:59.730566 waagent[1843]: 2024-12-13T13:31:59.725088Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Dec 13 13:31:59.850252 waagent[1926]: 2024-12-13T13:31:59.850138Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Dec 13 13:31:59.850731 waagent[1926]: 2024-12-13T13:31:59.850314Z INFO ExtHandler ExtHandler OS: flatcar 4186.0.0 Dec 13 13:31:59.850731 waagent[1926]: 2024-12-13T13:31:59.850416Z INFO ExtHandler ExtHandler Python: 3.11.10 Dec 13 13:31:59.891105 waagent[1926]: 2024-12-13T13:31:59.890992Z INFO ExtHandler ExtHandler Distro: flatcar-4186.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 13:31:59.891375 waagent[1926]: 2024-12-13T13:31:59.891311Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 13:31:59.891514 waagent[1926]: 2024-12-13T13:31:59.891462Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 13:31:59.899707 waagent[1926]: 2024-12-13T13:31:59.899647Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 13:31:59.905978 waagent[1926]: 2024-12-13T13:31:59.905924Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 13:31:59.906437 waagent[1926]: 2024-12-13T13:31:59.906368Z INFO ExtHandler Dec 13 13:31:59.906528 waagent[1926]: 2024-12-13T13:31:59.906476Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ab29b937-3219-43fb-8924-89513196d25f eTag: 16534694716885720654 source: Fabric] Dec 13 13:31:59.906844 waagent[1926]: 2024-12-13T13:31:59.906794Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
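During provisioning the daemon reports configuring sshd to disable password-based authentication and enable client keep-alive probing, then deploys the SSH public key. A hedged sketch of the kind of sshd_config directives that description maps to (exact values are assumptions; the agent writes its own snippet, and whether an sshd_config.d drop-in is honoured depends on the shipped sshd_config carrying an Include line):

tee /etc/ssh/sshd_config.d/90-provisioning.conf >/dev/null <<'EOF'
PasswordAuthentication no
KbdInteractiveAuthentication no
ClientAliveInterval 180
EOF
sshd -t                                        # validate the configuration first
systemctl try-reload-or-restart sshd.service   # unit name may differ on socket-activated setups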
Dec 13 13:31:59.907396 waagent[1926]: 2024-12-13T13:31:59.907337Z INFO ExtHandler Dec 13 13:31:59.907490 waagent[1926]: 2024-12-13T13:31:59.907445Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 13:31:59.911365 waagent[1926]: 2024-12-13T13:31:59.911322Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 13:31:59.979276 waagent[1926]: 2024-12-13T13:31:59.979143Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BB3F992D63E18AE4ECDC385B048195AB54CFF5CE', 'hasPrivateKey': False} Dec 13 13:31:59.979685 waagent[1926]: 2024-12-13T13:31:59.979628Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B1C00F7BE96677C25A116D3A5AE36DFB0CDCC147', 'hasPrivateKey': True} Dec 13 13:31:59.980113 waagent[1926]: 2024-12-13T13:31:59.980062Z INFO ExtHandler Fetch goal state completed Dec 13 13:31:59.996743 waagent[1926]: 2024-12-13T13:31:59.996685Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1926 Dec 13 13:31:59.996893 waagent[1926]: 2024-12-13T13:31:59.996847Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 13 13:31:59.998442 waagent[1926]: 2024-12-13T13:31:59.998368Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.0.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 13:31:59.998827 waagent[1926]: 2024-12-13T13:31:59.998778Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 13:32:00.051409 waagent[1926]: 2024-12-13T13:32:00.051329Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 13:32:00.051690 waagent[1926]: 2024-12-13T13:32:00.051628Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 13:32:00.060307 waagent[1926]: 2024-12-13T13:32:00.060157Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 13:32:00.067747 systemd[1]: Reloading requested from client PID 1941 ('systemctl') (unit waagent.service)... Dec 13 13:32:00.067763 systemd[1]: Reloading... Dec 13 13:32:00.141413 zram_generator::config[1974]: No configuration found. Dec 13 13:32:00.278750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:32:00.366856 systemd[1]: Reloading finished in 298 ms. Dec 13 13:32:00.393204 waagent[1926]: 2024-12-13T13:32:00.393103Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Dec 13 13:32:00.401840 systemd[1]: Reloading requested from client PID 2032 ('systemctl') (unit waagent.service)... Dec 13 13:32:00.401857 systemd[1]: Reloading... Dec 13 13:32:00.488422 zram_generator::config[2066]: No configuration found. Dec 13 13:32:00.614502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:32:00.702732 systemd[1]: Reloading finished in 300 ms. 
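ExtHandler notes that AutoUpdate.Enabled is False and that cgroup monitoring is unsupported on this Flatcar build, then sets up the persistent firewall rules through waagent-network-setup.service. Those toggles come from the agent's configuration file; the keys below are standard WALinuxAgent settings, but the path is a guess, since Flatcar ships the file from its OEM partition rather than the usual /etc/waagent.conf:

# Inspect the toggles the agent logged above (path may differ on Flatcar).
grep -E '^(AutoUpdate.Enabled|ResourceDisk.Format|Logs.Verbose)' /etc/waagent.conf
# AutoUpdate.Enabled=n  is what "AutoUpdate.Enabled is set to False" corresponds to;
# ResourceDisk.Format   controls the resource-disk handling logged earlier.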
Dec 13 13:32:00.728398 waagent[1926]: 2024-12-13T13:32:00.728267Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 13 13:32:00.730499 waagent[1926]: 2024-12-13T13:32:00.729572Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 13 13:32:01.683298 waagent[1926]: 2024-12-13T13:32:01.683191Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 13:32:01.684051 waagent[1926]: 2024-12-13T13:32:01.683979Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Dec 13 13:32:01.703744 waagent[1926]: 2024-12-13T13:32:01.703670Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 13:32:01.703889 waagent[1926]: 2024-12-13T13:32:01.703823Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 13:32:01.704020 waagent[1926]: 2024-12-13T13:32:01.703933Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 13:32:01.704447 waagent[1926]: 2024-12-13T13:32:01.704387Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 13:32:01.704925 waagent[1926]: 2024-12-13T13:32:01.704873Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 13:32:01.705017 waagent[1926]: 2024-12-13T13:32:01.704964Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 13:32:01.705145 waagent[1926]: 2024-12-13T13:32:01.705098Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 13:32:01.705436 waagent[1926]: 2024-12-13T13:32:01.705387Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 13:32:01.705770 waagent[1926]: 2024-12-13T13:32:01.705722Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 13:32:01.705770 waagent[1926]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 13:32:01.705770 waagent[1926]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 13:32:01.705770 waagent[1926]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 13:32:01.705770 waagent[1926]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 13:32:01.705770 waagent[1926]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 13:32:01.705770 waagent[1926]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 13:32:01.706040 waagent[1926]: 2024-12-13T13:32:01.705983Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 13:32:01.706123 waagent[1926]: 2024-12-13T13:32:01.706079Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
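The MonitorHandler routing-table dump above is the raw /proc/net/route format: destination and gateway fields are little-endian hexadecimal IPv4 addresses. Decoded, 0108C80A is the gateway 10.200.8.1 acquired earlier, 0008C80A is the on-link 10.200.8.0/24 network, 10813FA8 is the WireServer 168.63.129.16, and FEA9FEA9 is the IMDS address 169.254.169.254. A small decoding sketch:

    # Decode the little-endian hex addresses from the /proc/net/route dump above.
    import socket
    import struct

    def hex_to_ip(h):
        """Convert a /proc/net/route hex field (little-endian) to dotted quad."""
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    for dest, gw in [("00000000", "0108C80A"),   # default route via 10.200.8.1
                     ("0008C80A", "00000000"),   # on-link 10.200.8.0/24
                     ("10813FA8", "0108C80A"),   # WireServer 168.63.129.16
                     ("FEA9FEA9", "0108C80A")]:  # IMDS 169.254.169.254
        print(hex_to_ip(dest), "via", hex_to_ip(gw))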
Dec 13 13:32:01.706407 waagent[1926]: 2024-12-13T13:32:01.706302Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 13:32:01.706532 waagent[1926]: 2024-12-13T13:32:01.706461Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 13:32:01.706832 waagent[1926]: 2024-12-13T13:32:01.706781Z INFO EnvHandler ExtHandler Configure routes Dec 13 13:32:01.707004 waagent[1926]: 2024-12-13T13:32:01.706953Z INFO EnvHandler ExtHandler Gateway:None Dec 13 13:32:01.707949 waagent[1926]: 2024-12-13T13:32:01.707879Z INFO EnvHandler ExtHandler Routes:None Dec 13 13:32:01.713737 waagent[1926]: 2024-12-13T13:32:01.713680Z INFO ExtHandler ExtHandler Dec 13 13:32:01.714233 waagent[1926]: 2024-12-13T13:32:01.714183Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5cf532db-d4f7-4592-9aa1-3198bd222a55 correlation 40f82592-7c33-4a2b-8af5-1b97c3095686 created: 2024-12-13T13:30:45.039421Z] Dec 13 13:32:01.715373 waagent[1926]: 2024-12-13T13:32:01.715315Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 13:32:01.716240 waagent[1926]: 2024-12-13T13:32:01.716194Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Dec 13 13:32:01.752987 waagent[1926]: 2024-12-13T13:32:01.752925Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: DC2DA15E-645F-481C-BF26-CDB657B7005A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 13:32:01.799330 waagent[1926]: 2024-12-13T13:32:01.799250Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 13:32:01.799330 waagent[1926]: Executing ['ip', '-a', '-o', 'link']: Dec 13 13:32:01.799330 waagent[1926]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 13:32:01.799330 waagent[1926]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b7:fe:4d brd ff:ff:ff:ff:ff:ff Dec 13 13:32:01.799330 waagent[1926]: 3: enP34987s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b7:fe:4d brd ff:ff:ff:ff:ff:ff\ altname enP34987p0s2 Dec 13 13:32:01.799330 waagent[1926]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 13:32:01.799330 waagent[1926]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 13:32:01.799330 waagent[1926]: 2: eth0 inet 10.200.8.13/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 13:32:01.799330 waagent[1926]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 13:32:01.799330 waagent[1926]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 13:32:01.799330 waagent[1926]: 2: eth0 inet6 fe80::20d:3aff:feb7:fe4d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 13:32:01.799330 waagent[1926]: 3: enP34987s1 inet6 fe80::20d:3aff:feb7:fe4d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 13:32:01.856153 waagent[1926]: 2024-12-13T13:32:01.856088Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Dec 13 13:32:01.856153 waagent[1926]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:32:01.856153 waagent[1926]: pkts bytes target prot opt in out source destination Dec 13 13:32:01.856153 waagent[1926]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:32:01.856153 waagent[1926]: pkts bytes target prot opt in out source destination Dec 13 13:32:01.856153 waagent[1926]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:32:01.856153 waagent[1926]: pkts bytes target prot opt in out source destination Dec 13 13:32:01.856153 waagent[1926]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 13:32:01.856153 waagent[1926]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 13:32:01.856153 waagent[1926]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 13:32:01.860160 waagent[1926]: 2024-12-13T13:32:01.860092Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 13:32:01.860160 waagent[1926]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:32:01.860160 waagent[1926]: pkts bytes target prot opt in out source destination Dec 13 13:32:01.860160 waagent[1926]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:32:01.860160 waagent[1926]: pkts bytes target prot opt in out source destination Dec 13 13:32:01.860160 waagent[1926]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:32:01.860160 waagent[1926]: pkts bytes target prot opt in out source destination Dec 13 13:32:01.860160 waagent[1926]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 13:32:01.860160 waagent[1926]: 7 579 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 13:32:01.860160 waagent[1926]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 13:32:01.860556 waagent[1926]: 2024-12-13T13:32:01.860484Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 13:32:07.496330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:32:07.501610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:07.595690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:32:07.600338 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:32:08.180865 kubelet[2162]: E1213 13:32:08.180799 2162 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:32:08.185062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:32:08.185284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:32:18.246301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:32:18.251725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:18.355003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
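The two iptables dumps above show the three Azure fabric rules waagent installs for the WireServer address: TCP to 168.63.129.16 port 53 is accepted, traffic owned by UID 0 (which covers the agent itself, running as root) is accepted, and any other new connection to that address is dropped. A hedged sketch of observing that effect from an unprivileged process follows; the probed port and the timeout are illustrative choices.

    # Observe the waagent firewall rules listed above: as a non-root user, a new
    # TCP connection to the WireServer should be dropped (it times out), per the
    # DROP ctstate INVALID,NEW rule. Port 80 is an illustrative choice.
    import socket

    def probe(host="168.63.129.16", port=80, timeout=3):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "connected (running as root, or the rules are not in place)"
        except socket.timeout:
            return "timed out (consistent with the DROP rule for non-root traffic)"
        except OSError as e:
            return f"failed: {e}"
        finally:
            s.close()

    if __name__ == "__main__":
        print(probe())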
Dec 13 13:32:18.368699 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:32:18.744944 chronyd[1712]: Selected source PHC0 Dec 13 13:32:18.924817 kubelet[2178]: E1213 13:32:18.924755 2178 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:32:18.927507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:32:18.927728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:32:22.252942 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:32:22.258674 systemd[1]: Started sshd@0-10.200.8.13:22-10.200.16.10:53520.service - OpenSSH per-connection server daemon (10.200.16.10:53520). Dec 13 13:32:23.095641 sshd[2188]: Accepted publickey for core from 10.200.16.10 port 53520 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:32:23.097510 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:23.101673 systemd-logind[1688]: New session 3 of user core. Dec 13 13:32:23.108549 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:32:23.717681 systemd[1]: Started sshd@1-10.200.8.13:22-10.200.16.10:53524.service - OpenSSH per-connection server daemon (10.200.16.10:53524). Dec 13 13:32:24.429013 sshd[2193]: Accepted publickey for core from 10.200.16.10 port 53524 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:32:24.430825 sshd-session[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:24.435956 systemd-logind[1688]: New session 4 of user core. Dec 13 13:32:24.445557 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:32:24.931984 sshd[2195]: Connection closed by 10.200.16.10 port 53524 Dec 13 13:32:24.933168 sshd-session[2193]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:24.936180 systemd[1]: sshd@1-10.200.8.13:22-10.200.16.10:53524.service: Deactivated successfully. Dec 13 13:32:24.938227 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:32:24.939883 systemd-logind[1688]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:32:24.940827 systemd-logind[1688]: Removed session 4. Dec 13 13:32:25.060351 systemd[1]: Started sshd@2-10.200.8.13:22-10.200.16.10:53526.service - OpenSSH per-connection server daemon (10.200.16.10:53526). Dec 13 13:32:25.776198 sshd[2200]: Accepted publickey for core from 10.200.16.10 port 53526 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:32:25.778037 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:25.783757 systemd-logind[1688]: New session 5 of user core. Dec 13 13:32:25.790782 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:32:26.274066 sshd[2202]: Connection closed by 10.200.16.10 port 53526 Dec 13 13:32:26.274830 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:26.277686 systemd[1]: sshd@2-10.200.8.13:22-10.200.16.10:53526.service: Deactivated successfully. Dec 13 13:32:26.279725 systemd[1]: session-5.scope: Deactivated successfully. 
Dec 13 13:32:26.281196 systemd-logind[1688]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:32:26.282168 systemd-logind[1688]: Removed session 5. Dec 13 13:32:26.402470 systemd[1]: Started sshd@3-10.200.8.13:22-10.200.16.10:53534.service - OpenSSH per-connection server daemon (10.200.16.10:53534). Dec 13 13:32:27.117785 sshd[2207]: Accepted publickey for core from 10.200.16.10 port 53534 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:32:27.119543 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:27.123906 systemd-logind[1688]: New session 6 of user core. Dec 13 13:32:27.133629 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:32:27.618259 sshd[2209]: Connection closed by 10.200.16.10 port 53534 Dec 13 13:32:27.619246 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:27.622577 systemd[1]: sshd@3-10.200.8.13:22-10.200.16.10:53534.service: Deactivated successfully. Dec 13 13:32:27.624793 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:32:27.626256 systemd-logind[1688]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:32:27.627349 systemd-logind[1688]: Removed session 6. Dec 13 13:32:27.747684 systemd[1]: Started sshd@4-10.200.8.13:22-10.200.16.10:53544.service - OpenSSH per-connection server daemon (10.200.16.10:53544). Dec 13 13:32:28.458872 sshd[2214]: Accepted publickey for core from 10.200.16.10 port 53544 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:32:28.460608 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:28.464862 systemd-logind[1688]: New session 7 of user core. Dec 13 13:32:28.474545 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:32:28.996101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 13:32:29.009022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:29.168806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:32:29.173414 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:32:29.221533 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:32:29.222025 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:32:29.624152 sudo[2217]: pam_unix(sudo:session): session closed for user root Dec 13 13:32:29.655126 kubelet[2226]: E1213 13:32:29.655066 2226 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:32:29.657954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:32:29.658151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:32:29.741196 sshd[2216]: Connection closed by 10.200.16.10 port 53544 Dec 13 13:32:29.742305 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:29.745926 systemd[1]: sshd@4-10.200.8.13:22-10.200.16.10:53544.service: Deactivated successfully. 
Dec 13 13:32:29.747914 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:32:29.749434 systemd-logind[1688]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:32:29.750657 systemd-logind[1688]: Removed session 7. Dec 13 13:32:29.874701 systemd[1]: Started sshd@5-10.200.8.13:22-10.200.16.10:53174.service - OpenSSH per-connection server daemon (10.200.16.10:53174). Dec 13 13:32:30.599327 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 53174 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:32:30.600905 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:30.605681 systemd-logind[1688]: New session 8 of user core. Dec 13 13:32:30.615543 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:32:30.986318 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:32:30.986719 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:32:30.990319 sudo[2242]: pam_unix(sudo:session): session closed for user root Dec 13 13:32:30.995471 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:32:30.995816 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:32:31.008755 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:32:31.035262 augenrules[2264]: No rules Dec 13 13:32:31.036720 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:32:31.036959 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:32:31.038311 sudo[2241]: pam_unix(sudo:session): session closed for user root Dec 13 13:32:31.154966 sshd[2240]: Connection closed by 10.200.16.10 port 53174 Dec 13 13:32:31.155719 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:31.159913 systemd[1]: sshd@5-10.200.8.13:22-10.200.16.10:53174.service: Deactivated successfully. Dec 13 13:32:31.161825 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:32:31.162536 systemd-logind[1688]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:32:31.163569 systemd-logind[1688]: Removed session 8. Dec 13 13:32:31.284689 systemd[1]: Started sshd@6-10.200.8.13:22-10.200.16.10:53186.service - OpenSSH per-connection server daemon (10.200.16.10:53186). Dec 13 13:32:31.995555 sshd[2272]: Accepted publickey for core from 10.200.16.10 port 53186 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:32:31.997162 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:32.001859 systemd-logind[1688]: New session 9 of user core. Dec 13 13:32:32.008779 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:32:32.383187 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:32:32.383670 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:32:34.207771 (dockerd)[2293]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:32:34.208004 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 13:32:35.739127 dockerd[2293]: time="2024-12-13T13:32:35.739055433Z" level=info msg="Starting up" Dec 13 13:32:36.330443 dockerd[2293]: time="2024-12-13T13:32:36.330393747Z" level=info msg="Loading containers: start." Dec 13 13:32:36.624419 kernel: Initializing XFRM netlink socket Dec 13 13:32:36.835767 systemd-networkd[1333]: docker0: Link UP Dec 13 13:32:36.892589 dockerd[2293]: time="2024-12-13T13:32:36.892543180Z" level=info msg="Loading containers: done." Dec 13 13:32:36.978900 dockerd[2293]: time="2024-12-13T13:32:36.978836596Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:32:36.979118 dockerd[2293]: time="2024-12-13T13:32:36.978970799Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:32:36.979118 dockerd[2293]: time="2024-12-13T13:32:36.979108502Z" level=info msg="Daemon has completed initialization" Dec 13 13:32:37.030984 dockerd[2293]: time="2024-12-13T13:32:37.030616905Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:32:37.031326 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:32:37.344512 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 13 13:32:38.961786 containerd[1699]: time="2024-12-13T13:32:38.961737419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 13:32:39.657285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957594807.mount: Deactivated successfully. Dec 13 13:32:39.746059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 13:32:39.753328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:39.922359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:32:39.927187 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:32:40.386852 kubelet[2500]: E1213 13:32:40.386784 2500 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:32:40.389503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:32:40.389718 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:32:40.595493 update_engine[1690]: I20241213 13:32:40.595422 1690 update_attempter.cc:509] Updating boot flags... 
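dockerd above reports "API listen on /run/docker.sock". A minimal sketch of talking to that Engine API over the UNIX socket with nothing but the standard library: it sends a plain HTTP GET to the /_ping endpoint, which the daemon answers with "OK". Access to the socket normally requires root or membership in the docker group.

    # Minimal sketch: ping the Docker Engine API over the UNIX socket the
    # dockerd log above says it is listening on (/run/docker.sock).
    import socket

    def docker_ping(sock_path="/run/docker.sock"):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(b"GET /_ping HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        reply = b"".join(chunks).decode(errors="replace")
        return reply.splitlines()[0], reply.rsplit("\r\n\r\n", 1)[-1]  # status line, body ("OK")

    if __name__ == "__main__":
        print(docker_ping())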
Dec 13 13:32:40.687404 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2527) Dec 13 13:32:40.936997 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2531) Dec 13 13:32:42.501258 containerd[1699]: time="2024-12-13T13:32:42.501181707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:42.504203 containerd[1699]: time="2024-12-13T13:32:42.504139984Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Dec 13 13:32:42.507482 containerd[1699]: time="2024-12-13T13:32:42.507431969Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:42.511679 containerd[1699]: time="2024-12-13T13:32:42.511629277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:42.512625 containerd[1699]: time="2024-12-13T13:32:42.512589602Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.550806482s" Dec 13 13:32:42.512989 containerd[1699]: time="2024-12-13T13:32:42.512745906Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 13:32:42.535228 containerd[1699]: time="2024-12-13T13:32:42.535181585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 13:32:44.630300 containerd[1699]: time="2024-12-13T13:32:44.630232632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:44.632229 containerd[1699]: time="2024-12-13T13:32:44.632162479Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Dec 13 13:32:44.635687 containerd[1699]: time="2024-12-13T13:32:44.635630064Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:44.641148 containerd[1699]: time="2024-12-13T13:32:44.641092198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:44.642337 containerd[1699]: time="2024-12-13T13:32:44.642166325Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.106940438s" Dec 13 
13:32:44.642337 containerd[1699]: time="2024-12-13T13:32:44.642220126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 13:32:44.665778 containerd[1699]: time="2024-12-13T13:32:44.665744903Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 13:32:46.194067 containerd[1699]: time="2024-12-13T13:32:46.194001225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:46.198165 containerd[1699]: time="2024-12-13T13:32:46.198095825Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Dec 13 13:32:46.203024 containerd[1699]: time="2024-12-13T13:32:46.202983445Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:46.208203 containerd[1699]: time="2024-12-13T13:32:46.208148472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:46.209351 containerd[1699]: time="2024-12-13T13:32:46.209201198Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.543421093s" Dec 13 13:32:46.209351 containerd[1699]: time="2024-12-13T13:32:46.209237799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 13:32:46.232137 containerd[1699]: time="2024-12-13T13:32:46.232096760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:32:47.470936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584822509.mount: Deactivated successfully. 
Dec 13 13:32:47.929852 containerd[1699]: time="2024-12-13T13:32:47.929754040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:47.933112 containerd[1699]: time="2024-12-13T13:32:47.933035421Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Dec 13 13:32:47.938614 containerd[1699]: time="2024-12-13T13:32:47.938574357Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:47.943872 containerd[1699]: time="2024-12-13T13:32:47.943781084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:47.945167 containerd[1699]: time="2024-12-13T13:32:47.944551303Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.712411642s" Dec 13 13:32:47.945167 containerd[1699]: time="2024-12-13T13:32:47.944591004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 13:32:47.967466 containerd[1699]: time="2024-12-13T13:32:47.967415065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:32:48.546530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794845462.mount: Deactivated successfully. 
Dec 13 13:32:49.977928 containerd[1699]: time="2024-12-13T13:32:49.977865555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:49.981538 containerd[1699]: time="2024-12-13T13:32:49.981478047Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 13:32:49.985940 containerd[1699]: time="2024-12-13T13:32:49.985882259Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:49.997882 containerd[1699]: time="2024-12-13T13:32:49.997835764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:49.999408 containerd[1699]: time="2024-12-13T13:32:49.998978194Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.031516827s" Dec 13 13:32:49.999408 containerd[1699]: time="2024-12-13T13:32:49.999018395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 13:32:50.021142 containerd[1699]: time="2024-12-13T13:32:50.021110958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:32:50.496078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 13:32:50.501644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:50.594004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:32:50.598302 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:32:51.168634 kubelet[2755]: E1213 13:32:51.168550 2755 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:32:51.171303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:32:51.171556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:32:51.301702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964130023.mount: Deactivated successfully. 
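All five kubelet.service attempts logged so far fail the same way, on the missing /var/lib/kubelet/config.yaml (that file is normally created later during node bootstrap, for example by kubeadm), and systemd keeps rescheduling the unit on a fixed cadence in the meantime. Computing the gaps between the "Scheduled restart job" timestamps copied from the log shows the spacing, roughly 10.75 s each:

    # Gaps between the kubelet restart attempts above (timestamps taken from the log).
    from datetime import datetime

    stamps = ["13:32:07.496330", "13:32:18.246301", "13:32:28.996101",
              "13:32:39.746059", "13:32:50.496078"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for earlier, later in zip(times, times[1:]):
        print(f"{(later - earlier).total_seconds():.3f} s")   # ~10.750 s each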
Dec 13 13:32:51.328965 containerd[1699]: time="2024-12-13T13:32:51.328911697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:51.331968 containerd[1699]: time="2024-12-13T13:32:51.331829991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Dec 13 13:32:51.339767 containerd[1699]: time="2024-12-13T13:32:51.339714645Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:51.344070 containerd[1699]: time="2024-12-13T13:32:51.344020583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:51.344754 containerd[1699]: time="2024-12-13T13:32:51.344718706Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.323568346s" Dec 13 13:32:51.345637 containerd[1699]: time="2024-12-13T13:32:51.344759407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 13:32:51.367732 containerd[1699]: time="2024-12-13T13:32:51.367685045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 13:32:51.991916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004361404.mount: Deactivated successfully. Dec 13 13:32:54.404299 containerd[1699]: time="2024-12-13T13:32:54.404222468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:54.407743 containerd[1699]: time="2024-12-13T13:32:54.407549055Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Dec 13 13:32:54.413193 containerd[1699]: time="2024-12-13T13:32:54.413138100Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:54.423505 containerd[1699]: time="2024-12-13T13:32:54.423440669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:54.424709 containerd[1699]: time="2024-12-13T13:32:54.424537198Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.056810152s" Dec 13 13:32:54.424709 containerd[1699]: time="2024-12-13T13:32:54.424579299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 13:32:57.799310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
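The containerd pull messages above report both an image size and the wall-clock pull time, so an effective throughput falls out directly, with the caveat that cached layers can make the real network figure lower than the reported size suggests. A two-line check using numbers copied from the log:

    # Back-of-the-envelope throughput for the image pulls logged above, treating
    # the reported image size as the bytes transferred (layer caching may make
    # the true network figure lower).
    pulls = {
        "kube-apiserver:v1.29.12": (35136054, 3.550806482),
        "kube-controller-manager:v1.29.12": (33662844, 2.106940438),
        "etcd:3.5.10-0": (56649232, 3.056810152),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")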
Dec 13 13:32:57.805667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:57.833434 systemd[1]: Reloading requested from client PID 2878 ('systemctl') (unit session-9.scope)... Dec 13 13:32:57.833626 systemd[1]: Reloading... Dec 13 13:32:57.956430 zram_generator::config[2917]: No configuration found. Dec 13 13:32:58.082923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:32:58.169481 systemd[1]: Reloading finished in 335 ms. Dec 13 13:32:58.217560 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:32:58.217659 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 13:32:58.217947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:32:58.223756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:58.452674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:32:58.459036 (kubelet)[2990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:32:59.118435 kubelet[2990]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:32:59.118435 kubelet[2990]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:32:59.118435 kubelet[2990]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:32:59.118943 kubelet[2990]: I1213 13:32:59.118489 2990 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:32:59.676022 kubelet[2990]: I1213 13:32:59.675969 2990 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:32:59.676022 kubelet[2990]: I1213 13:32:59.676009 2990 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:32:59.676360 kubelet[2990]: I1213 13:32:59.676337 2990 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:32:59.700087 kubelet[2990]: E1213 13:32:59.700041 2990 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:32:59.700854 kubelet[2990]: I1213 13:32:59.700816 2990 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:32:59.712187 kubelet[2990]: I1213 13:32:59.712158 2990 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:32:59.713343 kubelet[2990]: I1213 13:32:59.713308 2990 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:32:59.713559 kubelet[2990]: I1213 13:32:59.713534 2990 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:32:59.713719 kubelet[2990]: I1213 13:32:59.713568 2990 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:32:59.713719 kubelet[2990]: I1213 13:32:59.713581 2990 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:32:59.713719 kubelet[2990]: I1213 13:32:59.713711 2990 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:32:59.713846 kubelet[2990]: I1213 13:32:59.713832 2990 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:32:59.713898 kubelet[2990]: I1213 13:32:59.713856 2990 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:32:59.713898 kubelet[2990]: I1213 13:32:59.713887 2990 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:32:59.713970 kubelet[2990]: I1213 13:32:59.713902 2990 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:32:59.715930 kubelet[2990]: W1213 13:32:59.715445 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:32:59.715930 kubelet[2990]: E1213 13:32:59.715505 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:32:59.715930 kubelet[2990]: W1213 13:32:59.715592 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-a6ca590029&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 
13:32:59.715930 kubelet[2990]: E1213 13:32:59.715633 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-a6ca590029&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:32:59.716556 kubelet[2990]: I1213 13:32:59.716275 2990 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:32:59.720371 kubelet[2990]: I1213 13:32:59.719864 2990 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:32:59.720371 kubelet[2990]: W1213 13:32:59.719930 2990 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:32:59.720813 kubelet[2990]: I1213 13:32:59.720794 2990 server.go:1256] "Started kubelet" Dec 13 13:32:59.722266 kubelet[2990]: I1213 13:32:59.722240 2990 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:32:59.728220 kubelet[2990]: E1213 13:32:59.728167 2990 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.0.0-a-a6ca590029.1810bfd56167b0ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-a6ca590029,UID:ci-4186.0.0-a-a6ca590029,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-a6ca590029,},FirstTimestamp:2024-12-13 13:32:59.720765674 +0000 UTC m=+1.257465416,LastTimestamp:2024-12-13 13:32:59.720765674 +0000 UTC m=+1.257465416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-a6ca590029,}" Dec 13 13:32:59.729409 kubelet[2990]: I1213 13:32:59.728978 2990 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:32:59.730021 kubelet[2990]: I1213 13:32:59.730000 2990 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:32:59.730331 kubelet[2990]: I1213 13:32:59.730309 2990 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:32:59.730589 kubelet[2990]: I1213 13:32:59.730573 2990 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:32:59.733436 kubelet[2990]: I1213 13:32:59.732806 2990 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:32:59.734552 kubelet[2990]: E1213 13:32:59.734534 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a6ca590029?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="200ms" Dec 13 13:32:59.735438 kubelet[2990]: I1213 13:32:59.735414 2990 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:32:59.735655 kubelet[2990]: I1213 13:32:59.735633 2990 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 
13:32:59.737268 kubelet[2990]: I1213 13:32:59.737240 2990 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:32:59.737939 kubelet[2990]: W1213 13:32:59.737642 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:32:59.737939 kubelet[2990]: E1213 13:32:59.737691 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:32:59.737939 kubelet[2990]: I1213 13:32:59.737771 2990 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:32:59.739444 kubelet[2990]: I1213 13:32:59.739419 2990 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:32:59.749510 kubelet[2990]: E1213 13:32:59.749484 2990 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:32:59.778082 kubelet[2990]: I1213 13:32:59.778052 2990 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:32:59.778082 kubelet[2990]: I1213 13:32:59.778076 2990 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:32:59.778082 kubelet[2990]: I1213 13:32:59.778100 2990 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:32:59.783830 kubelet[2990]: I1213 13:32:59.783803 2990 policy_none.go:49] "None policy: Start" Dec 13 13:32:59.784447 kubelet[2990]: I1213 13:32:59.784423 2990 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:32:59.784533 kubelet[2990]: I1213 13:32:59.784451 2990 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:32:59.792712 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:32:59.804077 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:32:59.807293 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:32:59.815229 kubelet[2990]: I1213 13:32:59.814459 2990 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:32:59.815229 kubelet[2990]: I1213 13:32:59.814779 2990 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:32:59.817512 kubelet[2990]: E1213 13:32:59.817494 2990 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.0.0-a-a6ca590029\" not found" Dec 13 13:32:59.835011 kubelet[2990]: I1213 13:32:59.834990 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:32:59.835372 kubelet[2990]: E1213 13:32:59.835346 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:32:59.854882 kubelet[2990]: I1213 13:32:59.854851 2990 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 13:32:59.857841 kubelet[2990]: I1213 13:32:59.857740 2990 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:32:59.857841 kubelet[2990]: I1213 13:32:59.857782 2990 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:32:59.857841 kubelet[2990]: I1213 13:32:59.857816 2990 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:32:59.858064 kubelet[2990]: E1213 13:32:59.857873 2990 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 13:32:59.860364 kubelet[2990]: W1213 13:32:59.859973 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:32:59.860364 kubelet[2990]: E1213 13:32:59.860019 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:32:59.935718 kubelet[2990]: E1213 13:32:59.935567 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a6ca590029?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="400ms" Dec 13 13:32:59.958201 kubelet[2990]: I1213 13:32:59.958141 2990 topology_manager.go:215] "Topology Admit Handler" podUID="2c4717f561e0891973efcd0222133670" podNamespace="kube-system" podName="kube-apiserver-ci-4186.0.0-a-a6ca590029" Dec 13 13:32:59.960470 kubelet[2990]: I1213 13:32:59.960320 2990 topology_manager.go:215] "Topology Admit Handler" podUID="bea765bf40d037b2f49f44b7237c321f" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:32:59.962358 kubelet[2990]: I1213 13:32:59.962095 2990 topology_manager.go:215] "Topology Admit Handler" podUID="e659aea3fc0eaf8f64a9170fa28be14e" podNamespace="kube-system" podName="kube-scheduler-ci-4186.0.0-a-a6ca590029" Dec 13 13:32:59.969162 systemd[1]: Created slice kubepods-burstable-pod2c4717f561e0891973efcd0222133670.slice - libcontainer container kubepods-burstable-pod2c4717f561e0891973efcd0222133670.slice. Dec 13 13:32:59.990078 systemd[1]: Created slice kubepods-burstable-podbea765bf40d037b2f49f44b7237c321f.slice - libcontainer container kubepods-burstable-podbea765bf40d037b2f49f44b7237c321f.slice. Dec 13 13:32:59.994987 systemd[1]: Created slice kubepods-burstable-pode659aea3fc0eaf8f64a9170fa28be14e.slice - libcontainer container kubepods-burstable-pode659aea3fc0eaf8f64a9170fa28be14e.slice. 
Dec 13 13:33:00.037614 kubelet[2990]: I1213 13:33:00.037571 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.038046 kubelet[2990]: E1213 13:33:00.038015 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.139867 kubelet[2990]: I1213 13:33:00.139803 2990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c4717f561e0891973efcd0222133670-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-a6ca590029\" (UID: \"2c4717f561e0891973efcd0222133670\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.139867 kubelet[2990]: I1213 13:33:00.139873 2990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.140511 kubelet[2990]: I1213 13:33:00.139910 2990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.140511 kubelet[2990]: I1213 13:33:00.139939 2990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c4717f561e0891973efcd0222133670-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-a6ca590029\" (UID: \"2c4717f561e0891973efcd0222133670\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.140511 kubelet[2990]: I1213 13:33:00.139977 2990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c4717f561e0891973efcd0222133670-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-a6ca590029\" (UID: \"2c4717f561e0891973efcd0222133670\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.140511 kubelet[2990]: I1213 13:33:00.140033 2990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.140511 kubelet[2990]: I1213 13:33:00.140075 2990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.140704 kubelet[2990]: I1213 13:33:00.140113 2990 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.140704 kubelet[2990]: I1213 13:33:00.140148 2990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e659aea3fc0eaf8f64a9170fa28be14e-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-a6ca590029\" (UID: \"e659aea3fc0eaf8f64a9170fa28be14e\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.289673 containerd[1699]: time="2024-12-13T13:33:00.289614583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-a6ca590029,Uid:2c4717f561e0891973efcd0222133670,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:00.293257 containerd[1699]: time="2024-12-13T13:33:00.293219278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-a6ca590029,Uid:bea765bf40d037b2f49f44b7237c321f,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:00.297746 containerd[1699]: time="2024-12-13T13:33:00.297712496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-a6ca590029,Uid:e659aea3fc0eaf8f64a9170fa28be14e,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:00.336621 kubelet[2990]: E1213 13:33:00.336573 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a6ca590029?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="800ms" Dec 13 13:33:00.441065 kubelet[2990]: I1213 13:33:00.441027 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.441491 kubelet[2990]: E1213 13:33:00.441452 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:00.727753 kubelet[2990]: W1213 13:33:00.727583 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:00.727753 kubelet[2990]: E1213 13:33:00.727658 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:01.045399 kubelet[2990]: W1213 13:33:01.045312 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-a6ca590029&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:01.045642 kubelet[2990]: E1213 13:33:01.045427 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-a6ca590029&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:01.073181 kubelet[2990]: W1213 13:33:01.073130 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:01.073181 kubelet[2990]: E1213 13:33:01.073184 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:01.138190 kubelet[2990]: E1213 13:33:01.138153 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a6ca590029?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="1.6s" Dec 13 13:33:01.244474 kubelet[2990]: I1213 13:33:01.244433 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:01.245009 kubelet[2990]: E1213 13:33:01.244922 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:02.154192 kubelet[2990]: W1213 13:33:01.452634 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:02.154192 kubelet[2990]: E1213 13:33:01.452687 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:02.154192 kubelet[2990]: E1213 13:33:01.876390 2990 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:02.739613 kubelet[2990]: E1213 13:33:02.739572 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a6ca590029?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="3.2s" Dec 13 13:33:02.847606 kubelet[2990]: I1213 13:33:02.847550 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:02.847979 kubelet[2990]: E1213 13:33:02.847955 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:02.937015 kubelet[2990]: W1213 13:33:02.936969 2990 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:02.937015 kubelet[2990]: E1213 13:33:02.937020 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:03.565562 kubelet[2990]: W1213 13:33:03.565510 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:03.565562 kubelet[2990]: E1213 13:33:03.565561 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:03.644665 kubelet[2990]: W1213 13:33:03.644615 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-a6ca590029&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:03.644665 kubelet[2990]: E1213 13:33:03.644672 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-a6ca590029&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:04.432151 kubelet[2990]: W1213 13:33:04.432098 2990 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:04.432151 kubelet[2990]: E1213 13:33:04.432150 2990 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:05.142079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050349505.mount: Deactivated successfully. 
Dec 13 13:33:05.172722 containerd[1699]: time="2024-12-13T13:33:05.172658664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:05.186362 containerd[1699]: time="2024-12-13T13:33:05.186301821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 13:33:05.190841 containerd[1699]: time="2024-12-13T13:33:05.190803039Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:05.196049 containerd[1699]: time="2024-12-13T13:33:05.196010676Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:05.205081 containerd[1699]: time="2024-12-13T13:33:05.204923610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:33:05.209217 containerd[1699]: time="2024-12-13T13:33:05.209126420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:05.216569 containerd[1699]: time="2024-12-13T13:33:05.216534214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:05.217315 containerd[1699]: time="2024-12-13T13:33:05.217282333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.927542947s" Dec 13 13:33:05.224311 containerd[1699]: time="2024-12-13T13:33:05.223960608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:33:05.227535 containerd[1699]: time="2024-12-13T13:33:05.227502501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.929705903s" Dec 13 13:33:05.243138 containerd[1699]: time="2024-12-13T13:33:05.242936506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 4.949621426s" Dec 13 13:33:05.940999 kubelet[2990]: E1213 13:33:05.940948 2990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a6ca590029?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="6.4s" 
Dec 13 13:33:06.053013 kubelet[2990]: I1213 13:33:06.052242 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:06.053013 kubelet[2990]: E1213 13:33:06.052689 2990 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:06.126053 kubelet[2990]: E1213 13:33:06.126015 2990 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.13:6443: connect: connection refused Dec 13 13:33:06.324339 containerd[1699]: time="2024-12-13T13:33:06.324244746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:06.325347 containerd[1699]: time="2024-12-13T13:33:06.324738259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:06.325347 containerd[1699]: time="2024-12-13T13:33:06.325161470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:06.327082 containerd[1699]: time="2024-12-13T13:33:06.325319674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:06.327428 containerd[1699]: time="2024-12-13T13:33:06.327122521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:06.327428 containerd[1699]: time="2024-12-13T13:33:06.327187323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:06.327428 containerd[1699]: time="2024-12-13T13:33:06.327212224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:06.327428 containerd[1699]: time="2024-12-13T13:33:06.327316027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:06.329594 containerd[1699]: time="2024-12-13T13:33:06.328955069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:06.329594 containerd[1699]: time="2024-12-13T13:33:06.329028871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:06.329594 containerd[1699]: time="2024-12-13T13:33:06.329437482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:06.331081 containerd[1699]: time="2024-12-13T13:33:06.330904221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:06.367725 systemd[1]: run-containerd-runc-k8s.io-8931e92a4c7d101e7e2accf3be1fb10f41f3433b309c9a98062ad1707eec79b1-runc.xDwboc.mount: Deactivated successfully. 
Dec 13 13:33:06.385546 systemd[1]: Started cri-containerd-63947e2e1d2e811f20aa82bdb48b83561987776b7c80cbd3394a64e28ee3d5cb.scope - libcontainer container 63947e2e1d2e811f20aa82bdb48b83561987776b7c80cbd3394a64e28ee3d5cb. Dec 13 13:33:06.387034 systemd[1]: Started cri-containerd-7485978cff71d681c506ac80c94249a6a5f53cdda00321e5c2590723e7166cfb.scope - libcontainer container 7485978cff71d681c506ac80c94249a6a5f53cdda00321e5c2590723e7166cfb. Dec 13 13:33:06.388855 systemd[1]: Started cri-containerd-8931e92a4c7d101e7e2accf3be1fb10f41f3433b309c9a98062ad1707eec79b1.scope - libcontainer container 8931e92a4c7d101e7e2accf3be1fb10f41f3433b309c9a98062ad1707eec79b1. Dec 13 13:33:06.442022 kubelet[2990]: E1213 13:33:06.441991 2990 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.0.0-a-a6ca590029.1810bfd56167b0ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-a6ca590029,UID:ci-4186.0.0-a-a6ca590029,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-a6ca590029,},FirstTimestamp:2024-12-13 13:32:59.720765674 +0000 UTC m=+1.257465416,LastTimestamp:2024-12-13 13:32:59.720765674 +0000 UTC m=+1.257465416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-a6ca590029,}" Dec 13 13:33:06.459946 containerd[1699]: time="2024-12-13T13:33:06.459517191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-a6ca590029,Uid:bea765bf40d037b2f49f44b7237c321f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7485978cff71d681c506ac80c94249a6a5f53cdda00321e5c2590723e7166cfb\"" Dec 13 13:33:06.475028 containerd[1699]: time="2024-12-13T13:33:06.474920595Z" level=info msg="CreateContainer within sandbox \"7485978cff71d681c506ac80c94249a6a5f53cdda00321e5c2590723e7166cfb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:33:06.476416 containerd[1699]: time="2024-12-13T13:33:06.475236303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-a6ca590029,Uid:2c4717f561e0891973efcd0222133670,Namespace:kube-system,Attempt:0,} returns sandbox id \"63947e2e1d2e811f20aa82bdb48b83561987776b7c80cbd3394a64e28ee3d5cb\"" Dec 13 13:33:06.480998 containerd[1699]: time="2024-12-13T13:33:06.480968254Z" level=info msg="CreateContainer within sandbox \"63947e2e1d2e811f20aa82bdb48b83561987776b7c80cbd3394a64e28ee3d5cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:33:06.488493 containerd[1699]: time="2024-12-13T13:33:06.488461250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-a6ca590029,Uid:e659aea3fc0eaf8f64a9170fa28be14e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8931e92a4c7d101e7e2accf3be1fb10f41f3433b309c9a98062ad1707eec79b1\"" Dec 13 13:33:06.491060 containerd[1699]: time="2024-12-13T13:33:06.491018017Z" level=info msg="CreateContainer within sandbox \"8931e92a4c7d101e7e2accf3be1fb10f41f3433b309c9a98062ad1707eec79b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:33:06.574836 containerd[1699]: time="2024-12-13T13:33:06.574699810Z" level=info msg="CreateContainer within 
sandbox \"7485978cff71d681c506ac80c94249a6a5f53cdda00321e5c2590723e7166cfb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4aacf7e4fdb23a02426c5f98077633a316b639e7a44d3102b7cb968322de4c46\"" Dec 13 13:33:06.578408 containerd[1699]: time="2024-12-13T13:33:06.578346806Z" level=info msg="CreateContainer within sandbox \"63947e2e1d2e811f20aa82bdb48b83561987776b7c80cbd3394a64e28ee3d5cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"88e3cfaab6ace87c116bcf3358674b23f8df30eb8aa592bed6ad6a18ecce53a9\"" Dec 13 13:33:06.578887 containerd[1699]: time="2024-12-13T13:33:06.578665114Z" level=info msg="StartContainer for \"4aacf7e4fdb23a02426c5f98077633a316b639e7a44d3102b7cb968322de4c46\"" Dec 13 13:33:06.586561 containerd[1699]: time="2024-12-13T13:33:06.586521620Z" level=info msg="CreateContainer within sandbox \"8931e92a4c7d101e7e2accf3be1fb10f41f3433b309c9a98062ad1707eec79b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ac34f8c879ca7df98fd714bbe9377341518044a9ec393e26639420e1b4d171b3\"" Dec 13 13:33:06.586882 containerd[1699]: time="2024-12-13T13:33:06.586857129Z" level=info msg="StartContainer for \"88e3cfaab6ace87c116bcf3358674b23f8df30eb8aa592bed6ad6a18ecce53a9\"" Dec 13 13:33:06.593926 containerd[1699]: time="2024-12-13T13:33:06.593888413Z" level=info msg="StartContainer for \"ac34f8c879ca7df98fd714bbe9377341518044a9ec393e26639420e1b4d171b3\"" Dec 13 13:33:06.622775 systemd[1]: Started cri-containerd-4aacf7e4fdb23a02426c5f98077633a316b639e7a44d3102b7cb968322de4c46.scope - libcontainer container 4aacf7e4fdb23a02426c5f98077633a316b639e7a44d3102b7cb968322de4c46. Dec 13 13:33:06.636584 systemd[1]: Started cri-containerd-88e3cfaab6ace87c116bcf3358674b23f8df30eb8aa592bed6ad6a18ecce53a9.scope - libcontainer container 88e3cfaab6ace87c116bcf3358674b23f8df30eb8aa592bed6ad6a18ecce53a9. Dec 13 13:33:06.646868 systemd[1]: Started cri-containerd-ac34f8c879ca7df98fd714bbe9377341518044a9ec393e26639420e1b4d171b3.scope - libcontainer container ac34f8c879ca7df98fd714bbe9377341518044a9ec393e26639420e1b4d171b3. 
Dec 13 13:33:06.721375 containerd[1699]: time="2024-12-13T13:33:06.721312153Z" level=info msg="StartContainer for \"4aacf7e4fdb23a02426c5f98077633a316b639e7a44d3102b7cb968322de4c46\" returns successfully" Dec 13 13:33:06.722201 containerd[1699]: time="2024-12-13T13:33:06.721836767Z" level=info msg="StartContainer for \"88e3cfaab6ace87c116bcf3358674b23f8df30eb8aa592bed6ad6a18ecce53a9\" returns successfully" Dec 13 13:33:06.753553 containerd[1699]: time="2024-12-13T13:33:06.753500496Z" level=info msg="StartContainer for \"ac34f8c879ca7df98fd714bbe9377341518044a9ec393e26639420e1b4d171b3\" returns successfully" Dec 13 13:33:09.272631 kubelet[2990]: E1213 13:33:09.272582 2990 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.0.0-a-a6ca590029" not found Dec 13 13:33:09.663513 kubelet[2990]: E1213 13:33:09.663336 2990 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.0.0-a-a6ca590029" not found Dec 13 13:33:09.721088 kubelet[2990]: I1213 13:33:09.721040 2990 apiserver.go:52] "Watching apiserver" Dec 13 13:33:09.740346 kubelet[2990]: I1213 13:33:09.740291 2990 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:33:09.818472 kubelet[2990]: E1213 13:33:09.817883 2990 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.0.0-a-a6ca590029\" not found" Dec 13 13:33:10.092330 kubelet[2990]: E1213 13:33:10.092284 2990 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.0.0-a-a6ca590029" not found Dec 13 13:33:11.023016 kubelet[2990]: E1213 13:33:11.022962 2990 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.0.0-a-a6ca590029" not found Dec 13 13:33:12.345860 kubelet[2990]: E1213 13:33:12.345745 2990 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.0.0-a-a6ca590029\" not found" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:12.455727 kubelet[2990]: I1213 13:33:12.455682 2990 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:12.465087 kubelet[2990]: I1213 13:33:12.465035 2990 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:13.349862 systemd[1]: Reloading requested from client PID 3263 ('systemctl') (unit session-9.scope)... Dec 13 13:33:13.349880 systemd[1]: Reloading... Dec 13 13:33:13.451423 zram_generator::config[3306]: No configuration found. Dec 13 13:33:13.583245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:13.689437 systemd[1]: Reloading finished in 339 ms. Dec 13 13:33:13.732697 kubelet[2990]: I1213 13:33:13.732654 2990 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:13.732990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:13.737100 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 13 13:33:13.737345 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:13.737420 systemd[1]: kubelet.service: Consumed 1.092s CPU time, 113.6M memory peak, 0B memory swap peak. Dec 13 13:33:13.742715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:13.845612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:13.857893 (kubelet)[3370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:33:13.913305 kubelet[3370]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:13.913305 kubelet[3370]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:33:13.913305 kubelet[3370]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:13.913870 kubelet[3370]: I1213 13:33:13.913365 3370 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:33:13.918180 kubelet[3370]: I1213 13:33:13.918142 3370 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:33:13.918180 kubelet[3370]: I1213 13:33:13.918168 3370 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:33:13.918415 kubelet[3370]: I1213 13:33:13.918396 3370 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:33:13.919914 kubelet[3370]: I1213 13:33:13.919884 3370 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:33:13.922563 kubelet[3370]: I1213 13:33:13.922521 3370 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:13.931429 kubelet[3370]: I1213 13:33:13.931372 3370 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:33:13.931809 kubelet[3370]: I1213 13:33:13.931694 3370 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:33:13.931989 kubelet[3370]: I1213 13:33:13.931919 3370 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:33:13.931989 kubelet[3370]: I1213 13:33:13.931953 3370 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:33:13.931989 kubelet[3370]: I1213 13:33:13.931967 3370 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:33:13.932566 kubelet[3370]: I1213 13:33:13.932009 3370 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:13.932566 kubelet[3370]: I1213 13:33:13.932132 3370 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:33:13.932566 kubelet[3370]: I1213 13:33:13.932150 3370 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:33:13.932566 kubelet[3370]: I1213 13:33:13.932181 3370 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:33:13.932566 kubelet[3370]: I1213 13:33:13.932208 3370 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:33:13.935902 kubelet[3370]: I1213 13:33:13.935474 3370 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:33:13.935902 kubelet[3370]: I1213 13:33:13.935689 3370 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:33:13.937267 kubelet[3370]: I1213 13:33:13.937248 3370 server.go:1256] "Started kubelet" Dec 13 13:33:13.943491 kubelet[3370]: I1213 13:33:13.943310 3370 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:33:13.948824 kubelet[3370]: I1213 13:33:13.948805 3370 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:33:13.949954 kubelet[3370]: I1213 13:33:13.949938 3370 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:33:13.954402 kubelet[3370]: I1213 13:33:13.952996 3370 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Dec 13 13:33:13.954402 kubelet[3370]: I1213 13:33:13.953196 3370 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:33:13.958402 kubelet[3370]: I1213 13:33:13.957626 3370 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:33:13.965857 kubelet[3370]: I1213 13:33:13.964894 3370 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:33:13.965857 kubelet[3370]: I1213 13:33:13.965055 3370 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:33:13.971545 kubelet[3370]: I1213 13:33:13.971521 3370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:33:13.974616 kubelet[3370]: I1213 13:33:13.974597 3370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:33:13.974707 kubelet[3370]: I1213 13:33:13.974637 3370 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:33:13.974707 kubelet[3370]: I1213 13:33:13.974659 3370 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:33:13.974782 kubelet[3370]: E1213 13:33:13.974717 3370 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:33:13.980935 kubelet[3370]: I1213 13:33:13.980909 3370 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:33:13.980935 kubelet[3370]: I1213 13:33:13.980934 3370 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:33:13.981057 kubelet[3370]: I1213 13:33:13.981029 3370 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:33:14.038312 kubelet[3370]: I1213 13:33:14.038269 3370 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:33:14.038312 kubelet[3370]: I1213 13:33:14.038306 3370 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:33:14.038312 kubelet[3370]: I1213 13:33:14.038327 3370 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:14.038616 kubelet[3370]: I1213 13:33:14.038549 3370 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:33:14.038616 kubelet[3370]: I1213 13:33:14.038576 3370 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:33:14.038616 kubelet[3370]: I1213 13:33:14.038584 3370 policy_none.go:49] "None policy: Start" Dec 13 13:33:14.040405 kubelet[3370]: I1213 13:33:14.039347 3370 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:33:14.040405 kubelet[3370]: I1213 13:33:14.039373 3370 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:33:14.040405 kubelet[3370]: I1213 13:33:14.039619 3370 state_mem.go:75] "Updated machine memory state" Dec 13 13:33:14.045284 kubelet[3370]: I1213 13:33:14.044870 3370 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:33:14.045284 kubelet[3370]: I1213 13:33:14.045141 3370 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:33:14.063022 kubelet[3370]: I1213 13:33:14.062990 3370 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.075305 kubelet[3370]: I1213 13:33:14.075277 3370 topology_manager.go:215] "Topology 
Admit Handler" podUID="2c4717f561e0891973efcd0222133670" podNamespace="kube-system" podName="kube-apiserver-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.075471 kubelet[3370]: I1213 13:33:14.075438 3370 topology_manager.go:215] "Topology Admit Handler" podUID="bea765bf40d037b2f49f44b7237c321f" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.075534 kubelet[3370]: I1213 13:33:14.075492 3370 topology_manager.go:215] "Topology Admit Handler" podUID="e659aea3fc0eaf8f64a9170fa28be14e" podNamespace="kube-system" podName="kube-scheduler-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.380069 sudo[3402]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:33:14.380493 sudo[3402]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 13:33:14.400407 kubelet[3370]: I1213 13:33:14.399700 3370 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.400407 kubelet[3370]: I1213 13:33:14.399793 3370 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.408982 kubelet[3370]: W1213 13:33:14.408642 3370 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:33:14.411167 kubelet[3370]: W1213 13:33:14.411033 3370 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:33:14.426614 kubelet[3370]: W1213 13:33:14.426360 3370 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:33:14.494997 kubelet[3370]: I1213 13:33:14.494462 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.494997 kubelet[3370]: I1213 13:33:14.494537 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.494997 kubelet[3370]: I1213 13:33:14.494568 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c4717f561e0891973efcd0222133670-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-a6ca590029\" (UID: \"2c4717f561e0891973efcd0222133670\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.494997 kubelet[3370]: I1213 13:33:14.494615 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c4717f561e0891973efcd0222133670-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-a6ca590029\" (UID: \"2c4717f561e0891973efcd0222133670\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a6ca590029" 
Dec 13 13:33:14.494997 kubelet[3370]: I1213 13:33:14.494653 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c4717f561e0891973efcd0222133670-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-a6ca590029\" (UID: \"2c4717f561e0891973efcd0222133670\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.495331 kubelet[3370]: I1213 13:33:14.494686 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.495331 kubelet[3370]: I1213 13:33:14.494718 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.495331 kubelet[3370]: I1213 13:33:14.494756 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bea765bf40d037b2f49f44b7237c321f-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-a6ca590029\" (UID: \"bea765bf40d037b2f49f44b7237c321f\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.495331 kubelet[3370]: I1213 13:33:14.494785 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e659aea3fc0eaf8f64a9170fa28be14e-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-a6ca590029\" (UID: \"e659aea3fc0eaf8f64a9170fa28be14e\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-a6ca590029" Dec 13 13:33:14.905160 sudo[3402]: pam_unix(sudo:session): session closed for user root Dec 13 13:33:14.932968 kubelet[3370]: I1213 13:33:14.932912 3370 apiserver.go:52] "Watching apiserver" Dec 13 13:33:14.965361 kubelet[3370]: I1213 13:33:14.965319 3370 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:33:15.044998 kubelet[3370]: I1213 13:33:15.044590 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a6ca590029" podStartSLOduration=1.044533999 podStartE2EDuration="1.044533999s" podCreationTimestamp="2024-12-13 13:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:15.033477207 +0000 UTC m=+1.169167052" watchObservedRunningTime="2024-12-13 13:33:15.044533999 +0000 UTC m=+1.180223944" Dec 13 13:33:15.067147 kubelet[3370]: I1213 13:33:15.067112 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.0.0-a-a6ca590029" podStartSLOduration=1.067040293 podStartE2EDuration="1.067040293s" podCreationTimestamp="2024-12-13 13:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:15.045662328 
+0000 UTC m=+1.181352173" watchObservedRunningTime="2024-12-13 13:33:15.067040293 +0000 UTC m=+1.202730138" Dec 13 13:33:16.300631 kubelet[3370]: I1213 13:33:16.300588 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.0.0-a-a6ca590029" podStartSLOduration=2.300541658 podStartE2EDuration="2.300541658s" podCreationTimestamp="2024-12-13 13:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:15.067807013 +0000 UTC m=+1.203496858" watchObservedRunningTime="2024-12-13 13:33:16.300541658 +0000 UTC m=+2.436231903" Dec 13 13:33:16.486095 sudo[2275]: pam_unix(sudo:session): session closed for user root Dec 13 13:33:16.602171 sshd[2274]: Connection closed by 10.200.16.10 port 53186 Dec 13 13:33:16.603274 sshd-session[2272]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:16.607603 systemd[1]: sshd@6-10.200.8.13:22-10.200.16.10:53186.service: Deactivated successfully. Dec 13 13:33:16.610190 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:33:16.610422 systemd[1]: session-9.scope: Consumed 5.093s CPU time, 185.8M memory peak, 0B memory swap peak. Dec 13 13:33:16.612111 systemd-logind[1688]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:33:16.613267 systemd-logind[1688]: Removed session 9. Dec 13 13:33:24.652314 kubelet[3370]: I1213 13:33:24.652064 3370 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:33:24.653246 containerd[1699]: time="2024-12-13T13:33:24.652900299Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:33:24.653921 kubelet[3370]: I1213 13:33:24.653260 3370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:33:25.321417 kubelet[3370]: I1213 13:33:25.319853 3370 topology_manager.go:215] "Topology Admit Handler" podUID="3c09d376-d09a-4faa-8a22-854cf3716f8f" podNamespace="kube-system" podName="kube-proxy-7dq4t" Dec 13 13:33:25.330856 systemd[1]: Created slice kubepods-besteffort-pod3c09d376_d09a_4faa_8a22_854cf3716f8f.slice - libcontainer container kubepods-besteffort-pod3c09d376_d09a_4faa_8a22_854cf3716f8f.slice. Dec 13 13:33:25.343197 kubelet[3370]: I1213 13:33:25.342924 3370 topology_manager.go:215] "Topology Admit Handler" podUID="0a0230fd-9998-4e80-9d4c-76cfd56a5999" podNamespace="kube-system" podName="cilium-vxvnw" Dec 13 13:33:25.351500 systemd[1]: Created slice kubepods-burstable-pod0a0230fd_9998_4e80_9d4c_76cfd56a5999.slice - libcontainer container kubepods-burstable-pod0a0230fd_9998_4e80_9d4c_76cfd56a5999.slice. 
Dec 13 13:33:25.359768 kubelet[3370]: I1213 13:33:25.359554 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c09d376-d09a-4faa-8a22-854cf3716f8f-xtables-lock\") pod \"kube-proxy-7dq4t\" (UID: \"3c09d376-d09a-4faa-8a22-854cf3716f8f\") " pod="kube-system/kube-proxy-7dq4t" Dec 13 13:33:25.359768 kubelet[3370]: I1213 13:33:25.359601 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c09d376-d09a-4faa-8a22-854cf3716f8f-lib-modules\") pod \"kube-proxy-7dq4t\" (UID: \"3c09d376-d09a-4faa-8a22-854cf3716f8f\") " pod="kube-system/kube-proxy-7dq4t" Dec 13 13:33:25.359768 kubelet[3370]: I1213 13:33:25.359635 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c09d376-d09a-4faa-8a22-854cf3716f8f-kube-proxy\") pod \"kube-proxy-7dq4t\" (UID: \"3c09d376-d09a-4faa-8a22-854cf3716f8f\") " pod="kube-system/kube-proxy-7dq4t" Dec 13 13:33:25.359768 kubelet[3370]: I1213 13:33:25.359670 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl2nw\" (UniqueName: \"kubernetes.io/projected/3c09d376-d09a-4faa-8a22-854cf3716f8f-kube-api-access-fl2nw\") pod \"kube-proxy-7dq4t\" (UID: \"3c09d376-d09a-4faa-8a22-854cf3716f8f\") " pod="kube-system/kube-proxy-7dq4t" Dec 13 13:33:25.462436 kubelet[3370]: I1213 13:33:25.460185 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-cgroup\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462436 kubelet[3370]: I1213 13:33:25.460285 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-bpf-maps\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462436 kubelet[3370]: I1213 13:33:25.460322 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cni-path\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462436 kubelet[3370]: I1213 13:33:25.460354 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-host-proc-sys-net\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462436 kubelet[3370]: I1213 13:33:25.460409 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-etc-cni-netd\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462436 kubelet[3370]: I1213 13:33:25.460443 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-lib-modules\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462889 kubelet[3370]: I1213 13:33:25.460494 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-run\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462889 kubelet[3370]: I1213 13:33:25.460544 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-hostproc\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462889 kubelet[3370]: I1213 13:33:25.460577 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a0230fd-9998-4e80-9d4c-76cfd56a5999-clustermesh-secrets\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462889 kubelet[3370]: I1213 13:33:25.460614 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-host-proc-sys-kernel\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462889 kubelet[3370]: I1213 13:33:25.462005 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a0230fd-9998-4e80-9d4c-76cfd56a5999-hubble-tls\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.462889 kubelet[3370]: I1213 13:33:25.462055 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dml\" (UniqueName: \"kubernetes.io/projected/0a0230fd-9998-4e80-9d4c-76cfd56a5999-kube-api-access-t5dml\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.463169 kubelet[3370]: I1213 13:33:25.462140 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-xtables-lock\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.463169 kubelet[3370]: I1213 13:33:25.462174 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-config-path\") pod \"cilium-vxvnw\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " pod="kube-system/cilium-vxvnw" Dec 13 13:33:25.639666 containerd[1699]: time="2024-12-13T13:33:25.639524901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7dq4t,Uid:3c09d376-d09a-4faa-8a22-854cf3716f8f,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:25.656203 containerd[1699]: time="2024-12-13T13:33:25.656152915Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-vxvnw,Uid:0a0230fd-9998-4e80-9d4c-76cfd56a5999,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:25.727574 containerd[1699]: time="2024-12-13T13:33:25.726586272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:25.727574 containerd[1699]: time="2024-12-13T13:33:25.726710475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:25.727574 containerd[1699]: time="2024-12-13T13:33:25.726738275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:25.727574 containerd[1699]: time="2024-12-13T13:33:25.726924880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:25.740351 containerd[1699]: time="2024-12-13T13:33:25.740124309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:25.740351 containerd[1699]: time="2024-12-13T13:33:25.740238912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:25.740699 containerd[1699]: time="2024-12-13T13:33:25.740357115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:25.741994 containerd[1699]: time="2024-12-13T13:33:25.741692448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:25.753693 systemd[1]: Started cri-containerd-03ee5d393685914bc20f6efd557741b41d9ecbb09da5b401c4378e12146e5d5a.scope - libcontainer container 03ee5d393685914bc20f6efd557741b41d9ecbb09da5b401c4378e12146e5d5a. Dec 13 13:33:25.767518 systemd[1]: Started cri-containerd-a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94.scope - libcontainer container a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94. Dec 13 13:33:25.780994 kubelet[3370]: I1213 13:33:25.780958 3370 topology_manager.go:215] "Topology Admit Handler" podUID="92b6fe37-fed1-4201-a9c0-3adaa630f2a2" podNamespace="kube-system" podName="cilium-operator-5cc964979-slp8s" Dec 13 13:33:25.797703 systemd[1]: Created slice kubepods-besteffort-pod92b6fe37_fed1_4201_a9c0_3adaa630f2a2.slice - libcontainer container kubepods-besteffort-pod92b6fe37_fed1_4201_a9c0_3adaa630f2a2.slice. 
Dec 13 13:33:25.810968 containerd[1699]: time="2024-12-13T13:33:25.810844873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7dq4t,Uid:3c09d376-d09a-4faa-8a22-854cf3716f8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"03ee5d393685914bc20f6efd557741b41d9ecbb09da5b401c4378e12146e5d5a\"" Dec 13 13:33:25.814068 containerd[1699]: time="2024-12-13T13:33:25.813650743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vxvnw,Uid:0a0230fd-9998-4e80-9d4c-76cfd56a5999,Namespace:kube-system,Attempt:0,} returns sandbox id \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\"" Dec 13 13:33:25.815736 containerd[1699]: time="2024-12-13T13:33:25.815703094Z" level=info msg="CreateContainer within sandbox \"03ee5d393685914bc20f6efd557741b41d9ecbb09da5b401c4378e12146e5d5a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:33:25.816794 containerd[1699]: time="2024-12-13T13:33:25.816768620Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 13:33:25.865368 containerd[1699]: time="2024-12-13T13:33:25.865317431Z" level=info msg="CreateContainer within sandbox \"03ee5d393685914bc20f6efd557741b41d9ecbb09da5b401c4378e12146e5d5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0337386447e7f943a4df14efa39bbba84c75a5c81be0e08e215dcaaf9b441bb5\"" Dec 13 13:33:25.865867 kubelet[3370]: I1213 13:33:25.865761 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2tq\" (UniqueName: \"kubernetes.io/projected/92b6fe37-fed1-4201-a9c0-3adaa630f2a2-kube-api-access-xr2tq\") pod \"cilium-operator-5cc964979-slp8s\" (UID: \"92b6fe37-fed1-4201-a9c0-3adaa630f2a2\") " pod="kube-system/cilium-operator-5cc964979-slp8s" Dec 13 13:33:25.865867 kubelet[3370]: I1213 13:33:25.865816 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92b6fe37-fed1-4201-a9c0-3adaa630f2a2-cilium-config-path\") pod \"cilium-operator-5cc964979-slp8s\" (UID: \"92b6fe37-fed1-4201-a9c0-3adaa630f2a2\") " pod="kube-system/cilium-operator-5cc964979-slp8s" Dec 13 13:33:25.866047 containerd[1699]: time="2024-12-13T13:33:25.865993548Z" level=info msg="StartContainer for \"0337386447e7f943a4df14efa39bbba84c75a5c81be0e08e215dcaaf9b441bb5\"" Dec 13 13:33:25.898829 systemd[1]: Started cri-containerd-0337386447e7f943a4df14efa39bbba84c75a5c81be0e08e215dcaaf9b441bb5.scope - libcontainer container 0337386447e7f943a4df14efa39bbba84c75a5c81be0e08e215dcaaf9b441bb5. 
Dec 13 13:33:25.932254 containerd[1699]: time="2024-12-13T13:33:25.931897891Z" level=info msg="StartContainer for \"0337386447e7f943a4df14efa39bbba84c75a5c81be0e08e215dcaaf9b441bb5\" returns successfully" Dec 13 13:33:26.038841 kubelet[3370]: I1213 13:33:26.038807 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7dq4t" podStartSLOduration=1.038752156 podStartE2EDuration="1.038752156s" podCreationTimestamp="2024-12-13 13:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:26.036303695 +0000 UTC m=+12.171993640" watchObservedRunningTime="2024-12-13 13:33:26.038752156 +0000 UTC m=+12.174442101" Dec 13 13:33:26.103242 containerd[1699]: time="2024-12-13T13:33:26.102915456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-slp8s,Uid:92b6fe37-fed1-4201-a9c0-3adaa630f2a2,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:26.169120 containerd[1699]: time="2024-12-13T13:33:26.168233284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:26.169120 containerd[1699]: time="2024-12-13T13:33:26.168869500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:26.169120 containerd[1699]: time="2024-12-13T13:33:26.168887001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:26.169120 containerd[1699]: time="2024-12-13T13:33:26.168985703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:26.187822 systemd[1]: Started cri-containerd-7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb.scope - libcontainer container 7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb. Dec 13 13:33:26.233181 containerd[1699]: time="2024-12-13T13:33:26.232907297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-slp8s,Uid:92b6fe37-fed1-4201-a9c0-3adaa630f2a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb\"" Dec 13 13:33:32.726308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169563599.mount: Deactivated successfully. 
Dec 13 13:33:34.930713 containerd[1699]: time="2024-12-13T13:33:34.930658112Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:34.932708 containerd[1699]: time="2024-12-13T13:33:34.932641364Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734147" Dec 13 13:33:34.936168 containerd[1699]: time="2024-12-13T13:33:34.936114054Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:34.938290 containerd[1699]: time="2024-12-13T13:33:34.937660195Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.120762071s" Dec 13 13:33:34.938290 containerd[1699]: time="2024-12-13T13:33:34.937699596Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 13:33:34.938918 containerd[1699]: time="2024-12-13T13:33:34.938877326Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 13:33:34.940010 containerd[1699]: time="2024-12-13T13:33:34.939821651Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:33:34.990350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488912644.mount: Deactivated successfully. Dec 13 13:33:35.002946 containerd[1699]: time="2024-12-13T13:33:35.002905593Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\"" Dec 13 13:33:35.003470 containerd[1699]: time="2024-12-13T13:33:35.003431807Z" level=info msg="StartContainer for \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\"" Dec 13 13:33:35.035792 systemd[1]: run-containerd-runc-k8s.io-f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc-runc.7Wo3qV.mount: Deactivated successfully. Dec 13 13:33:35.045559 systemd[1]: Started cri-containerd-f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc.scope - libcontainer container f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc. Dec 13 13:33:35.073533 containerd[1699]: time="2024-12-13T13:33:35.073486430Z" level=info msg="StartContainer for \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\" returns successfully" Dec 13 13:33:35.080443 systemd[1]: cri-containerd-f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc.scope: Deactivated successfully. 
Dec 13 13:33:35.986422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc-rootfs.mount: Deactivated successfully. Dec 13 13:33:38.773330 containerd[1699]: time="2024-12-13T13:33:38.773102631Z" level=info msg="shim disconnected" id=f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc namespace=k8s.io Dec 13 13:33:38.773330 containerd[1699]: time="2024-12-13T13:33:38.773176933Z" level=warning msg="cleaning up after shim disconnected" id=f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc namespace=k8s.io Dec 13 13:33:38.773330 containerd[1699]: time="2024-12-13T13:33:38.773191134Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:39.063056 containerd[1699]: time="2024-12-13T13:33:39.062451037Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:33:39.100182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237473536.mount: Deactivated successfully. Dec 13 13:33:39.110652 containerd[1699]: time="2024-12-13T13:33:39.110602885Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\"" Dec 13 13:33:39.112133 containerd[1699]: time="2024-12-13T13:33:39.111181600Z" level=info msg="StartContainer for \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\"" Dec 13 13:33:39.142956 systemd[1]: run-containerd-runc-k8s.io-eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a-runc.SS5ANv.mount: Deactivated successfully. Dec 13 13:33:39.150751 systemd[1]: Started cri-containerd-eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a.scope - libcontainer container eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a. Dec 13 13:33:39.184755 containerd[1699]: time="2024-12-13T13:33:39.184635403Z" level=info msg="StartContainer for \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\" returns successfully" Dec 13 13:33:39.194304 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:33:39.194872 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:33:39.195181 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:33:39.202651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:33:39.202928 systemd[1]: cri-containerd-eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a.scope: Deactivated successfully. Dec 13 13:33:39.230492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 13:33:39.240309 containerd[1699]: time="2024-12-13T13:33:39.240242643Z" level=info msg="shim disconnected" id=eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a namespace=k8s.io Dec 13 13:33:39.240554 containerd[1699]: time="2024-12-13T13:33:39.240325445Z" level=warning msg="cleaning up after shim disconnected" id=eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a namespace=k8s.io Dec 13 13:33:39.240554 containerd[1699]: time="2024-12-13T13:33:39.240339746Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:40.070542 containerd[1699]: time="2024-12-13T13:33:40.070493652Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:33:40.094247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a-rootfs.mount: Deactivated successfully. Dec 13 13:33:40.117596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988626572.mount: Deactivated successfully. Dec 13 13:33:40.130072 containerd[1699]: time="2024-12-13T13:33:40.129689986Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\"" Dec 13 13:33:40.131564 containerd[1699]: time="2024-12-13T13:33:40.130755013Z" level=info msg="StartContainer for \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\"" Dec 13 13:33:40.182654 systemd[1]: Started cri-containerd-47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e.scope - libcontainer container 47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e. Dec 13 13:33:40.230156 containerd[1699]: time="2024-12-13T13:33:40.230112187Z" level=info msg="StartContainer for \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\" returns successfully" Dec 13 13:33:40.230727 systemd[1]: cri-containerd-47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e.scope: Deactivated successfully. 
Dec 13 13:33:40.469888 containerd[1699]: time="2024-12-13T13:33:40.469641093Z" level=info msg="shim disconnected" id=47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e namespace=k8s.io Dec 13 13:33:40.469888 containerd[1699]: time="2024-12-13T13:33:40.469714895Z" level=warning msg="cleaning up after shim disconnected" id=47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e namespace=k8s.io Dec 13 13:33:40.469888 containerd[1699]: time="2024-12-13T13:33:40.469725895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:40.788746 containerd[1699]: time="2024-12-13T13:33:40.788592256Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:40.790819 containerd[1699]: time="2024-12-13T13:33:40.790659209Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907161" Dec 13 13:33:40.794709 containerd[1699]: time="2024-12-13T13:33:40.794659813Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:40.796103 containerd[1699]: time="2024-12-13T13:33:40.795945846Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.856050693s" Dec 13 13:33:40.796103 containerd[1699]: time="2024-12-13T13:33:40.795984947Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 13:33:40.798563 containerd[1699]: time="2024-12-13T13:33:40.798312207Z" level=info msg="CreateContainer within sandbox \"7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 13:33:40.836539 containerd[1699]: time="2024-12-13T13:33:40.836494597Z" level=info msg="CreateContainer within sandbox \"7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\"" Dec 13 13:33:40.837036 containerd[1699]: time="2024-12-13T13:33:40.836987509Z" level=info msg="StartContainer for \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\"" Dec 13 13:33:40.861551 systemd[1]: Started cri-containerd-cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e.scope - libcontainer container cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e. 
Dec 13 13:33:40.895185 containerd[1699]: time="2024-12-13T13:33:40.895132516Z" level=info msg="StartContainer for \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\" returns successfully" Dec 13 13:33:41.073177 containerd[1699]: time="2024-12-13T13:33:41.073043225Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:33:41.095394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e-rootfs.mount: Deactivated successfully. Dec 13 13:33:41.124551 containerd[1699]: time="2024-12-13T13:33:41.124351154Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\"" Dec 13 13:33:41.128309 containerd[1699]: time="2024-12-13T13:33:41.127214828Z" level=info msg="StartContainer for \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\"" Dec 13 13:33:41.186565 systemd[1]: Started cri-containerd-f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60.scope - libcontainer container f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60. Dec 13 13:33:41.261694 systemd[1]: cri-containerd-f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60.scope: Deactivated successfully. Dec 13 13:33:41.264900 containerd[1699]: time="2024-12-13T13:33:41.264202177Z" level=info msg="StartContainer for \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\" returns successfully" Dec 13 13:33:41.575756 containerd[1699]: time="2024-12-13T13:33:41.575474941Z" level=info msg="shim disconnected" id=f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60 namespace=k8s.io Dec 13 13:33:41.575756 containerd[1699]: time="2024-12-13T13:33:41.575548243Z" level=warning msg="cleaning up after shim disconnected" id=f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60 namespace=k8s.io Dec 13 13:33:41.575756 containerd[1699]: time="2024-12-13T13:33:41.575558943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:42.091422 containerd[1699]: time="2024-12-13T13:33:42.089636761Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:33:42.097084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60-rootfs.mount: Deactivated successfully. 
Dec 13 13:33:42.116122 kubelet[3370]: I1213 13:33:42.115903 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-slp8s" podStartSLOduration=2.553822116 podStartE2EDuration="17.11584454s" podCreationTimestamp="2024-12-13 13:33:25 +0000 UTC" firstStartedPulling="2024-12-13 13:33:26.234496037 +0000 UTC m=+12.370185882" lastFinishedPulling="2024-12-13 13:33:40.796518361 +0000 UTC m=+26.932208306" observedRunningTime="2024-12-13 13:33:41.285072018 +0000 UTC m=+27.420761863" watchObservedRunningTime="2024-12-13 13:33:42.11584454 +0000 UTC m=+28.251534385" Dec 13 13:33:42.133171 containerd[1699]: time="2024-12-13T13:33:42.133123488Z" level=info msg="CreateContainer within sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\"" Dec 13 13:33:42.134137 containerd[1699]: time="2024-12-13T13:33:42.133832406Z" level=info msg="StartContainer for \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\"" Dec 13 13:33:42.165547 systemd[1]: Started cri-containerd-27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6.scope - libcontainer container 27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6. Dec 13 13:33:42.202789 containerd[1699]: time="2024-12-13T13:33:42.202734391Z" level=info msg="StartContainer for \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\" returns successfully" Dec 13 13:33:42.318410 kubelet[3370]: I1213 13:33:42.318272 3370 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:33:42.419124 kubelet[3370]: I1213 13:33:42.418987 3370 topology_manager.go:215] "Topology Admit Handler" podUID="1f844e51-7032-4d24-b5f7-d38e3f6c8bad" podNamespace="kube-system" podName="coredns-76f75df574-4qmj6" Dec 13 13:33:42.433524 systemd[1]: Created slice kubepods-burstable-pod1f844e51_7032_4d24_b5f7_d38e3f6c8bad.slice - libcontainer container kubepods-burstable-pod1f844e51_7032_4d24_b5f7_d38e3f6c8bad.slice. Dec 13 13:33:42.440356 kubelet[3370]: W1213 13:33:42.439744 3370 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.0.0-a-a6ca590029" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.0.0-a-a6ca590029' and this object Dec 13 13:33:42.440356 kubelet[3370]: E1213 13:33:42.439789 3370 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.0.0-a-a6ca590029" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.0.0-a-a6ca590029' and this object Dec 13 13:33:42.442682 kubelet[3370]: I1213 13:33:42.442459 3370 topology_manager.go:215] "Topology Admit Handler" podUID="9c3c3287-1ca6-4898-91c3-f175caa2fdb0" podNamespace="kube-system" podName="coredns-76f75df574-vxxdr" Dec 13 13:33:42.454918 systemd[1]: Created slice kubepods-burstable-pod9c3c3287_1ca6_4898_91c3_f175caa2fdb0.slice - libcontainer container kubepods-burstable-pod9c3c3287_1ca6_4898_91c3_f175caa2fdb0.slice. 
Dec 13 13:33:42.480684 kubelet[3370]: I1213 13:33:42.480369 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f844e51-7032-4d24-b5f7-d38e3f6c8bad-config-volume\") pod \"coredns-76f75df574-4qmj6\" (UID: \"1f844e51-7032-4d24-b5f7-d38e3f6c8bad\") " pod="kube-system/coredns-76f75df574-4qmj6" Dec 13 13:33:42.480684 kubelet[3370]: I1213 13:33:42.480624 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfqg\" (UniqueName: \"kubernetes.io/projected/9c3c3287-1ca6-4898-91c3-f175caa2fdb0-kube-api-access-pwfqg\") pod \"coredns-76f75df574-vxxdr\" (UID: \"9c3c3287-1ca6-4898-91c3-f175caa2fdb0\") " pod="kube-system/coredns-76f75df574-vxxdr" Dec 13 13:33:42.481192 kubelet[3370]: I1213 13:33:42.480963 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c3c3287-1ca6-4898-91c3-f175caa2fdb0-config-volume\") pod \"coredns-76f75df574-vxxdr\" (UID: \"9c3c3287-1ca6-4898-91c3-f175caa2fdb0\") " pod="kube-system/coredns-76f75df574-vxxdr" Dec 13 13:33:42.481192 kubelet[3370]: I1213 13:33:42.481092 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2rm\" (UniqueName: \"kubernetes.io/projected/1f844e51-7032-4d24-b5f7-d38e3f6c8bad-kube-api-access-ml2rm\") pod \"coredns-76f75df574-4qmj6\" (UID: \"1f844e51-7032-4d24-b5f7-d38e3f6c8bad\") " pod="kube-system/coredns-76f75df574-4qmj6" Dec 13 13:33:43.133409 kubelet[3370]: I1213 13:33:43.132352 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vxvnw" podStartSLOduration=9.010190768 podStartE2EDuration="18.132302773s" podCreationTimestamp="2024-12-13 13:33:25 +0000 UTC" firstStartedPulling="2024-12-13 13:33:25.8159472 +0000 UTC m=+11.951637045" lastFinishedPulling="2024-12-13 13:33:34.938059205 +0000 UTC m=+21.073749050" observedRunningTime="2024-12-13 13:33:43.131668057 +0000 UTC m=+29.267358002" watchObservedRunningTime="2024-12-13 13:33:43.132302773 +0000 UTC m=+29.267992718" Dec 13 13:33:43.583256 kubelet[3370]: E1213 13:33:43.583126 3370 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Dec 13 13:33:43.583689 kubelet[3370]: E1213 13:33:43.583431 3370 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1f844e51-7032-4d24-b5f7-d38e3f6c8bad-config-volume podName:1f844e51-7032-4d24-b5f7-d38e3f6c8bad nodeName:}" failed. No retries permitted until 2024-12-13 13:33:44.083322657 +0000 UTC m=+30.219012602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1f844e51-7032-4d24-b5f7-d38e3f6c8bad-config-volume") pod "coredns-76f75df574-4qmj6" (UID: "1f844e51-7032-4d24-b5f7-d38e3f6c8bad") : failed to sync configmap cache: timed out waiting for the condition Dec 13 13:33:43.583962 kubelet[3370]: E1213 13:33:43.583129 3370 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Dec 13 13:33:43.583962 kubelet[3370]: E1213 13:33:43.583935 3370 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c3c3287-1ca6-4898-91c3-f175caa2fdb0-config-volume podName:9c3c3287-1ca6-4898-91c3-f175caa2fdb0 nodeName:}" failed. 
No retries permitted until 2024-12-13 13:33:44.083913673 +0000 UTC m=+30.219603518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9c3c3287-1ca6-4898-91c3-f175caa2fdb0-config-volume") pod "coredns-76f75df574-vxxdr" (UID: "9c3c3287-1ca6-4898-91c3-f175caa2fdb0") : failed to sync configmap cache: timed out waiting for the condition Dec 13 13:33:44.243996 containerd[1699]: time="2024-12-13T13:33:44.243938672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4qmj6,Uid:1f844e51-7032-4d24-b5f7-d38e3f6c8bad,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:44.263622 containerd[1699]: time="2024-12-13T13:33:44.263581781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vxxdr,Uid:9c3c3287-1ca6-4898-91c3-f175caa2fdb0,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:44.415744 systemd-networkd[1333]: cilium_host: Link UP Dec 13 13:33:44.417090 systemd-networkd[1333]: cilium_net: Link UP Dec 13 13:33:44.417343 systemd-networkd[1333]: cilium_net: Gained carrier Dec 13 13:33:44.417926 systemd-networkd[1333]: cilium_host: Gained carrier Dec 13 13:33:44.474442 systemd-networkd[1333]: cilium_host: Gained IPv6LL Dec 13 13:33:44.667322 systemd-networkd[1333]: cilium_vxlan: Link UP Dec 13 13:33:44.667332 systemd-networkd[1333]: cilium_vxlan: Gained carrier Dec 13 13:33:44.947551 systemd-networkd[1333]: cilium_net: Gained IPv6LL Dec 13 13:33:44.968429 kernel: NET: Registered PF_ALG protocol family Dec 13 13:33:45.733566 systemd-networkd[1333]: lxc_health: Link UP Dec 13 13:33:45.766231 systemd-networkd[1333]: lxc_health: Gained carrier Dec 13 13:33:45.844595 systemd-networkd[1333]: cilium_vxlan: Gained IPv6LL Dec 13 13:33:46.325164 systemd-networkd[1333]: lxcfaeaeb829e40: Link UP Dec 13 13:33:46.333559 kernel: eth0: renamed from tmp4bfaf Dec 13 13:33:46.339763 systemd-networkd[1333]: lxcfaeaeb829e40: Gained carrier Dec 13 13:33:46.370458 systemd-networkd[1333]: lxc3b2af0a3f83e: Link UP Dec 13 13:33:46.377500 kernel: eth0: renamed from tmpf836a Dec 13 13:33:46.386398 systemd-networkd[1333]: lxc3b2af0a3f83e: Gained carrier Dec 13 13:33:47.507624 systemd-networkd[1333]: lxc_health: Gained IPv6LL Dec 13 13:33:47.508632 systemd-networkd[1333]: lxcfaeaeb829e40: Gained IPv6LL Dec 13 13:33:48.084675 systemd-networkd[1333]: lxc3b2af0a3f83e: Gained IPv6LL Dec 13 13:33:49.978291 containerd[1699]: time="2024-12-13T13:33:49.977408239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:49.978291 containerd[1699]: time="2024-12-13T13:33:49.977473240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:49.978291 containerd[1699]: time="2024-12-13T13:33:49.977490141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:49.981930 containerd[1699]: time="2024-12-13T13:33:49.977585343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:50.006501 containerd[1699]: time="2024-12-13T13:33:50.003842315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:50.006839 containerd[1699]: time="2024-12-13T13:33:50.006592386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:50.006839 containerd[1699]: time="2024-12-13T13:33:50.006663888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:50.007220 containerd[1699]: time="2024-12-13T13:33:50.007055498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:50.035708 systemd[1]: Started cri-containerd-4bfaf749a0e3af9d118bf1a39071d396619bb2339927088f5fd2719f6d6b5671.scope - libcontainer container 4bfaf749a0e3af9d118bf1a39071d396619bb2339927088f5fd2719f6d6b5671. Dec 13 13:33:50.060813 systemd[1]: Started cri-containerd-f836a581cbd95924ff434d9a52726c2409e568c008d951381c464a1fa8857aa8.scope - libcontainer container f836a581cbd95924ff434d9a52726c2409e568c008d951381c464a1fa8857aa8. Dec 13 13:33:50.154562 containerd[1699]: time="2024-12-13T13:33:50.154500272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4qmj6,Uid:1f844e51-7032-4d24-b5f7-d38e3f6c8bad,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bfaf749a0e3af9d118bf1a39071d396619bb2339927088f5fd2719f6d6b5671\"" Dec 13 13:33:50.161102 containerd[1699]: time="2024-12-13T13:33:50.161039539Z" level=info msg="CreateContainer within sandbox \"4bfaf749a0e3af9d118bf1a39071d396619bb2339927088f5fd2719f6d6b5671\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:33:50.168744 containerd[1699]: time="2024-12-13T13:33:50.168702135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vxxdr,Uid:9c3c3287-1ca6-4898-91c3-f175caa2fdb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f836a581cbd95924ff434d9a52726c2409e568c008d951381c464a1fa8857aa8\"" Dec 13 13:33:50.173037 containerd[1699]: time="2024-12-13T13:33:50.173003745Z" level=info msg="CreateContainer within sandbox \"f836a581cbd95924ff434d9a52726c2409e568c008d951381c464a1fa8857aa8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:33:50.243638 containerd[1699]: time="2024-12-13T13:33:50.243489649Z" level=info msg="CreateContainer within sandbox \"4bfaf749a0e3af9d118bf1a39071d396619bb2339927088f5fd2719f6d6b5671\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af4b4d9a52944514bdb622f3a5b14b8a4ea7735d6fb1271b21d1046143293d1c\"" Dec 13 13:33:50.245975 containerd[1699]: time="2024-12-13T13:33:50.244867285Z" level=info msg="StartContainer for \"af4b4d9a52944514bdb622f3a5b14b8a4ea7735d6fb1271b21d1046143293d1c\"" Dec 13 13:33:50.251489 containerd[1699]: time="2024-12-13T13:33:50.251447753Z" level=info msg="CreateContainer within sandbox \"f836a581cbd95924ff434d9a52726c2409e568c008d951381c464a1fa8857aa8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00e287cc63e4bce10f616b68785f5ea89916237f085f8af68f1f8848e54518fb\"" Dec 13 13:33:50.252702 containerd[1699]: time="2024-12-13T13:33:50.252675985Z" level=info msg="StartContainer for \"00e287cc63e4bce10f616b68785f5ea89916237f085f8af68f1f8848e54518fb\"" Dec 13 13:33:50.283223 systemd[1]: Started cri-containerd-af4b4d9a52944514bdb622f3a5b14b8a4ea7735d6fb1271b21d1046143293d1c.scope - libcontainer container af4b4d9a52944514bdb622f3a5b14b8a4ea7735d6fb1271b21d1046143293d1c. 
Dec 13 13:33:50.291596 systemd[1]: Started cri-containerd-00e287cc63e4bce10f616b68785f5ea89916237f085f8af68f1f8848e54518fb.scope - libcontainer container 00e287cc63e4bce10f616b68785f5ea89916237f085f8af68f1f8848e54518fb. Dec 13 13:33:50.325164 containerd[1699]: time="2024-12-13T13:33:50.324016311Z" level=info msg="StartContainer for \"af4b4d9a52944514bdb622f3a5b14b8a4ea7735d6fb1271b21d1046143293d1c\" returns successfully" Dec 13 13:33:50.349897 containerd[1699]: time="2024-12-13T13:33:50.349843772Z" level=info msg="StartContainer for \"00e287cc63e4bce10f616b68785f5ea89916237f085f8af68f1f8848e54518fb\" returns successfully" Dec 13 13:33:51.143812 kubelet[3370]: I1213 13:33:51.143764 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4qmj6" podStartSLOduration=26.143599089 podStartE2EDuration="26.143599089s" podCreationTimestamp="2024-12-13 13:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:51.142060749 +0000 UTC m=+37.277750594" watchObservedRunningTime="2024-12-13 13:33:51.143599089 +0000 UTC m=+37.279289034" Dec 13 13:33:51.144400 kubelet[3370]: I1213 13:33:51.144039 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vxxdr" podStartSLOduration=26.143904196 podStartE2EDuration="26.143904196s" podCreationTimestamp="2024-12-13 13:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:51.126687656 +0000 UTC m=+37.262377501" watchObservedRunningTime="2024-12-13 13:33:51.143904196 +0000 UTC m=+37.279594041" Dec 13 13:35:13.782705 systemd[1]: Started sshd@7-10.200.8.13:22-10.200.16.10:42776.service - OpenSSH per-connection server daemon (10.200.16.10:42776). Dec 13 13:35:14.503424 sshd[4745]: Accepted publickey for core from 10.200.16.10 port 42776 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:14.505153 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:14.510199 systemd-logind[1688]: New session 10 of user core. Dec 13 13:35:14.514583 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:35:15.075026 sshd[4749]: Connection closed by 10.200.16.10 port 42776 Dec 13 13:35:15.075786 sshd-session[4745]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:15.079211 systemd[1]: sshd@7-10.200.8.13:22-10.200.16.10:42776.service: Deactivated successfully. Dec 13 13:35:15.081856 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:35:15.084131 systemd-logind[1688]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:35:15.085460 systemd-logind[1688]: Removed session 10. Dec 13 13:35:20.205708 systemd[1]: Started sshd@8-10.200.8.13:22-10.200.16.10:39216.service - OpenSSH per-connection server daemon (10.200.16.10:39216). Dec 13 13:35:20.916105 sshd[4762]: Accepted publickey for core from 10.200.16.10 port 39216 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:20.917657 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:20.922623 systemd-logind[1688]: New session 11 of user core. Dec 13 13:35:20.927763 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 13:35:21.477990 sshd[4764]: Connection closed by 10.200.16.10 port 39216 Dec 13 13:35:21.478886 sshd-session[4762]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:21.482531 systemd[1]: sshd@8-10.200.8.13:22-10.200.16.10:39216.service: Deactivated successfully. Dec 13 13:35:21.485237 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:35:21.487100 systemd-logind[1688]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:35:21.488451 systemd-logind[1688]: Removed session 11. Dec 13 13:35:26.607697 systemd[1]: Started sshd@9-10.200.8.13:22-10.200.16.10:39222.service - OpenSSH per-connection server daemon (10.200.16.10:39222). Dec 13 13:35:27.318331 sshd[4778]: Accepted publickey for core from 10.200.16.10 port 39222 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:27.320199 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:27.326911 systemd-logind[1688]: New session 12 of user core. Dec 13 13:35:27.333547 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:35:27.875994 sshd[4780]: Connection closed by 10.200.16.10 port 39222 Dec 13 13:35:27.876858 sshd-session[4778]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:27.880293 systemd[1]: sshd@9-10.200.8.13:22-10.200.16.10:39222.service: Deactivated successfully. Dec 13 13:35:27.883080 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:35:27.884987 systemd-logind[1688]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:35:27.886450 systemd-logind[1688]: Removed session 12. Dec 13 13:35:33.017740 systemd[1]: Started sshd@10-10.200.8.13:22-10.200.16.10:55780.service - OpenSSH per-connection server daemon (10.200.16.10:55780). Dec 13 13:35:33.732243 sshd[4792]: Accepted publickey for core from 10.200.16.10 port 55780 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:33.733815 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:33.739603 systemd-logind[1688]: New session 13 of user core. Dec 13 13:35:33.745556 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:35:34.305212 sshd[4794]: Connection closed by 10.200.16.10 port 55780 Dec 13 13:35:34.306057 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:34.310286 systemd[1]: sshd@10-10.200.8.13:22-10.200.16.10:55780.service: Deactivated successfully. Dec 13 13:35:34.312643 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:35:34.313634 systemd-logind[1688]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:35:34.314772 systemd-logind[1688]: Removed session 13. Dec 13 13:35:34.440938 systemd[1]: Started sshd@11-10.200.8.13:22-10.200.16.10:55782.service - OpenSSH per-connection server daemon (10.200.16.10:55782). Dec 13 13:35:35.172012 sshd[4806]: Accepted publickey for core from 10.200.16.10 port 55782 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:35.173497 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:35.177676 systemd-logind[1688]: New session 14 of user core. Dec 13 13:35:35.181603 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 13:35:35.787937 sshd[4808]: Connection closed by 10.200.16.10 port 55782 Dec 13 13:35:35.788708 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:35.793207 systemd[1]: sshd@11-10.200.8.13:22-10.200.16.10:55782.service: Deactivated successfully. Dec 13 13:35:35.795275 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:35:35.796156 systemd-logind[1688]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:35:35.797353 systemd-logind[1688]: Removed session 14. Dec 13 13:35:35.922706 systemd[1]: Started sshd@12-10.200.8.13:22-10.200.16.10:55796.service - OpenSSH per-connection server daemon (10.200.16.10:55796). Dec 13 13:35:36.704356 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 55796 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:36.705859 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:36.710082 systemd-logind[1688]: New session 15 of user core. Dec 13 13:35:36.714558 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:35:37.292554 sshd[4818]: Connection closed by 10.200.16.10 port 55796 Dec 13 13:35:37.293362 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:37.297634 systemd[1]: sshd@12-10.200.8.13:22-10.200.16.10:55796.service: Deactivated successfully. Dec 13 13:35:37.299817 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:35:37.300700 systemd-logind[1688]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:35:37.301771 systemd-logind[1688]: Removed session 15. Dec 13 13:35:42.422666 systemd[1]: Started sshd@13-10.200.8.13:22-10.200.16.10:50468.service - OpenSSH per-connection server daemon (10.200.16.10:50468). Dec 13 13:35:43.134834 sshd[4828]: Accepted publickey for core from 10.200.16.10 port 50468 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:43.136692 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:43.141189 systemd-logind[1688]: New session 16 of user core. Dec 13 13:35:43.146544 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:35:43.693475 sshd[4830]: Connection closed by 10.200.16.10 port 50468 Dec 13 13:35:43.694158 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:43.698172 systemd[1]: sshd@13-10.200.8.13:22-10.200.16.10:50468.service: Deactivated successfully. Dec 13 13:35:43.700666 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:35:43.701499 systemd-logind[1688]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:35:43.702482 systemd-logind[1688]: Removed session 16. Dec 13 13:35:48.823683 systemd[1]: Started sshd@14-10.200.8.13:22-10.200.16.10:41188.service - OpenSSH per-connection server daemon (10.200.16.10:41188). Dec 13 13:35:49.534602 sshd[4841]: Accepted publickey for core from 10.200.16.10 port 41188 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:49.536910 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:49.541888 systemd-logind[1688]: New session 17 of user core. Dec 13 13:35:49.548571 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 13:35:50.098225 sshd[4843]: Connection closed by 10.200.16.10 port 41188 Dec 13 13:35:50.099085 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:50.101914 systemd[1]: sshd@14-10.200.8.13:22-10.200.16.10:41188.service: Deactivated successfully. Dec 13 13:35:50.104100 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:35:50.105984 systemd-logind[1688]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:35:50.106966 systemd-logind[1688]: Removed session 17. Dec 13 13:35:50.227723 systemd[1]: Started sshd@15-10.200.8.13:22-10.200.16.10:41192.service - OpenSSH per-connection server daemon (10.200.16.10:41192). Dec 13 13:35:50.942943 sshd[4854]: Accepted publickey for core from 10.200.16.10 port 41192 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:50.944583 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:50.949475 systemd-logind[1688]: New session 18 of user core. Dec 13 13:35:50.955545 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:35:51.564239 sshd[4856]: Connection closed by 10.200.16.10 port 41192 Dec 13 13:35:51.565156 sshd-session[4854]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:51.569818 systemd[1]: sshd@15-10.200.8.13:22-10.200.16.10:41192.service: Deactivated successfully. Dec 13 13:35:51.572144 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:35:51.572966 systemd-logind[1688]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:35:51.574027 systemd-logind[1688]: Removed session 18. Dec 13 13:35:51.696712 systemd[1]: Started sshd@16-10.200.8.13:22-10.200.16.10:41206.service - OpenSSH per-connection server daemon (10.200.16.10:41206). Dec 13 13:35:52.408661 sshd[4864]: Accepted publickey for core from 10.200.16.10 port 41206 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:52.410249 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:52.414608 systemd-logind[1688]: New session 19 of user core. Dec 13 13:35:52.419543 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:35:54.450436 sshd[4866]: Connection closed by 10.200.16.10 port 41206 Dec 13 13:35:54.451329 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:54.455787 systemd[1]: sshd@16-10.200.8.13:22-10.200.16.10:41206.service: Deactivated successfully. Dec 13 13:35:54.458136 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:35:54.459115 systemd-logind[1688]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:35:54.460192 systemd-logind[1688]: Removed session 19. Dec 13 13:35:54.583681 systemd[1]: Started sshd@17-10.200.8.13:22-10.200.16.10:41214.service - OpenSSH per-connection server daemon (10.200.16.10:41214). Dec 13 13:35:55.299454 sshd[4882]: Accepted publickey for core from 10.200.16.10 port 41214 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:55.301146 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:55.305986 systemd-logind[1688]: New session 20 of user core. Dec 13 13:35:55.311553 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 13:35:55.969811 sshd[4885]: Connection closed by 10.200.16.10 port 41214 Dec 13 13:35:55.970834 sshd-session[4882]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:55.975350 systemd[1]: sshd@17-10.200.8.13:22-10.200.16.10:41214.service: Deactivated successfully. Dec 13 13:35:55.979436 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:35:55.980415 systemd-logind[1688]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:35:55.981507 systemd-logind[1688]: Removed session 20. Dec 13 13:35:56.099767 systemd[1]: Started sshd@18-10.200.8.13:22-10.200.16.10:41224.service - OpenSSH per-connection server daemon (10.200.16.10:41224). Dec 13 13:35:56.817616 sshd[4897]: Accepted publickey for core from 10.200.16.10 port 41224 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:35:56.819500 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:35:56.825364 systemd-logind[1688]: New session 21 of user core. Dec 13 13:35:56.830557 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 13:35:57.374669 sshd[4899]: Connection closed by 10.200.16.10 port 41224 Dec 13 13:35:57.375668 sshd-session[4897]: pam_unix(sshd:session): session closed for user core Dec 13 13:35:57.379502 systemd[1]: sshd@18-10.200.8.13:22-10.200.16.10:41224.service: Deactivated successfully. Dec 13 13:35:57.382108 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 13:35:57.383977 systemd-logind[1688]: Session 21 logged out. Waiting for processes to exit. Dec 13 13:35:57.385421 systemd-logind[1688]: Removed session 21. Dec 13 13:36:02.504789 systemd[1]: Started sshd@19-10.200.8.13:22-10.200.16.10:35266.service - OpenSSH per-connection server daemon (10.200.16.10:35266). Dec 13 13:36:03.215344 sshd[4913]: Accepted publickey for core from 10.200.16.10 port 35266 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:36:03.216842 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:36:03.221707 systemd-logind[1688]: New session 22 of user core. Dec 13 13:36:03.228555 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 13:36:03.773669 sshd[4915]: Connection closed by 10.200.16.10 port 35266 Dec 13 13:36:03.774508 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Dec 13 13:36:03.778897 systemd[1]: sshd@19-10.200.8.13:22-10.200.16.10:35266.service: Deactivated successfully. Dec 13 13:36:03.781350 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 13:36:03.783198 systemd-logind[1688]: Session 22 logged out. Waiting for processes to exit. Dec 13 13:36:03.784722 systemd-logind[1688]: Removed session 22. Dec 13 13:36:08.899589 systemd[1]: Started sshd@20-10.200.8.13:22-10.200.16.10:51074.service - OpenSSH per-connection server daemon (10.200.16.10:51074). Dec 13 13:36:09.617922 sshd[4926]: Accepted publickey for core from 10.200.16.10 port 51074 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:36:09.619419 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:36:09.624430 systemd-logind[1688]: New session 23 of user core. Dec 13 13:36:09.626555 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 13:36:10.180198 sshd[4928]: Connection closed by 10.200.16.10 port 51074 Dec 13 13:36:10.181185 sshd-session[4926]: pam_unix(sshd:session): session closed for user core Dec 13 13:36:10.185433 systemd[1]: sshd@20-10.200.8.13:22-10.200.16.10:51074.service: Deactivated successfully. Dec 13 13:36:10.187583 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 13:36:10.188338 systemd-logind[1688]: Session 23 logged out. Waiting for processes to exit. Dec 13 13:36:10.189667 systemd-logind[1688]: Removed session 23. Dec 13 13:36:15.305656 systemd[1]: Started sshd@21-10.200.8.13:22-10.200.16.10:51090.service - OpenSSH per-connection server daemon (10.200.16.10:51090). Dec 13 13:36:16.022632 sshd[4941]: Accepted publickey for core from 10.200.16.10 port 51090 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:36:16.024148 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:36:16.028436 systemd-logind[1688]: New session 24 of user core. Dec 13 13:36:16.032535 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 13:36:16.581406 sshd[4943]: Connection closed by 10.200.16.10 port 51090 Dec 13 13:36:16.582307 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Dec 13 13:36:16.585988 systemd[1]: sshd@21-10.200.8.13:22-10.200.16.10:51090.service: Deactivated successfully. Dec 13 13:36:16.588826 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 13:36:16.590973 systemd-logind[1688]: Session 24 logged out. Waiting for processes to exit. Dec 13 13:36:16.592042 systemd-logind[1688]: Removed session 24. Dec 13 13:36:16.710686 systemd[1]: Started sshd@22-10.200.8.13:22-10.200.16.10:51096.service - OpenSSH per-connection server daemon (10.200.16.10:51096). Dec 13 13:36:17.422582 sshd[4954]: Accepted publickey for core from 10.200.16.10 port 51096 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:36:17.424086 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:36:17.429348 systemd-logind[1688]: New session 25 of user core. Dec 13 13:36:17.435546 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 13:36:19.120805 containerd[1699]: time="2024-12-13T13:36:19.120731744Z" level=info msg="StopContainer for \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\" with timeout 30 (s)" Dec 13 13:36:19.122327 containerd[1699]: time="2024-12-13T13:36:19.122220082Z" level=info msg="Stop container \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\" with signal terminated" Dec 13 13:36:19.141805 systemd[1]: run-containerd-runc-k8s.io-27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6-runc.ggEHCG.mount: Deactivated successfully. Dec 13 13:36:19.144452 systemd[1]: cri-containerd-cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e.scope: Deactivated successfully. 
Dec 13 13:36:19.162728 containerd[1699]: time="2024-12-13T13:36:19.162680828Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:36:19.172704 containerd[1699]: time="2024-12-13T13:36:19.172566383Z" level=info msg="StopContainer for \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\" with timeout 2 (s)" Dec 13 13:36:19.173869 containerd[1699]: time="2024-12-13T13:36:19.173668512Z" level=info msg="Stop container \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\" with signal terminated" Dec 13 13:36:19.177918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e-rootfs.mount: Deactivated successfully. Dec 13 13:36:19.184814 systemd-networkd[1333]: lxc_health: Link DOWN Dec 13 13:36:19.184824 systemd-networkd[1333]: lxc_health: Lost carrier Dec 13 13:36:19.201549 systemd[1]: cri-containerd-27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6.scope: Deactivated successfully. Dec 13 13:36:19.201872 systemd[1]: cri-containerd-27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6.scope: Consumed 7.069s CPU time. Dec 13 13:36:19.222294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6-rootfs.mount: Deactivated successfully. Dec 13 13:36:19.272218 containerd[1699]: time="2024-12-13T13:36:19.272137556Z" level=info msg="shim disconnected" id=cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e namespace=k8s.io Dec 13 13:36:19.272218 containerd[1699]: time="2024-12-13T13:36:19.272211758Z" level=warning msg="cleaning up after shim disconnected" id=cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e namespace=k8s.io Dec 13 13:36:19.272218 containerd[1699]: time="2024-12-13T13:36:19.272224658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:36:19.273044 containerd[1699]: time="2024-12-13T13:36:19.272754772Z" level=info msg="shim disconnected" id=27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6 namespace=k8s.io Dec 13 13:36:19.273044 containerd[1699]: time="2024-12-13T13:36:19.272804573Z" level=warning msg="cleaning up after shim disconnected" id=27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6 namespace=k8s.io Dec 13 13:36:19.273044 containerd[1699]: time="2024-12-13T13:36:19.272818874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:36:19.301399 containerd[1699]: time="2024-12-13T13:36:19.301304410Z" level=info msg="StopContainer for \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\" returns successfully" Dec 13 13:36:19.302327 containerd[1699]: time="2024-12-13T13:36:19.302284735Z" level=info msg="StopPodSandbox for \"7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb\"" Dec 13 13:36:19.302444 containerd[1699]: time="2024-12-13T13:36:19.302340037Z" level=info msg="Container to stop \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:36:19.305240 containerd[1699]: time="2024-12-13T13:36:19.305119409Z" level=info msg="StopContainer for \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\" returns successfully" Dec 13 13:36:19.305534 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb-shm.mount: Deactivated successfully. Dec 13 13:36:19.305916 containerd[1699]: time="2024-12-13T13:36:19.305666823Z" level=info msg="StopPodSandbox for \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\"" Dec 13 13:36:19.305916 containerd[1699]: time="2024-12-13T13:36:19.305700624Z" level=info msg="Container to stop \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:36:19.305916 containerd[1699]: time="2024-12-13T13:36:19.305741525Z" level=info msg="Container to stop \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:36:19.305916 containerd[1699]: time="2024-12-13T13:36:19.305754425Z" level=info msg="Container to stop \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:36:19.305916 containerd[1699]: time="2024-12-13T13:36:19.305766625Z" level=info msg="Container to stop \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:36:19.305916 containerd[1699]: time="2024-12-13T13:36:19.305780626Z" level=info msg="Container to stop \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:36:19.314660 systemd[1]: cri-containerd-a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94.scope: Deactivated successfully. Dec 13 13:36:19.316140 systemd[1]: cri-containerd-7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb.scope: Deactivated successfully. 
Dec 13 13:36:19.362370 containerd[1699]: time="2024-12-13T13:36:19.362153482Z" level=info msg="shim disconnected" id=7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb namespace=k8s.io Dec 13 13:36:19.362370 containerd[1699]: time="2024-12-13T13:36:19.362354988Z" level=warning msg="cleaning up after shim disconnected" id=7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb namespace=k8s.io Dec 13 13:36:19.362370 containerd[1699]: time="2024-12-13T13:36:19.362371288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:36:19.362370 containerd[1699]: time="2024-12-13T13:36:19.362159983Z" level=info msg="shim disconnected" id=a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94 namespace=k8s.io Dec 13 13:36:19.362370 containerd[1699]: time="2024-12-13T13:36:19.362440090Z" level=warning msg="cleaning up after shim disconnected" id=a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94 namespace=k8s.io Dec 13 13:36:19.362370 containerd[1699]: time="2024-12-13T13:36:19.362449490Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:36:19.384777 containerd[1699]: time="2024-12-13T13:36:19.383650838Z" level=info msg="TearDown network for sandbox \"7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb\" successfully" Dec 13 13:36:19.384777 containerd[1699]: time="2024-12-13T13:36:19.383686939Z" level=info msg="StopPodSandbox for \"7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb\" returns successfully" Dec 13 13:36:19.384777 containerd[1699]: time="2024-12-13T13:36:19.383687339Z" level=info msg="TearDown network for sandbox \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" successfully" Dec 13 13:36:19.384777 containerd[1699]: time="2024-12-13T13:36:19.383937445Z" level=info msg="StopPodSandbox for \"a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94\" returns successfully" Dec 13 13:36:19.434000 kubelet[3370]: I1213 13:36:19.433967 3370 scope.go:117] "RemoveContainer" containerID="cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e" Dec 13 13:36:19.436996 containerd[1699]: time="2024-12-13T13:36:19.436652708Z" level=info msg="RemoveContainer for \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\"" Dec 13 13:36:19.450206 containerd[1699]: time="2024-12-13T13:36:19.450163857Z" level=info msg="RemoveContainer for \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\" returns successfully" Dec 13 13:36:19.450426 kubelet[3370]: I1213 13:36:19.450404 3370 scope.go:117] "RemoveContainer" containerID="cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e" Dec 13 13:36:19.450698 containerd[1699]: time="2024-12-13T13:36:19.450609068Z" level=error msg="ContainerStatus for \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\": not found" Dec 13 13:36:19.450794 kubelet[3370]: E1213 13:36:19.450744 3370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\": not found" containerID="cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e" Dec 13 13:36:19.450877 kubelet[3370]: I1213 13:36:19.450849 3370 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e"} err="failed to get container status \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfb683c43e1028be39449c896def1d9f28b887ecbe04089b17f6666af58c740e\": not found" Dec 13 13:36:19.450877 kubelet[3370]: I1213 13:36:19.450870 3370 scope.go:117] "RemoveContainer" containerID="27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6" Dec 13 13:36:19.451917 containerd[1699]: time="2024-12-13T13:36:19.451882101Z" level=info msg="RemoveContainer for \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\"" Dec 13 13:36:19.462464 containerd[1699]: time="2024-12-13T13:36:19.462434374Z" level=info msg="RemoveContainer for \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\" returns successfully" Dec 13 13:36:19.462618 kubelet[3370]: I1213 13:36:19.462598 3370 scope.go:117] "RemoveContainer" containerID="f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60" Dec 13 13:36:19.463719 containerd[1699]: time="2024-12-13T13:36:19.463661005Z" level=info msg="RemoveContainer for \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\"" Dec 13 13:36:19.473907 containerd[1699]: time="2024-12-13T13:36:19.473876369Z" level=info msg="RemoveContainer for \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\" returns successfully" Dec 13 13:36:19.474084 kubelet[3370]: I1213 13:36:19.474026 3370 scope.go:117] "RemoveContainer" containerID="47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e" Dec 13 13:36:19.475020 containerd[1699]: time="2024-12-13T13:36:19.474984898Z" level=info msg="RemoveContainer for \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\"" Dec 13 13:36:19.486933 containerd[1699]: time="2024-12-13T13:36:19.486900606Z" level=info msg="RemoveContainer for \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\" returns successfully" Dec 13 13:36:19.487097 kubelet[3370]: I1213 13:36:19.487047 3370 scope.go:117] "RemoveContainer" containerID="eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a" Dec 13 13:36:19.487994 containerd[1699]: time="2024-12-13T13:36:19.487967034Z" level=info msg="RemoveContainer for \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\"" Dec 13 13:36:19.496882 containerd[1699]: time="2024-12-13T13:36:19.496851263Z" level=info msg="RemoveContainer for \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\" returns successfully" Dec 13 13:36:19.497012 kubelet[3370]: I1213 13:36:19.496994 3370 scope.go:117] "RemoveContainer" containerID="f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc" Dec 13 13:36:19.497977 containerd[1699]: time="2024-12-13T13:36:19.497944691Z" level=info msg="RemoveContainer for \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\"" Dec 13 13:36:19.506892 containerd[1699]: time="2024-12-13T13:36:19.506862122Z" level=info msg="RemoveContainer for \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\" returns successfully" Dec 13 13:36:19.507049 kubelet[3370]: I1213 13:36:19.507035 3370 scope.go:117] "RemoveContainer" containerID="27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6" Dec 13 13:36:19.507298 containerd[1699]: time="2024-12-13T13:36:19.507260332Z" level=error msg="ContainerStatus for 
\"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\": not found" Dec 13 13:36:19.507463 kubelet[3370]: E1213 13:36:19.507438 3370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\": not found" containerID="27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6" Dec 13 13:36:19.507543 kubelet[3370]: I1213 13:36:19.507498 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6"} err="failed to get container status \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"27b23a85bd5c81f29d609e41505f335b689f47a70e0ddf08ddd000977ca659a6\": not found" Dec 13 13:36:19.507543 kubelet[3370]: I1213 13:36:19.507519 3370 scope.go:117] "RemoveContainer" containerID="f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60" Dec 13 13:36:19.507767 containerd[1699]: time="2024-12-13T13:36:19.507738545Z" level=error msg="ContainerStatus for \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\": not found" Dec 13 13:36:19.507945 kubelet[3370]: E1213 13:36:19.507929 3370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\": not found" containerID="f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60" Dec 13 13:36:19.508020 kubelet[3370]: I1213 13:36:19.507962 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60"} err="failed to get container status \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\": rpc error: code = NotFound desc = an error occurred when try to find container \"f272d5cec8419928ac2e12a314244a51b542ff2b7358d9484d58f16f8f646d60\": not found" Dec 13 13:36:19.508020 kubelet[3370]: I1213 13:36:19.507976 3370 scope.go:117] "RemoveContainer" containerID="47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e" Dec 13 13:36:19.508215 containerd[1699]: time="2024-12-13T13:36:19.508180256Z" level=error msg="ContainerStatus for \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\": not found" Dec 13 13:36:19.508334 kubelet[3370]: E1213 13:36:19.508316 3370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\": not found" containerID="47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e" Dec 13 13:36:19.508417 kubelet[3370]: I1213 13:36:19.508346 3370 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e"} err="failed to get container status \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"47718fd0cdcc989938c623604489e6fc692259c1b602a515a9ff6aa80b639f7e\": not found" Dec 13 13:36:19.508417 kubelet[3370]: I1213 13:36:19.508359 3370 scope.go:117] "RemoveContainer" containerID="eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a" Dec 13 13:36:19.508604 containerd[1699]: time="2024-12-13T13:36:19.508551066Z" level=error msg="ContainerStatus for \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\": not found" Dec 13 13:36:19.508762 kubelet[3370]: E1213 13:36:19.508743 3370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\": not found" containerID="eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a" Dec 13 13:36:19.508824 kubelet[3370]: I1213 13:36:19.508775 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a"} err="failed to get container status \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"eaeb9656011dea70c2868d4c967d94fcdeb86d1748f202d193734b89b824aa5a\": not found" Dec 13 13:36:19.508824 kubelet[3370]: I1213 13:36:19.508789 3370 scope.go:117] "RemoveContainer" containerID="f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc" Dec 13 13:36:19.509059 containerd[1699]: time="2024-12-13T13:36:19.509013977Z" level=error msg="ContainerStatus for \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\": not found" Dec 13 13:36:19.509183 kubelet[3370]: E1213 13:36:19.509164 3370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\": not found" containerID="f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc" Dec 13 13:36:19.509245 kubelet[3370]: I1213 13:36:19.509198 3370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc"} err="failed to get container status \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8ecdd801613422d348a979c7ff5661201912a871bfb6bd1197358b6dcd0f1fc\": not found" Dec 13 13:36:19.569529 kubelet[3370]: I1213 13:36:19.569490 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-lib-modules\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: 
\"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.569696 kubelet[3370]: I1213 13:36:19.569566 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5dml\" (UniqueName: \"kubernetes.io/projected/0a0230fd-9998-4e80-9d4c-76cfd56a5999-kube-api-access-t5dml\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.569696 kubelet[3370]: I1213 13:36:19.569599 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92b6fe37-fed1-4201-a9c0-3adaa630f2a2-cilium-config-path\") pod \"92b6fe37-fed1-4201-a9c0-3adaa630f2a2\" (UID: \"92b6fe37-fed1-4201-a9c0-3adaa630f2a2\") " Dec 13 13:36:19.569696 kubelet[3370]: I1213 13:36:19.569632 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr2tq\" (UniqueName: \"kubernetes.io/projected/92b6fe37-fed1-4201-a9c0-3adaa630f2a2-kube-api-access-xr2tq\") pod \"92b6fe37-fed1-4201-a9c0-3adaa630f2a2\" (UID: \"92b6fe37-fed1-4201-a9c0-3adaa630f2a2\") " Dec 13 13:36:19.569696 kubelet[3370]: I1213 13:36:19.569664 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-hostproc\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.569929 kubelet[3370]: I1213 13:36:19.569699 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a0230fd-9998-4e80-9d4c-76cfd56a5999-clustermesh-secrets\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.569929 kubelet[3370]: I1213 13:36:19.569732 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-host-proc-sys-net\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.569929 kubelet[3370]: I1213 13:36:19.569767 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-cgroup\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.569929 kubelet[3370]: I1213 13:36:19.569804 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-config-path\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.569929 kubelet[3370]: I1213 13:36:19.569853 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a0230fd-9998-4e80-9d4c-76cfd56a5999-hubble-tls\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.569929 kubelet[3370]: I1213 13:36:19.569887 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-bpf-maps\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: 
\"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.570216 kubelet[3370]: I1213 13:36:19.569923 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-host-proc-sys-kernel\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.570216 kubelet[3370]: I1213 13:36:19.569959 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-etc-cni-netd\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.570216 kubelet[3370]: I1213 13:36:19.569993 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-run\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.570216 kubelet[3370]: I1213 13:36:19.570025 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-xtables-lock\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.570216 kubelet[3370]: I1213 13:36:19.570057 3370 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cni-path\") pod \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\" (UID: \"0a0230fd-9998-4e80-9d4c-76cfd56a5999\") " Dec 13 13:36:19.570216 kubelet[3370]: I1213 13:36:19.570123 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cni-path" (OuterVolumeSpecName: "cni-path") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.570567 kubelet[3370]: I1213 13:36:19.569500 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.576465 kubelet[3370]: I1213 13:36:19.574298 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:36:19.577171 kubelet[3370]: I1213 13:36:19.577146 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.577295 kubelet[3370]: I1213 13:36:19.577276 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.577485 kubelet[3370]: I1213 13:36:19.577371 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.577598 kubelet[3370]: I1213 13:36:19.577579 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.577681 kubelet[3370]: I1213 13:36:19.577667 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.577826 kubelet[3370]: I1213 13:36:19.577809 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0230fd-9998-4e80-9d4c-76cfd56a5999-kube-api-access-t5dml" (OuterVolumeSpecName: "kube-api-access-t5dml") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "kube-api-access-t5dml". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:36:19.578305 kubelet[3370]: I1213 13:36:19.578277 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92b6fe37-fed1-4201-a9c0-3adaa630f2a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92b6fe37-fed1-4201-a9c0-3adaa630f2a2" (UID: "92b6fe37-fed1-4201-a9c0-3adaa630f2a2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:36:19.578389 kubelet[3370]: I1213 13:36:19.578335 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.578389 kubelet[3370]: I1213 13:36:19.578361 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.578539 kubelet[3370]: I1213 13:36:19.578416 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-hostproc" (OuterVolumeSpecName: "hostproc") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:36:19.580984 kubelet[3370]: I1213 13:36:19.580947 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0230fd-9998-4e80-9d4c-76cfd56a5999-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:36:19.581363 kubelet[3370]: I1213 13:36:19.581338 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0230fd-9998-4e80-9d4c-76cfd56a5999-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0a0230fd-9998-4e80-9d4c-76cfd56a5999" (UID: "0a0230fd-9998-4e80-9d4c-76cfd56a5999"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 13:36:19.581812 kubelet[3370]: I1213 13:36:19.581774 3370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b6fe37-fed1-4201-a9c0-3adaa630f2a2-kube-api-access-xr2tq" (OuterVolumeSpecName: "kube-api-access-xr2tq") pod "92b6fe37-fed1-4201-a9c0-3adaa630f2a2" (UID: "92b6fe37-fed1-4201-a9c0-3adaa630f2a2"). InnerVolumeSpecName "kube-api-access-xr2tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:36:19.670567 kubelet[3370]: I1213 13:36:19.670180 3370 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-bpf-maps\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.670567 kubelet[3370]: I1213 13:36:19.670229 3370 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-host-proc-sys-kernel\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.670567 kubelet[3370]: I1213 13:36:19.670247 3370 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-etc-cni-netd\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.670567 kubelet[3370]: I1213 13:36:19.670264 3370 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-run\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.670567 kubelet[3370]: I1213 13:36:19.670283 3370 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-xtables-lock\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.670567 kubelet[3370]: I1213 13:36:19.670299 3370 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cni-path\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.670567 kubelet[3370]: I1213 
13:36:19.670316 3370 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xr2tq\" (UniqueName: \"kubernetes.io/projected/92b6fe37-fed1-4201-a9c0-3adaa630f2a2-kube-api-access-xr2tq\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.670567 kubelet[3370]: I1213 13:36:19.670334 3370 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-lib-modules\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.671078 kubelet[3370]: I1213 13:36:19.670353 3370 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t5dml\" (UniqueName: \"kubernetes.io/projected/0a0230fd-9998-4e80-9d4c-76cfd56a5999-kube-api-access-t5dml\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.671078 kubelet[3370]: I1213 13:36:19.670371 3370 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92b6fe37-fed1-4201-a9c0-3adaa630f2a2-cilium-config-path\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.671078 kubelet[3370]: I1213 13:36:19.670412 3370 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-hostproc\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.671078 kubelet[3370]: I1213 13:36:19.670434 3370 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a0230fd-9998-4e80-9d4c-76cfd56a5999-clustermesh-secrets\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.671078 kubelet[3370]: I1213 13:36:19.670454 3370 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-host-proc-sys-net\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.671078 kubelet[3370]: I1213 13:36:19.670475 3370 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-config-path\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.671078 kubelet[3370]: I1213 13:36:19.670497 3370 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a0230fd-9998-4e80-9d4c-76cfd56a5999-hubble-tls\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.671078 kubelet[3370]: I1213 13:36:19.670516 3370 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a0230fd-9998-4e80-9d4c-76cfd56a5999-cilium-cgroup\") on node \"ci-4186.0.0-a-a6ca590029\" DevicePath \"\"" Dec 13 13:36:19.739947 systemd[1]: Removed slice kubepods-besteffort-pod92b6fe37_fed1_4201_a9c0_3adaa630f2a2.slice - libcontainer container kubepods-besteffort-pod92b6fe37_fed1_4201_a9c0_3adaa630f2a2.slice. Dec 13 13:36:19.745940 systemd[1]: Removed slice kubepods-burstable-pod0a0230fd_9998_4e80_9d4c_76cfd56a5999.slice - libcontainer container kubepods-burstable-pod0a0230fd_9998_4e80_9d4c_76cfd56a5999.slice. Dec 13 13:36:19.746211 systemd[1]: kubepods-burstable-pod0a0230fd_9998_4e80_9d4c_76cfd56a5999.slice: Consumed 7.157s CPU time. 
Dec 13 13:36:19.980477 kubelet[3370]: I1213 13:36:19.979465 3370 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0a0230fd-9998-4e80-9d4c-76cfd56a5999" path="/var/lib/kubelet/pods/0a0230fd-9998-4e80-9d4c-76cfd56a5999/volumes" Dec 13 13:36:19.980477 kubelet[3370]: I1213 13:36:19.980232 3370 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92b6fe37-fed1-4201-a9c0-3adaa630f2a2" path="/var/lib/kubelet/pods/92b6fe37-fed1-4201-a9c0-3adaa630f2a2/volumes" Dec 13 13:36:20.133549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e470aec0964219de07a118bbc0e309aca414d544925741a4c96ec7b39f168fb-rootfs.mount: Deactivated successfully. Dec 13 13:36:20.133672 systemd[1]: var-lib-kubelet-pods-92b6fe37\x2dfed1\x2d4201\x2da9c0\x2d3adaa630f2a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxr2tq.mount: Deactivated successfully. Dec 13 13:36:20.133764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94-rootfs.mount: Deactivated successfully. Dec 13 13:36:20.133849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a798f10d6ba42c9fd2749f3c4de4c50cf2a438e66c1b475a295dbbe0debf8f94-shm.mount: Deactivated successfully. Dec 13 13:36:20.133937 systemd[1]: var-lib-kubelet-pods-0a0230fd\x2d9998\x2d4e80\x2d9d4c\x2d76cfd56a5999-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt5dml.mount: Deactivated successfully. Dec 13 13:36:20.134024 systemd[1]: var-lib-kubelet-pods-0a0230fd\x2d9998\x2d4e80\x2d9d4c\x2d76cfd56a5999-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 13:36:20.134116 systemd[1]: var-lib-kubelet-pods-0a0230fd\x2d9998\x2d4e80\x2d9d4c\x2d76cfd56a5999-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 13:36:21.165147 sshd[4956]: Connection closed by 10.200.16.10 port 51096 Dec 13 13:36:21.166024 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Dec 13 13:36:21.169271 systemd[1]: sshd@22-10.200.8.13:22-10.200.16.10:51096.service: Deactivated successfully. Dec 13 13:36:21.171739 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 13:36:21.173525 systemd-logind[1688]: Session 25 logged out. Waiting for processes to exit. Dec 13 13:36:21.174775 systemd-logind[1688]: Removed session 25. Dec 13 13:36:21.293701 systemd[1]: Started sshd@23-10.200.8.13:22-10.200.16.10:59526.service - OpenSSH per-connection server daemon (10.200.16.10:59526). Dec 13 13:36:22.005439 sshd[5116]: Accepted publickey for core from 10.200.16.10 port 59526 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:36:22.007159 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:36:22.012281 systemd-logind[1688]: New session 26 of user core. Dec 13 13:36:22.015552 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 13:36:22.916929 kubelet[3370]: I1213 13:36:22.916821 3370 topology_manager.go:215] "Topology Admit Handler" podUID="ce90ed10-38c2-4ce2-ad87-30031f3e8cb1" podNamespace="kube-system" podName="cilium-q2g8q" Dec 13 13:36:22.919131 kubelet[3370]: E1213 13:36:22.917535 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a0230fd-9998-4e80-9d4c-76cfd56a5999" containerName="apply-sysctl-overwrites" Dec 13 13:36:22.919131 kubelet[3370]: E1213 13:36:22.917568 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a0230fd-9998-4e80-9d4c-76cfd56a5999" containerName="mount-bpf-fs" Dec 13 13:36:22.919131 kubelet[3370]: E1213 13:36:22.917599 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92b6fe37-fed1-4201-a9c0-3adaa630f2a2" containerName="cilium-operator" Dec 13 13:36:22.919131 kubelet[3370]: E1213 13:36:22.917613 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a0230fd-9998-4e80-9d4c-76cfd56a5999" containerName="mount-cgroup" Dec 13 13:36:22.919131 kubelet[3370]: E1213 13:36:22.917623 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a0230fd-9998-4e80-9d4c-76cfd56a5999" containerName="clean-cilium-state" Dec 13 13:36:22.919131 kubelet[3370]: E1213 13:36:22.917632 3370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a0230fd-9998-4e80-9d4c-76cfd56a5999" containerName="cilium-agent" Dec 13 13:36:22.919131 kubelet[3370]: I1213 13:36:22.917693 3370 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0230fd-9998-4e80-9d4c-76cfd56a5999" containerName="cilium-agent" Dec 13 13:36:22.919131 kubelet[3370]: I1213 13:36:22.917704 3370 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b6fe37-fed1-4201-a9c0-3adaa630f2a2" containerName="cilium-operator" Dec 13 13:36:22.931443 systemd[1]: Created slice kubepods-burstable-podce90ed10_38c2_4ce2_ad87_30031f3e8cb1.slice - libcontainer container kubepods-burstable-podce90ed10_38c2_4ce2_ad87_30031f3e8cb1.slice. Dec 13 13:36:22.976932 sshd[5118]: Connection closed by 10.200.16.10 port 59526 Dec 13 13:36:22.978081 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Dec 13 13:36:22.982623 systemd[1]: sshd@23-10.200.8.13:22-10.200.16.10:59526.service: Deactivated successfully. Dec 13 13:36:22.984738 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 13:36:22.985599 systemd-logind[1688]: Session 26 logged out. Waiting for processes to exit. Dec 13 13:36:22.987003 systemd-logind[1688]: Removed session 26. 
Dec 13 13:36:23.086542 kubelet[3370]: I1213 13:36:23.085912 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-etc-cni-netd\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086542 kubelet[3370]: I1213 13:36:23.085970 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-clustermesh-secrets\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086542 kubelet[3370]: I1213 13:36:23.086000 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-hostproc\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086542 kubelet[3370]: I1213 13:36:23.086027 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-xtables-lock\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086542 kubelet[3370]: I1213 13:36:23.086053 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2rgw\" (UniqueName: \"kubernetes.io/projected/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-kube-api-access-s2rgw\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086542 kubelet[3370]: I1213 13:36:23.086079 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-bpf-maps\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086949 kubelet[3370]: I1213 13:36:23.086102 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-cni-path\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086949 kubelet[3370]: I1213 13:36:23.086128 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-cilium-run\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086949 kubelet[3370]: I1213 13:36:23.086155 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-cilium-ipsec-secrets\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086949 kubelet[3370]: I1213 13:36:23.086183 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-host-proc-sys-net\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086949 kubelet[3370]: I1213 13:36:23.086209 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-lib-modules\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.086949 kubelet[3370]: I1213 13:36:23.086234 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-host-proc-sys-kernel\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.087138 kubelet[3370]: I1213 13:36:23.086257 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-cilium-cgroup\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.087138 kubelet[3370]: I1213 13:36:23.086286 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-cilium-config-path\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.087138 kubelet[3370]: I1213 13:36:23.086309 3370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce90ed10-38c2-4ce2-ad87-30031f3e8cb1-hubble-tls\") pod \"cilium-q2g8q\" (UID: \"ce90ed10-38c2-4ce2-ad87-30031f3e8cb1\") " pod="kube-system/cilium-q2g8q" Dec 13 13:36:23.109694 systemd[1]: Started sshd@24-10.200.8.13:22-10.200.16.10:59530.service - OpenSSH per-connection server daemon (10.200.16.10:59530). Dec 13 13:36:23.235558 containerd[1699]: time="2024-12-13T13:36:23.235022363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2g8q,Uid:ce90ed10-38c2-4ce2-ad87-30031f3e8cb1,Namespace:kube-system,Attempt:0,}" Dec 13 13:36:23.277089 containerd[1699]: time="2024-12-13T13:36:23.276989647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:36:23.277089 containerd[1699]: time="2024-12-13T13:36:23.277035749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:36:23.277089 containerd[1699]: time="2024-12-13T13:36:23.277049449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:36:23.277496 containerd[1699]: time="2024-12-13T13:36:23.277159052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:36:23.294561 systemd[1]: Started cri-containerd-85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6.scope - libcontainer container 85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6. 
Dec 13 13:36:23.317648 containerd[1699]: time="2024-12-13T13:36:23.317511995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2g8q,Uid:ce90ed10-38c2-4ce2-ad87-30031f3e8cb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\"" Dec 13 13:36:23.320867 containerd[1699]: time="2024-12-13T13:36:23.320625975Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:36:23.354686 containerd[1699]: time="2024-12-13T13:36:23.354630654Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"338eb3f788d434376178d6a5ceac4ccfba5570d6ba833e5c5d5549d2127f66a7\"" Dec 13 13:36:23.355524 containerd[1699]: time="2024-12-13T13:36:23.355180568Z" level=info msg="StartContainer for \"338eb3f788d434376178d6a5ceac4ccfba5570d6ba833e5c5d5549d2127f66a7\"" Dec 13 13:36:23.384553 systemd[1]: Started cri-containerd-338eb3f788d434376178d6a5ceac4ccfba5570d6ba833e5c5d5549d2127f66a7.scope - libcontainer container 338eb3f788d434376178d6a5ceac4ccfba5570d6ba833e5c5d5549d2127f66a7. Dec 13 13:36:23.423329 containerd[1699]: time="2024-12-13T13:36:23.423274028Z" level=info msg="StartContainer for \"338eb3f788d434376178d6a5ceac4ccfba5570d6ba833e5c5d5549d2127f66a7\" returns successfully" Dec 13 13:36:23.427524 systemd[1]: cri-containerd-338eb3f788d434376178d6a5ceac4ccfba5570d6ba833e5c5d5549d2127f66a7.scope: Deactivated successfully. Dec 13 13:36:23.534146 containerd[1699]: time="2024-12-13T13:36:23.534067491Z" level=info msg="shim disconnected" id=338eb3f788d434376178d6a5ceac4ccfba5570d6ba833e5c5d5549d2127f66a7 namespace=k8s.io Dec 13 13:36:23.534405 containerd[1699]: time="2024-12-13T13:36:23.534225395Z" level=warning msg="cleaning up after shim disconnected" id=338eb3f788d434376178d6a5ceac4ccfba5570d6ba833e5c5d5549d2127f66a7 namespace=k8s.io Dec 13 13:36:23.534405 containerd[1699]: time="2024-12-13T13:36:23.534245095Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:36:23.830471 sshd[5130]: Accepted publickey for core from 10.200.16.10 port 59530 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:36:23.832051 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:36:23.837102 systemd-logind[1688]: New session 27 of user core. Dec 13 13:36:23.843576 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 13:36:24.097320 kubelet[3370]: E1213 13:36:24.097199 3370 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:36:24.327995 sshd[5241]: Connection closed by 10.200.16.10 port 59530 Dec 13 13:36:24.329080 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Dec 13 13:36:24.332507 systemd[1]: sshd@24-10.200.8.13:22-10.200.16.10:59530.service: Deactivated successfully. Dec 13 13:36:24.335149 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 13:36:24.337295 systemd-logind[1688]: Session 27 logged out. Waiting for processes to exit. Dec 13 13:36:24.338663 systemd-logind[1688]: Removed session 27. 
Dec 13 13:36:24.459323 containerd[1699]: time="2024-12-13T13:36:24.459014993Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:36:24.466491 systemd[1]: Started sshd@25-10.200.8.13:22-10.200.16.10:59544.service - OpenSSH per-connection server daemon (10.200.16.10:59544). Dec 13 13:36:24.538538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328148637.mount: Deactivated successfully. Dec 13 13:36:24.549662 containerd[1699]: time="2024-12-13T13:36:24.549614134Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15\"" Dec 13 13:36:24.550710 containerd[1699]: time="2024-12-13T13:36:24.550677361Z" level=info msg="StartContainer for \"212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15\"" Dec 13 13:36:24.590548 systemd[1]: Started cri-containerd-212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15.scope - libcontainer container 212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15. Dec 13 13:36:24.628583 containerd[1699]: time="2024-12-13T13:36:24.628373369Z" level=info msg="StartContainer for \"212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15\" returns successfully" Dec 13 13:36:24.632467 systemd[1]: cri-containerd-212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15.scope: Deactivated successfully. Dec 13 13:36:24.674309 containerd[1699]: time="2024-12-13T13:36:24.674154352Z" level=info msg="shim disconnected" id=212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15 namespace=k8s.io Dec 13 13:36:24.674309 containerd[1699]: time="2024-12-13T13:36:24.674242554Z" level=warning msg="cleaning up after shim disconnected" id=212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15 namespace=k8s.io Dec 13 13:36:24.674309 containerd[1699]: time="2024-12-13T13:36:24.674271155Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:36:24.975778 kubelet[3370]: E1213 13:36:24.975687 3370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vxxdr" podUID="9c3c3287-1ca6-4898-91c3-f175caa2fdb0" Dec 13 13:36:25.188590 sshd[5248]: Accepted publickey for core from 10.200.16.10 port 59544 ssh2: RSA SHA256:wsnkSdHpjFYzphJ5WvtH4ivsqXum96h1Xr1m8Hh3RYg Dec 13 13:36:25.190296 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:36:25.194969 systemd[1]: run-containerd-runc-k8s.io-212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15-runc.rmZTWG.mount: Deactivated successfully. Dec 13 13:36:25.195087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-212d6d77a0ae6719c78b7fe6f6ea944f8699fbb591733e04f35ad06e55134d15-rootfs.mount: Deactivated successfully. Dec 13 13:36:25.198910 systemd-logind[1688]: New session 28 of user core. Dec 13 13:36:25.207775 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 13 13:36:25.462263 containerd[1699]: time="2024-12-13T13:36:25.462042512Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:36:25.505835 containerd[1699]: time="2024-12-13T13:36:25.505786743Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4\"" Dec 13 13:36:25.507714 containerd[1699]: time="2024-12-13T13:36:25.506317756Z" level=info msg="StartContainer for \"f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4\"" Dec 13 13:36:25.539533 systemd[1]: Started cri-containerd-f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4.scope - libcontainer container f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4. Dec 13 13:36:25.574780 systemd[1]: cri-containerd-f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4.scope: Deactivated successfully. Dec 13 13:36:25.579833 containerd[1699]: time="2024-12-13T13:36:25.579769355Z" level=info msg="StartContainer for \"f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4\" returns successfully" Dec 13 13:36:25.624616 containerd[1699]: time="2024-12-13T13:36:25.624202603Z" level=info msg="shim disconnected" id=f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4 namespace=k8s.io Dec 13 13:36:25.624616 containerd[1699]: time="2024-12-13T13:36:25.624525211Z" level=warning msg="cleaning up after shim disconnected" id=f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4 namespace=k8s.io Dec 13 13:36:25.624616 containerd[1699]: time="2024-12-13T13:36:25.624538911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:36:25.643075 containerd[1699]: time="2024-12-13T13:36:25.642921287Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:36:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:36:26.195765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f70114c8679943949f419b887beb3acf6a4a681a90dcbe563ffaa5e58ece66b4-rootfs.mount: Deactivated successfully. Dec 13 13:36:26.467252 containerd[1699]: time="2024-12-13T13:36:26.466951981Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:36:26.498710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810374170.mount: Deactivated successfully. Dec 13 13:36:26.507400 containerd[1699]: time="2024-12-13T13:36:26.507160620Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d\"" Dec 13 13:36:26.508765 containerd[1699]: time="2024-12-13T13:36:26.508730560Z" level=info msg="StartContainer for \"1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d\"" Dec 13 13:36:26.541563 systemd[1]: Started cri-containerd-1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d.scope - libcontainer container 1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d. 
Dec 13 13:36:26.563967 systemd[1]: cri-containerd-1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d.scope: Deactivated successfully. Dec 13 13:36:26.571424 containerd[1699]: time="2024-12-13T13:36:26.571205675Z" level=info msg="StartContainer for \"1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d\" returns successfully" Dec 13 13:36:26.602653 containerd[1699]: time="2024-12-13T13:36:26.602579385Z" level=info msg="shim disconnected" id=1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d namespace=k8s.io Dec 13 13:36:26.602653 containerd[1699]: time="2024-12-13T13:36:26.602651587Z" level=warning msg="cleaning up after shim disconnected" id=1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d namespace=k8s.io Dec 13 13:36:26.602927 containerd[1699]: time="2024-12-13T13:36:26.602662288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:36:26.976203 kubelet[3370]: E1213 13:36:26.976147 3370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vxxdr" podUID="9c3c3287-1ca6-4898-91c3-f175caa2fdb0" Dec 13 13:36:27.195849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cd9dcd348853901cac9e86da2a9a2559668002aa10b7d55cf372689f44edc8d-rootfs.mount: Deactivated successfully. Dec 13 13:36:27.474227 containerd[1699]: time="2024-12-13T13:36:27.474152202Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:36:27.512444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2944952461.mount: Deactivated successfully. Dec 13 13:36:27.521867 containerd[1699]: time="2024-12-13T13:36:27.521820533Z" level=info msg="CreateContainer within sandbox \"85a7c69cebec41b84a5547f406b85684aa2d8c46af4ce6cec944ccab79a98fd6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"14c7b98073167118e217e5f851e6bc10d7c7df6db201f47e5631f7f96144967e\"" Dec 13 13:36:27.522631 containerd[1699]: time="2024-12-13T13:36:27.522366947Z" level=info msg="StartContainer for \"14c7b98073167118e217e5f851e6bc10d7c7df6db201f47e5631f7f96144967e\"" Dec 13 13:36:27.561533 systemd[1]: Started cri-containerd-14c7b98073167118e217e5f851e6bc10d7c7df6db201f47e5631f7f96144967e.scope - libcontainer container 14c7b98073167118e217e5f851e6bc10d7c7df6db201f47e5631f7f96144967e. 
Dec 13 13:36:27.592954 containerd[1699]: time="2024-12-13T13:36:27.592899468Z" level=info msg="StartContainer for \"14c7b98073167118e217e5f851e6bc10d7c7df6db201f47e5631f7f96144967e\" returns successfully" Dec 13 13:36:27.970461 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 13:36:28.492602 kubelet[3370]: I1213 13:36:28.491698 3370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-q2g8q" podStartSLOduration=6.491644279 podStartE2EDuration="6.491644279s" podCreationTimestamp="2024-12-13 13:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:36:28.491207267 +0000 UTC m=+194.626897112" watchObservedRunningTime="2024-12-13 13:36:28.491644279 +0000 UTC m=+194.627334224" Dec 13 13:36:28.975811 kubelet[3370]: E1213 13:36:28.974980 3370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vxxdr" podUID="9c3c3287-1ca6-4898-91c3-f175caa2fdb0" Dec 13 13:36:28.976239 kubelet[3370]: I1213 13:36:28.976208 3370 setters.go:568] "Node became not ready" node="ci-4186.0.0-a-a6ca590029" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T13:36:28Z","lastTransitionTime":"2024-12-13T13:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 13:36:29.907572 kubelet[3370]: E1213 13:36:29.907406 3370 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55488->127.0.0.1:35339: write tcp 127.0.0.1:55488->127.0.0.1:35339: write: broken pipe Dec 13 13:36:30.794679 systemd-networkd[1333]: lxc_health: Link UP Dec 13 13:36:30.803047 systemd-networkd[1333]: lxc_health: Gained carrier Dec 13 13:36:32.629934 systemd-networkd[1333]: lxc_health: Gained IPv6LL Dec 13 13:36:34.243225 systemd[1]: run-containerd-runc-k8s.io-14c7b98073167118e217e5f851e6bc10d7c7df6db201f47e5631f7f96144967e-runc.lfrYZ8.mount: Deactivated successfully. Dec 13 13:36:36.541281 sshd[5309]: Connection closed by 10.200.16.10 port 59544 Dec 13 13:36:36.542270 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Dec 13 13:36:36.547190 systemd[1]: sshd@25-10.200.8.13:22-10.200.16.10:59544.service: Deactivated successfully. Dec 13 13:36:36.549999 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 13:36:36.551056 systemd-logind[1688]: Session 28 logged out. Waiting for processes to exit. Dec 13 13:36:36.552210 systemd-logind[1688]: Removed session 28.