May 27 17:45:57.939190 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 15:32:02 -00 2025
May 27 17:45:57.939218 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:45:57.939229 kernel: BIOS-provided physical RAM map:
May 27 17:45:57.939236 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 17:45:57.939243 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 27 17:45:57.939250 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 27 17:45:57.939259 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
May 27 17:45:57.939266 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable
May 27 17:45:57.939273 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data
May 27 17:45:57.939280 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 27 17:45:57.939287 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 27 17:45:57.939294 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 27 17:45:57.939301 kernel: printk: legacy bootconsole [earlyser0] enabled
May 27 17:45:57.939309 kernel: NX (Execute Disable) protection: active
May 27 17:45:57.939317 kernel: APIC: Static calls initialized
May 27 17:45:57.939324 kernel: efi: EFI v2.7 by Microsoft
May 27 17:45:57.939331 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebaca98 RNG=0x3ffd2018
May 27 17:45:57.939338 kernel: random: crng init done
May 27 17:45:57.939345 kernel: secureboot: Secure boot disabled
May 27 17:45:57.939354 kernel: SMBIOS 3.1.0 present.
May 27 17:45:57.939365 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/21/2024
May 27 17:45:57.939377 kernel: DMI: Memory slots populated: 2/2
May 27 17:45:57.939386 kernel: Hypervisor detected: Microsoft Hyper-V
May 27 17:45:57.939394 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
May 27 17:45:57.939401 kernel: Hyper-V: Nested features: 0x3e0101
May 27 17:45:57.939413 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 27 17:45:57.939421 kernel: Hyper-V: Using hypercall for remote TLB flush
May 27 17:45:57.939429 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 27 17:45:57.939436 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 27 17:45:57.939444 kernel: tsc: Detected 2300.000 MHz processor
May 27 17:45:57.939452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 17:45:57.939461 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 17:45:57.939469 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
May 27 17:45:57.939479 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 17:45:57.939487 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 17:45:57.939495 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
May 27 17:45:57.939503 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
May 27 17:45:57.939511 kernel: Using GB pages for direct mapping
May 27 17:45:57.939519 kernel: ACPI: Early table checksum verification disabled
May 27 17:45:57.939532 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 27 17:45:57.939543 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:45:57.939553 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:45:57.939561 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 27 17:45:57.939569 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 27 17:45:57.939577 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:45:57.939586 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:45:57.939595 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:45:57.939603 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
May 27 17:45:57.939611 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
May 27 17:45:57.939618 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 27 17:45:57.939625 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 27 17:45:57.939632 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
May 27 17:45:57.939640 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 27 17:45:57.939647 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 27 17:45:57.939654 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 27 17:45:57.939662 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 27 17:45:57.939669 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
May 27 17:45:57.939677 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
May 27 17:45:57.939684 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 27 17:45:57.939691 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
May 27 17:45:57.939698 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
May 27 17:45:57.939706 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
May 27 17:45:57.939713 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
May 27 17:45:57.939720 kernel: Zone ranges:
May 27 17:45:57.939729 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 17:45:57.939736 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 17:45:57.939743 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 27 17:45:57.939750 kernel: Device empty
May 27 17:45:57.939757 kernel: Movable zone start for each node
May 27 17:45:57.939764 kernel: Early memory node ranges
May 27 17:45:57.940627 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 27 17:45:57.940635 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 27 17:45:57.940642 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff]
May 27 17:45:57.940651 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 27 17:45:57.940658 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 27 17:45:57.940665 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 27 17:45:57.940672 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 17:45:57.940679 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 27 17:45:57.940686 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
May 27 17:45:57.940692 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges
May 27 17:45:57.940699 kernel: ACPI: PM-Timer IO Port: 0x408
May 27 17:45:57.940706 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 17:45:57.940714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 17:45:57.940720 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 17:45:57.940727 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 27 17:45:57.940734 kernel: TSC deadline timer available
May 27 17:45:57.940741 kernel: CPU topo: Max. logical packages: 1
May 27 17:45:57.940748 kernel: CPU topo: Max. logical dies: 1
May 27 17:45:57.940754 kernel: CPU topo: Max. dies per package: 1
May 27 17:45:57.940761 kernel: CPU topo: Max. threads per core: 2
May 27 17:45:57.940802 kernel: CPU topo: Num. cores per package: 1
May 27 17:45:57.940811 kernel: CPU topo: Num. threads per package: 2
May 27 17:45:57.940818 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 17:45:57.940825 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 27 17:45:57.940831 kernel: Booting paravirtualized kernel on Hyper-V
May 27 17:45:57.940838 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 17:45:57.940846 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 17:45:57.940852 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 17:45:57.940859 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 17:45:57.940865 kernel: pcpu-alloc: [0] 0 1
May 27 17:45:57.940873 kernel: Hyper-V: PV spinlocks enabled
May 27 17:45:57.940880 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 17:45:57.940889 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:45:57.940897 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:45:57.940904 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 27 17:45:57.940911 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:45:57.940918 kernel: Fallback order for Node 0: 0
May 27 17:45:57.940924 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2096877
May 27 17:45:57.940932 kernel: Policy zone: Normal
May 27 17:45:57.940939 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:45:57.940945 kernel: software IO TLB: area num 2.
May 27 17:45:57.940952 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 17:45:57.940959 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 17:45:57.940966 kernel: ftrace: allocated 157 pages with 5 groups
May 27 17:45:57.940973 kernel: Dynamic Preempt: voluntary
May 27 17:45:57.940980 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:45:57.940987 kernel: rcu: RCU event tracing is enabled.
May 27 17:45:57.940996 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 17:45:57.941008 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:45:57.941015 kernel: Rude variant of Tasks RCU enabled.
May 27 17:45:57.941024 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:45:57.941032 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:45:57.941039 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 17:45:57.941046 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:45:57.941054 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:45:57.941061 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:45:57.941068 kernel: Using NULL legacy PIC
May 27 17:45:57.941076 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 27 17:45:57.941084 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:45:57.941091 kernel: Console: colour dummy device 80x25
May 27 17:45:57.941098 kernel: printk: legacy console [tty1] enabled
May 27 17:45:57.941106 kernel: printk: legacy console [ttyS0] enabled
May 27 17:45:57.941113 kernel: printk: legacy bootconsole [earlyser0] disabled
May 27 17:45:57.941121 kernel: ACPI: Core revision 20240827
May 27 17:45:57.941129 kernel: Failed to register legacy timer interrupt
May 27 17:45:57.941137 kernel: APIC: Switch to symmetric I/O mode setup
May 27 17:45:57.941144 kernel: x2apic enabled
May 27 17:45:57.941151 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 17:45:57.941158 kernel: Hyper-V: Host Build 10.0.26100.1221-1-0
May 27 17:45:57.941165 kernel: Hyper-V: enabling crash_kexec_post_notifiers
May 27 17:45:57.941172 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
May 27 17:45:57.941180 kernel: Hyper-V: Using IPI hypercalls
May 27 17:45:57.941187 kernel: APIC: send_IPI() replaced with hv_send_ipi()
May 27 17:45:57.941196 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
May 27 17:45:57.941203 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
May 27 17:45:57.941211 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
May 27 17:45:57.941218 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
May 27 17:45:57.941225 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
May 27 17:45:57.941232 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
May 27 17:45:57.941239 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
May 27 17:45:57.941247 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 17:45:57.941254 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 27 17:45:57.941263 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 27 17:45:57.941271 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 17:45:57.941278 kernel: Spectre V2 : Mitigation: Retpolines
May 27 17:45:57.941285 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 17:45:57.941293 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 27 17:45:57.941300 kernel: RETBleed: Vulnerable
May 27 17:45:57.941307 kernel: Speculative Store Bypass: Vulnerable
May 27 17:45:57.941314 kernel: ITS: Mitigation: Aligned branch/return thunks
May 27 17:45:57.941320 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 17:45:57.941328 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 17:45:57.941335 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 17:45:57.941343 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 27 17:45:57.941351 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 27 17:45:57.941358 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 27 17:45:57.941365 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
May 27 17:45:57.941372 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
May 27 17:45:57.941379 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
May 27 17:45:57.941386 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 17:45:57.941393 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 27 17:45:57.941399 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 27 17:45:57.941407 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 27 17:45:57.941415 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
May 27 17:45:57.941422 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
May 27 17:45:57.941430 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
May 27 17:45:57.941437 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
May 27 17:45:57.941444 kernel: Freeing SMP alternatives memory: 32K
May 27 17:45:57.941451 kernel: pid_max: default: 32768 minimum: 301
May 27 17:45:57.941458 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:45:57.941465 kernel: landlock: Up and running.
May 27 17:45:57.941472 kernel: SELinux: Initializing.
May 27 17:45:57.941479 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 17:45:57.941486 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 27 17:45:57.941494 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
May 27 17:45:57.941502 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
May 27 17:45:57.941510 kernel: signal: max sigframe size: 11952
May 27 17:45:57.941517 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:45:57.941525 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:45:57.941532 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:45:57.941540 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 27 17:45:57.941547 kernel: smp: Bringing up secondary CPUs ...
May 27 17:45:57.941554 kernel: smpboot: x86: Booting SMP configuration:
May 27 17:45:57.941561 kernel: .... node #0, CPUs: #1
May 27 17:45:57.941570 kernel: smp: Brought up 1 node, 2 CPUs
May 27 17:45:57.941577 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
May 27 17:45:57.941585 kernel: Memory: 8082308K/8387508K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 299992K reserved, 0K cma-reserved)
May 27 17:45:57.941592 kernel: devtmpfs: initialized
May 27 17:45:57.941600 kernel: x86/mm: Memory block size: 128MB
May 27 17:45:57.941607 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 27 17:45:57.941614 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:45:57.941621 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 17:45:57.941629 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:45:57.941638 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:45:57.941645 kernel: audit: initializing netlink subsys (disabled)
May 27 17:45:57.941652 kernel: audit: type=2000 audit(1748367954.031:1): state=initialized audit_enabled=0 res=1
May 27 17:45:57.941660 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:45:57.941667 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 17:45:57.941674 kernel: cpuidle: using governor menu
May 27 17:45:57.941681 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:45:57.941688 kernel: dca service started, version 1.12.1
May 27 17:45:57.941696 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
May 27 17:45:57.941704 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff]
May 27 17:45:57.941712 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 17:45:57.941719 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:45:57.941727 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:45:57.941734 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:45:57.941741 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:45:57.941748 kernel: ACPI: Added _OSI(Module Device)
May 27 17:45:57.941756 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:45:57.941763 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:45:57.941779 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:45:57.941787 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:45:57.941795 kernel: ACPI: Interpreter enabled
May 27 17:45:57.941802 kernel: ACPI: PM: (supports S0 S5)
May 27 17:45:57.941809 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 17:45:57.941817 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 17:45:57.941824 kernel: PCI: Ignoring E820 reservations for host bridge windows
May 27 17:45:57.941831 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 27 17:45:57.941838 kernel: iommu: Default domain type: Translated
May 27 17:45:57.941846 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 17:45:57.941853 kernel: efivars: Registered efivars operations
May 27 17:45:57.941862 kernel: PCI: Using ACPI for IRQ routing
May 27 17:45:57.941872 kernel: PCI: System does not support PCI
May 27 17:45:57.941879 kernel: vgaarb: loaded
May 27 17:45:57.941885 kernel: clocksource: Switched to clocksource tsc-early
May 27 17:45:57.941892 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:45:57.941898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:45:57.941906 kernel: pnp: PnP ACPI init
May 27 17:45:57.941915 kernel: pnp: PnP ACPI: found 3 devices
May 27 17:45:57.941921 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 17:45:57.941928 kernel: NET: Registered PF_INET protocol family
May 27 17:45:57.941935 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 27 17:45:57.941942 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 27 17:45:57.941950 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:45:57.941957 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:45:57.941965 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 27 17:45:57.941972 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 27 17:45:57.941981 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 17:45:57.941989 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 27 17:45:57.941996 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:45:57.942004 kernel: NET: Registered PF_XDP protocol family
May 27 17:45:57.942012 kernel: PCI: CLS 0 bytes, default 64
May 27 17:45:57.942020 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 17:45:57.942028 kernel: software IO TLB: mapped [mem 0x000000003aa59000-0x000000003ea59000] (64MB)
May 27 17:45:57.942036 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
May 27 17:45:57.942044 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
May 27 17:45:57.942053 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
May 27 17:45:57.942061 kernel: clocksource: Switched to clocksource tsc
May 27 17:45:57.942068 kernel: Initialise system trusted keyrings
May 27 17:45:57.942076 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 27 17:45:57.942083 kernel: Key type asymmetric registered
May 27 17:45:57.942091 kernel: Asymmetric key parser 'x509' registered
May 27 17:45:57.942099 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 17:45:57.942106 kernel: io scheduler mq-deadline registered
May 27 17:45:57.942114 kernel: io scheduler kyber registered
May 27 17:45:57.942123 kernel: io scheduler bfq registered
May 27 17:45:57.942131 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 17:45:57.942139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:45:57.942147 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:45:57.942155 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 27 17:45:57.942162 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
May 27 17:45:57.942170 kernel: i8042: PNP: No PS/2 controller found.
May 27 17:45:57.942297 kernel: rtc_cmos 00:02: registered as rtc0
May 27 17:45:57.942387 kernel: rtc_cmos 00:02: setting system clock to 2025-05-27T17:45:57 UTC (1748367957)
May 27 17:45:57.942448 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 27 17:45:57.942457 kernel: intel_pstate: Intel P-state driver initializing
May 27 17:45:57.942465 kernel: efifb: probing for efifb
May 27 17:45:57.942473 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 27 17:45:57.942480 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 27 17:45:57.942488 kernel: efifb: scrolling: redraw
May 27 17:45:57.942495 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 17:45:57.942504 kernel: Console: switching to colour frame buffer device 128x48
May 27 17:45:57.942511 kernel: fb0: EFI VGA frame buffer device
May 27 17:45:57.942519 kernel: pstore: Using crash dump compression: deflate
May 27 17:45:57.942526 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 17:45:57.942534 kernel: NET: Registered PF_INET6 protocol family
May 27 17:45:57.942541 kernel: Segment Routing with IPv6
May 27 17:45:57.942549 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:45:57.942556 kernel: NET: Registered PF_PACKET protocol family
May 27 17:45:57.942564 kernel: Key type dns_resolver registered
May 27 17:45:57.942572 kernel: IPI shorthand broadcast: enabled
May 27 17:45:57.942580 kernel: sched_clock: Marking stable (2800085590, 90830558)->(3182702724, -291786576)
May 27 17:45:57.942587 kernel: registered taskstats version 1
May 27 17:45:57.942595 kernel: Loading compiled-in X.509 certificates
May 27 17:45:57.942602 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 9507e5c390e18536b38d58c90da64baf0ac9837c'
May 27 17:45:57.942610 kernel: Demotion targets for Node 0: null
May 27 17:45:57.942617 kernel: Key type .fscrypt registered
May 27 17:45:57.942624 kernel: Key type fscrypt-provisioning registered
May 27 17:45:57.942632 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:45:57.942641 kernel: ima: Allocated hash algorithm: sha1
May 27 17:45:57.942648 kernel: ima: No architecture policies found
May 27 17:45:57.942656 kernel: clk: Disabling unused clocks
May 27 17:45:57.942663 kernel: Warning: unable to open an initial console.
May 27 17:45:57.942671 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 17:45:57.942678 kernel: Write protecting the kernel read-only data: 24576k
May 27 17:45:57.942686 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 17:45:57.942693 kernel: Run /init as init process
May 27 17:45:57.942700 kernel: with arguments:
May 27 17:45:57.942708 kernel: /init
May 27 17:45:57.942716 kernel: with environment:
May 27 17:45:57.942723 kernel: HOME=/
May 27 17:45:57.942730 kernel: TERM=linux
May 27 17:45:57.942737 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:45:57.942746 systemd[1]: Successfully made /usr/ read-only.
May 27 17:45:57.942756 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:45:57.942765 systemd[1]: Detected virtualization microsoft.
May 27 17:45:57.942957 systemd[1]: Detected architecture x86-64.
May 27 17:45:57.942974 systemd[1]: Running in initrd.
May 27 17:45:57.942982 systemd[1]: No hostname configured, using default hostname.
May 27 17:45:57.942990 systemd[1]: Hostname set to .
May 27 17:45:57.942997 systemd[1]: Initializing machine ID from random generator.
May 27 17:45:57.943003 systemd[1]: Queued start job for default target initrd.target.
May 27 17:45:57.943011 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:45:57.943019 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:45:57.943030 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:45:57.943041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:45:57.943049 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:45:57.943057 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:45:57.943066 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:45:57.943074 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:45:57.943084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:45:57.943093 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:45:57.943101 systemd[1]: Reached target paths.target - Path Units.
May 27 17:45:57.943109 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:45:57.943117 systemd[1]: Reached target swap.target - Swaps.
May 27 17:45:57.943125 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:45:57.943133 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:45:57.943142 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:45:57.943150 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:45:57.943159 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:45:57.943168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:45:57.943175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:45:57.943184 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:45:57.943192 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:45:57.943200 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:45:57.943209 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:45:57.943216 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:45:57.943224 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:45:57.943234 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:45:57.943242 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:45:57.943250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:45:57.943265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:57.943291 systemd-journald[205]: Collecting audit messages is disabled.
May 27 17:45:57.943314 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:45:57.943324 systemd-journald[205]: Journal started
May 27 17:45:57.943346 systemd-journald[205]: Runtime Journal (/run/log/journal/7ac4aeed471640c1a9a44d181a90eab2) is 8M, max 159M, 151M free.
May 27 17:45:57.946796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:45:57.941885 systemd-modules-load[206]: Inserted module 'overlay'
May 27 17:45:57.952900 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:45:57.954868 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:45:57.963426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:45:57.969871 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:45:57.977788 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:45:57.979400 systemd-modules-load[206]: Inserted module 'br_netfilter'
May 27 17:45:57.982849 kernel: Bridge firewalling registered
May 27 17:45:57.980860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:57.986690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:45:57.987075 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:45:57.990100 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:45:57.990219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:45:57.992878 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:45:57.993869 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:45:57.995915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:45:58.008684 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:45:58.018607 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:45:58.024937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:45:58.026334 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:45:58.036868 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:45:58.044485 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=daa3e2d55cc4a7ff0ec15aa9bb0c07df9999cb4e3041f3adad1b1101efdea101
May 27 17:45:58.078850 systemd-resolved[244]: Positive Trust Anchors:
May 27 17:45:58.078861 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:45:58.078891 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:45:58.096162 systemd-resolved[244]: Defaulting to hostname 'linux'.
May 27 17:45:58.098408 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:45:58.104139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:45:58.112788 kernel: SCSI subsystem initialized
May 27 17:45:58.118783 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:45:58.126791 kernel: iscsi: registered transport (tcp)
May 27 17:45:58.142785 kernel: iscsi: registered transport (qla4xxx)
May 27 17:45:58.142820 kernel: QLogic iSCSI HBA Driver
May 27 17:45:58.153503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:45:58.161440 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:45:58.162892 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:45:58.190977 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:45:58.192428 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:45:58.233784 kernel: raid6: avx512x4 gen() 33436 MB/s
May 27 17:45:58.250780 kernel: raid6: avx512x2 gen() 33692 MB/s
May 27 17:45:58.267778 kernel: raid6: avx512x1 gen() 30260 MB/s
May 27 17:45:58.285779 kernel: raid6: avx2x4 gen() 31338 MB/s
May 27 17:45:58.302780 kernel: raid6: avx2x2 gen() 32166 MB/s
May 27 17:45:58.320133 kernel: raid6: avx2x1 gen() 21260 MB/s
May 27 17:45:58.320153 kernel: raid6: using algorithm avx512x2 gen() 33692 MB/s
May 27 17:45:58.338035 kernel: raid6: .... xor() 37085 MB/s, rmw enabled
May 27 17:45:58.338061 kernel: raid6: using avx512x2 recovery algorithm
May 27 17:45:58.354788 kernel: xor: automatically using best checksumming function avx
May 27 17:45:58.456784 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:45:58.461002 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:45:58.463888 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:45:58.478709 systemd-udevd[453]: Using default interface naming scheme 'v255'.
May 27 17:45:58.482178 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:45:58.489195 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:45:58.508693 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
May 27 17:45:58.523851 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:45:58.527132 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:45:58.569365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:45:58.573937 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:45:58.615797 kernel: cryptd: max_cpu_qlen set to 1000
May 27 17:45:58.618077 kernel: hv_vmbus: Vmbus version:5.3
May 27 17:45:58.633791 kernel: AES CTR mode by8 optimization enabled
May 27 17:45:58.634253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:45:58.634392 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:58.641892 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:58.646065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:58.651612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:45:58.652405 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:58.663195 kernel: pps_core: LinuxPPS API ver. 1 registered
May 27 17:45:58.663225 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 27 17:45:58.662472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:45:58.672803 kernel: PTP clock support registered
May 27 17:45:58.676791 kernel: hv_vmbus: registering driver hyperv_keyboard
May 27 17:45:58.681903 kernel: hv_vmbus: registering driver hv_pci
May 27 17:45:58.687808 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
May 27 17:45:58.691389 kernel: hv_utils: Registering HyperV Utility Driver
May 27 17:45:58.691419 kernel: hv_vmbus: registering driver hv_utils
May 27 17:45:58.696140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:45:58.699305 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
May 27 17:45:58.742836 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
May 27 17:45:58.742990 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
May 27 17:45:58.743095 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
May 27 17:45:58.743181 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
May 27 17:45:58.743288 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
May 27 17:45:58.743377 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link)
May 27 17:45:58.743470 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
May 27 17:45:58.743547 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
May 27 17:45:58.743637 kernel: hv_utils: Shutdown IC version 3.2
May 27 17:45:58.746855 kernel: hv_utils: Heartbeat IC version 3.0
May 27 17:45:58.748788 kernel: hv_utils: TimeSync IC version 4.0
May 27 17:45:58.429854 systemd-resolved[244]: Clock change detected. Flushing caches.
May 27 17:45:58.436096 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 17:45:58.436113 systemd-journald[205]: Time jumped backwards, rotating.
May 27 17:45:58.438579 kernel: hv_vmbus: registering driver hv_storvsc
May 27 17:45:58.445329 kernel: hv_vmbus: registering driver hv_netvsc
May 27 17:45:58.447880 kernel: scsi host0: storvsc_host_t
May 27 17:45:58.448029 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
May 27 17:45:58.455726 kernel: hv_vmbus: registering driver hid_hyperv
May 27 17:45:58.455823 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
May 27 17:45:58.458799 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
May 27 17:45:58.465561 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5234e0c1 (unnamed net_device) (uninitialized): VF slot 1 added
May 27 17:45:58.487647 kernel: nvme nvme0: pci function c05b:00:00.0
May 27 17:45:58.487809 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
May 27 17:45:58.709560 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 27 17:45:58.719563 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:45:58.722755 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
May 27 17:45:58.722967 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 27 17:45:58.724570 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
May 27 17:45:58.734567 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:45:58.747602 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#77 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:45:58.956565 kernel: nvme nvme0: using unchecked data buffer
May 27 17:45:59.116254 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
May 27 17:45:59.165505 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
May 27 17:45:59.174903 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
May 27 17:45:59.192722 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
May 27 17:45:59.197615 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
May 27 17:45:59.201769 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:45:59.202685 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:45:59.202886 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:45:59.202914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:45:59.205667 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:45:59.214037 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:45:59.226766 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:45:59.230574 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:45:59.494266 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
May 27 17:45:59.494436 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
May 27 17:45:59.496896 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
May 27 17:45:59.498290 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
May 27 17:45:59.502582 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
May 27 17:45:59.506568 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
May 27 17:45:59.510957 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
May 27 17:45:59.510980 kernel: pci 7870:00:00.0: enabling Extended Tags
May 27 17:45:59.524592 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
May 27 17:45:59.524754 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
May 27 17:45:59.528685 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
May 27 17:45:59.532457 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
May 27 17:46:00.241232 disk-uuid[676]: The operation has completed successfully.
May 27 17:46:00.244726 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:46:00.289345 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:46:00.289433 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:46:00.319178 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:46:00.336497 sh[718]: Success
May 27 17:46:00.363683 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:46:00.363738 kernel: device-mapper: uevent: version 1.0.3
May 27 17:46:00.365000 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:46:00.372562 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 17:46:00.566303 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:46:00.570307 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:46:00.580157 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:46:00.603250 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:46:00.603283 kernel: BTRFS: device fsid 7caef027-0915-4c01-a3d5-28eff70f7ebd devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (731)
May 27 17:46:00.607098 kernel: BTRFS info (device dm-0): first mount of filesystem 7caef027-0915-4c01-a3d5-28eff70f7ebd
May 27 17:46:00.607124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 17:46:00.608676 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:46:00.880026 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:46:00.881420 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:46:00.881855 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:46:00.883652 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:46:00.888751 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:46:00.914584 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (764)
May 27 17:46:00.918902 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:46:00.918934 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:46:00.920513 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:46:00.951954 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:46:00.955349 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:46:00.963564 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:46:00.964638 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:46:00.966436 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:46:00.986256 systemd-networkd[894]: lo: Link UP
May 27 17:46:00.986264 systemd-networkd[894]: lo: Gained carrier
May 27 17:46:00.987040 systemd-networkd[894]: Enumeration completed
May 27 17:46:00.987298 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:46:00.987301 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:46:00.987628 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:46:00.988340 systemd-networkd[894]: eth0: Link UP
May 27 17:46:00.988479 systemd-networkd[894]: eth0: Gained carrier
May 27 17:46:00.988488 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:46:00.988721 systemd[1]: Reached target network.target - Network.
May 27 17:46:01.003594 systemd-networkd[894]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 27 17:46:01.684818 ignition[902]: Ignition 2.21.0
May 27 17:46:01.684831 ignition[902]: Stage: fetch-offline
May 27 17:46:01.684925 ignition[902]: no configs at "/usr/lib/ignition/base.d"
May 27 17:46:01.684932 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:46:01.685015 ignition[902]: parsed url from cmdline: ""
May 27 17:46:01.685017 ignition[902]: no config URL provided
May 27 17:46:01.685022 ignition[902]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:46:01.690423 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:46:01.685027 ignition[902]: no config at "/usr/lib/ignition/user.ign"
May 27 17:46:01.695282 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 17:46:01.685032 ignition[902]: failed to fetch config: resource requires networking
May 27 17:46:01.685201 ignition[902]: Ignition finished successfully
May 27 17:46:01.713362 ignition[911]: Ignition 2.21.0
May 27 17:46:01.713371 ignition[911]: Stage: fetch
May 27 17:46:01.713573 ignition[911]: no configs at "/usr/lib/ignition/base.d"
May 27 17:46:01.713582 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:46:01.713648 ignition[911]: parsed url from cmdline: ""
May 27 17:46:01.713651 ignition[911]: no config URL provided
May 27 17:46:01.713655 ignition[911]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:46:01.713661 ignition[911]: no config at "/usr/lib/ignition/user.ign"
May 27 17:46:01.713692 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 27 17:46:01.776216 ignition[911]: GET result: OK
May 27 17:46:01.776278 ignition[911]: config has been read from IMDS userdata
May 27 17:46:01.776303 ignition[911]: parsing config with SHA512: d3ed759be0717288a70629d78c9580dd1ee2e1e69f3aafcc295479a5cc9ef46351fd2e07658abd2692d755f0e80ccf4112e7b85ef104dd1f9644ebec6507b2a2
May 27 17:46:01.782372 unknown[911]: fetched base config from "system"
May 27 17:46:01.782381 unknown[911]: fetched base config from "system"
May 27 17:46:01.782712 ignition[911]: fetch: fetch complete
May 27 17:46:01.782385 unknown[911]: fetched user config from "azure"
May 27 17:46:01.782716 ignition[911]: fetch: fetch passed
May 27 17:46:01.785091 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 17:46:01.782754 ignition[911]: Ignition finished successfully
May 27 17:46:01.790678 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:46:01.809154 ignition[918]: Ignition 2.21.0
May 27 17:46:01.809162 ignition[918]: Stage: kargs
May 27 17:46:01.809366 ignition[918]: no configs at "/usr/lib/ignition/base.d"
May 27 17:46:01.809373 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:46:01.812305 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:46:01.810608 ignition[918]: kargs: kargs passed
May 27 17:46:01.810651 ignition[918]: Ignition finished successfully
May 27 17:46:01.819047 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:46:01.836517 ignition[925]: Ignition 2.21.0
May 27 17:46:01.836527 ignition[925]: Stage: disks
May 27 17:46:01.836721 ignition[925]: no configs at "/usr/lib/ignition/base.d"
May 27 17:46:01.838677 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:46:01.836727 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:46:01.841518 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:46:01.837480 ignition[925]: disks: disks passed
May 27 17:46:01.845128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:46:01.837512 ignition[925]: Ignition finished successfully
May 27 17:46:01.849852 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:46:01.852797 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:46:01.857791 systemd[1]: Reached target basic.target - Basic System.
May 27 17:46:01.860606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:46:01.942327 systemd-fsck[934]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
May 27 17:46:01.945589 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:46:01.951627 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:46:02.193562 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bf93e767-f532-4480-b210-a196f7ac181e r/w with ordered data mode. Quota mode: none.
May 27 17:46:02.193760 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:46:02.195833 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:46:02.217560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:46:02.227625 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:46:02.230678 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 27 17:46:02.235300 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:46:02.235334 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:46:02.240023 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:46:02.242332 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:46:02.244620 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (943)
May 27 17:46:02.252103 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:46:02.252137 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:46:02.252150 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:46:02.258815 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:46:02.460952 systemd-networkd[894]: eth0: Gained IPv6LL
May 27 17:46:02.681109 coreos-metadata[945]: May 27 17:46:02.681 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 17:46:02.694143 coreos-metadata[945]: May 27 17:46:02.694 INFO Fetch successful
May 27 17:46:02.695303 coreos-metadata[945]: May 27 17:46:02.694 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 27 17:46:02.702379 coreos-metadata[945]: May 27 17:46:02.702 INFO Fetch successful
May 27 17:46:02.716654 coreos-metadata[945]: May 27 17:46:02.716 INFO wrote hostname ci-4344.0.0-a-927e686d84 to /sysroot/etc/hostname
May 27 17:46:02.719989 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:46:02.827587 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:46:02.858680 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory
May 27 17:46:02.875728 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:46:02.879262 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:46:03.651776 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:46:03.654635 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:46:03.658826 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:46:03.675074 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:46:03.678032 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:46:03.694838 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:46:03.701304 ignition[1062]: INFO : Ignition 2.21.0
May 27 17:46:03.701304 ignition[1062]: INFO : Stage: mount
May 27 17:46:03.706632 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:46:03.706632 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:46:03.706632 ignition[1062]: INFO : mount: mount passed
May 27 17:46:03.706632 ignition[1062]: INFO : Ignition finished successfully
May 27 17:46:03.704622 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:46:03.711625 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:46:03.724660 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:46:03.743558 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1074)
May 27 17:46:03.745559 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem be856aed-e34b-4b7b-be8a-0716b27db212
May 27 17:46:03.745586 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 27 17:46:03.747558 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:46:03.752214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:46:03.773525 ignition[1091]: INFO : Ignition 2.21.0
May 27 17:46:03.774958 ignition[1091]: INFO : Stage: files
May 27 17:46:03.774958 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:46:03.774958 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:46:03.774958 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:46:03.780903 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:46:03.780903 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:46:03.808388 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:46:03.810244 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:46:03.810244 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:46:03.808734 unknown[1091]: wrote ssh authorized keys file for user: core
May 27 17:46:03.815589 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 17:46:03.815589 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 27 17:46:04.125123 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:46:04.319407 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 17:46:04.321616 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 17:46:04.321616 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 27 17:46:04.946948 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 27 17:46:05.313781 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 17:46:05.313781 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:46:05.317739 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:46:05.317739 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:46:05.317739 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:46:05.317739 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:46:05.317739 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:46:05.317739 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:46:05.317739 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:46:05.335584 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:46:05.335584 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:46:05.335584 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 17:46:05.335584 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 17:46:05.335584 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 17:46:05.335584 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 27 17:46:06.191341 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 17:46:07.746743 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 17:46:07.746743 ignition[1091]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 17:46:07.795138 ignition[1091]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:46:07.880294 ignition[1091]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:46:07.880294 ignition[1091]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 17:46:07.880294 ignition[1091]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:46:07.889348 ignition[1091]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:46:07.889348 ignition[1091]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:46:07.889348 ignition[1091]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:46:07.889348 ignition[1091]: INFO : files: files passed
May 27 17:46:07.889348 ignition[1091]: INFO : Ignition finished successfully
May 27 17:46:07.886866 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:46:07.890466 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:46:07.897676 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:46:07.909346 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:46:07.909428 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:46:07.915631 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:46:07.915631 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:46:07.920412 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:46:07.925613 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:46:07.921228 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:46:07.929978 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:46:07.965945 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:46:07.966034 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:46:07.969744 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:46:07.973602 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:46:07.975626 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:46:07.976661 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:46:07.993822 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:46:07.997683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:46:08.008525 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:46:08.009230 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:46:08.015600 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:46:08.016398 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:46:08.016505 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:46:08.023633 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:46:08.024751 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:46:08.026806 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:46:08.028774 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:46:08.032696 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:46:08.036691 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:46:08.040682 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:46:08.044682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:46:08.048667 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:46:08.052709 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:46:08.055159 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:46:08.058656 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:46:08.058784 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:46:08.064636 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:46:08.068696 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:46:08.072637 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:46:08.072786 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:46:08.075122 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:46:08.075231 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:46:08.077589 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:46:08.077698 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:46:08.077906 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:46:08.077997 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:46:08.078221 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 27 17:46:08.078307 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 27 17:46:08.080653 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:46:08.088327 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:46:08.091055 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:46:08.091209 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:46:08.097749 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:46:08.097854 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:46:08.116121 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:46:08.116189 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:46:08.123381 ignition[1145]: INFO : Ignition 2.21.0
May 27 17:46:08.123381 ignition[1145]: INFO : Stage: umount
May 27 17:46:08.123381 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:46:08.123381 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 27 17:46:08.125202 ignition[1145]: INFO : umount: umount passed
May 27 17:46:08.125202 ignition[1145]: INFO : Ignition finished successfully
May 27 17:46:08.124699 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:46:08.124789 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:46:08.134858 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:46:08.134935 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:46:08.135433 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:46:08.135471 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:46:08.135691 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 17:46:08.135721 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 17:46:08.135954 systemd[1]: Stopped target network.target - Network.
May 27 17:46:08.142433 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:46:08.143325 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:46:08.145049 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:46:08.148964 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:46:08.153230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:46:08.155727 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:46:08.157231 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:46:08.162253 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:46:08.162314 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:46:08.164619 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:46:08.164641 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:46:08.177097 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:46:08.177977 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:46:08.182427 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:46:08.182497 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:46:08.185718 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:46:08.187985 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:46:08.197393 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:46:08.197488 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:46:08.206124 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:46:08.206309 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:46:08.206395 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:46:08.211809 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:46:08.212900 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:46:08.215860 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:46:08.215899 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:46:08.216496 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:46:08.221592 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:46:08.221644 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:46:08.224218 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:46:08.224250 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:46:08.226414 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:46:08.226455 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:46:08.232487 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:46:08.232539 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:46:08.236515 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:46:08.240079 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:46:08.240156 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:46:08.240192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:46:08.250262 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:46:08.250385 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:46:08.254875 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:46:08.254958 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:46:08.255692 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:46:08.255765 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:46:08.275041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:46:08.275091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:46:08.276425 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:46:08.276452 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:46:08.276674 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:46:08.276708 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:46:08.276990 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:46:08.277024 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:46:08.277196 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:46:08.277226 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:46:08.277642 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:46:08.277673 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:46:08.279652 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:46:08.279825 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:46:08.279870 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:46:08.280495 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:46:08.280532 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:46:08.292398 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 17:46:08.292474 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:46:08.294135 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:46:08.294175 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:46:08.294328 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:46:08.294358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:46:08.295578 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 17:46:08.295618 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 17:46:08.295644 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 17:46:08.295672 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:46:08.302712 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:46:08.302771 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:46:08.306657 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:46:08.311168 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:46:08.329990 systemd[1]: Switching root.
May 27 17:46:08.383973 systemd-journald[205]: Journal stopped
May 27 17:46:11.742724 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
May 27 17:46:11.742757 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:46:11.742770 kernel: SELinux: policy capability open_perms=1
May 27 17:46:11.742779 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:46:11.742788 kernel: SELinux: policy capability always_check_network=0
May 27 17:46:11.742797 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:46:11.742810 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:46:11.742820 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:46:11.742829 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:46:11.742837 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:46:11.742852 kernel: audit: type=1403 audit(1748367969.231:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:46:11.742862 systemd[1]: Successfully loaded SELinux policy in 118.265ms.
May 27 17:46:11.742877 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.706ms.
May 27 17:46:11.742891 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:46:11.742902 systemd[1]: Detected virtualization microsoft.
May 27 17:46:11.742912 systemd[1]: Detected architecture x86-64.
May 27 17:46:11.742922 systemd[1]: Detected first boot.
May 27 17:46:11.742932 systemd[1]: Hostname set to .
May 27 17:46:11.742944 systemd[1]: Initializing machine ID from random generator.
May 27 17:46:11.742954 zram_generator::config[1189]: No configuration found.
May 27 17:46:11.742966 kernel: Guest personality initialized and is inactive
May 27 17:46:11.742976 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
May 27 17:46:11.742985 kernel: Initialized host personality
May 27 17:46:11.742995 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:46:11.743005 systemd[1]: Populated /etc with preset unit settings.
May 27 17:46:11.743018 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:46:11.743028 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:46:11.743038 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:46:11.743048 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:46:11.743058 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:46:11.743068 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:46:11.743078 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:46:11.743114 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:46:11.743125 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:46:11.743135 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:46:11.743145 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:46:11.743155 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:46:11.743165 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:46:11.743174 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:46:11.743184 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:46:11.743201 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:46:11.743216 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:46:11.743238 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:46:11.743250 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 17:46:11.743263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:46:11.743274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:46:11.743284 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:46:11.743294 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:46:11.743307 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:46:11.743318 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:46:11.743329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:46:11.743339 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:46:11.743350 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:46:11.743360 systemd[1]: Reached target swap.target - Swaps.
May 27 17:46:11.743370 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:46:11.743380 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:46:11.743392 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:46:11.743403 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:46:11.743413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:46:11.743424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:46:11.743435 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:46:11.743447 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:46:11.743459 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:46:11.743469 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:46:11.743479 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:46:11.743489 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:46:11.743500 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:46:11.743510 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:46:11.743523 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:46:11.743540 systemd[1]: Reached target machines.target - Containers.
May 27 17:46:11.743644 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:46:11.743657 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:46:11.743668 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:46:11.743682 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:46:11.743692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:46:11.743704 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:46:11.743717 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:46:11.743732 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:46:11.743746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:46:11.743757 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:46:11.743770 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:46:11.743782 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:46:11.743793 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:46:11.743806 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:46:11.743820 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:46:11.743834 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:46:11.743849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:46:11.743861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:46:11.743873 kernel: fuse: init (API version 7.41)
May 27 17:46:11.743896 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:46:11.743908 kernel: loop: module loaded
May 27 17:46:11.743920 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:46:11.743934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:46:11.743946 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:46:11.743962 systemd[1]: Stopped verity-setup.service.
May 27 17:46:11.743973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:46:11.743984 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:46:11.743994 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:46:11.744005 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:46:11.744017 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:46:11.744048 systemd-journald[1289]: Collecting audit messages is disabled.
May 27 17:46:11.744081 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:46:11.744095 systemd-journald[1289]: Journal started
May 27 17:46:11.744122 systemd-journald[1289]: Runtime Journal (/run/log/journal/08ae5a2fffc742da8ca324575bf1558d) is 8M, max 159M, 151M free.
May 27 17:46:11.378505 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:46:11.383010 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 27 17:46:11.383313 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:46:11.750889 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:46:11.751919 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:46:11.754806 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:46:11.757866 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:46:11.763358 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:46:11.763521 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:46:11.767894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:46:11.768062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:46:11.770103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:46:11.770248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:46:11.774054 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:46:11.774772 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:46:11.777449 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:46:11.777686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:46:11.781584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:46:11.783690 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:46:11.787687 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:46:11.792585 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:46:11.810158 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:46:11.817621 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:46:11.826897 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:46:11.829011 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:46:11.829045 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:46:11.833761 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:46:11.838593 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:46:11.840272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:46:11.843771 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:46:11.844965 kernel: ACPI: bus type drm_connector registered
May 27 17:46:11.849471 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:46:11.851633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:46:11.853669 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:46:11.854924 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:46:11.857253 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:46:11.861658 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:46:11.865039 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:46:11.868839 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:46:11.869005 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:46:11.870806 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:46:11.873823 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:46:11.876724 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:46:11.880654 systemd-journald[1289]: Time spent on flushing to /var/log/journal/08ae5a2fffc742da8ca324575bf1558d is 29.311ms for 981 entries.
May 27 17:46:11.880654 systemd-journald[1289]: System Journal (/var/log/journal/08ae5a2fffc742da8ca324575bf1558d) is 11.8M, max 2.6G, 2.6G free.
May 27 17:46:11.934986 systemd-journald[1289]: Received client request to flush runtime journal.
May 27 17:46:11.935019 systemd-journald[1289]: /var/log/journal/08ae5a2fffc742da8ca324575bf1558d/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
May 27 17:46:11.935043 systemd-journald[1289]: Rotating system journal.
May 27 17:46:11.935062 kernel: loop0: detected capacity change from 0 to 28496
May 27 17:46:11.893916 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:46:11.895574 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:46:11.904069 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:46:11.925421 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:46:11.935967 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:46:11.954731 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:46:11.970044 systemd-tmpfiles[1330]: ACLs are not supported, ignoring.
May 27 17:46:11.970060 systemd-tmpfiles[1330]: ACLs are not supported, ignoring.
May 27 17:46:11.972911 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:46:11.975063 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:46:12.069331 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:46:12.072651 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:46:12.089319 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
May 27 17:46:12.089336 systemd-tmpfiles[1349]: ACLs are not supported, ignoring.
May 27 17:46:12.091760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:46:12.199576 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:46:12.271564 kernel: loop1: detected capacity change from 0 to 113872
May 27 17:46:12.384816 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:46:12.573566 kernel: loop2: detected capacity change from 0 to 224512
May 27 17:46:12.657154 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:46:12.660903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:46:12.689834 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
May 27 17:46:12.765567 kernel: loop3: detected capacity change from 0 to 146240
May 27 17:46:12.832569 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:46:12.837669 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:46:12.892685 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:46:12.915538 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 17:46:12.985022 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:46:12.990568 kernel: hv_vmbus: registering driver hyperv_fb
May 27 17:46:12.995445 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 27 17:46:12.995490 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 27 17:46:12.996618 kernel: Console: switching to colour dummy device 80x25
May 27 17:46:13.002959 kernel: Console: switching to colour frame buffer device 128x48
May 27 17:46:13.030562 kernel: mousedev: PS/2 mouse device common for all mice
May 27 17:46:13.037596 kernel: hv_vmbus: registering driver hv_balloon
May 27 17:46:13.041588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
May 27 17:46:13.043036 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 27 17:46:13.089562 kernel: loop4: detected capacity change from 0 to 28496
May 27 17:46:13.112853 kernel: loop5: detected capacity change from 0 to 113872
May 27 17:46:13.140721 kernel: loop6: detected capacity change from 0 to 224512
May 27 17:46:13.165574 kernel: loop7: detected capacity change from 0 to 146240
May 27 17:46:13.188142 (sd-merge)[1422]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
May 27 17:46:13.188771 (sd-merge)[1422]: Merged extensions into '/usr'.
May 27 17:46:13.189670 systemd-networkd[1363]: lo: Link UP
May 27 17:46:13.191128 systemd-networkd[1363]: lo: Gained carrier
May 27 17:46:13.194755 systemd-networkd[1363]: Enumeration completed
May 27 17:46:13.195081 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:46:13.195133 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:46:13.196985 systemd-networkd[1363]: eth0: Link UP
May 27 17:46:13.197028 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:46:13.199071 systemd-networkd[1363]: eth0: Gained carrier
May 27 17:46:13.199481 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:46:13.200282 systemd[1]: Reload requested from client PID 1329 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:46:13.200294 systemd[1]: Reloading...
May 27 17:46:13.219573 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 27 17:46:13.308587 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
May 27 17:46:13.316578 zram_generator::config[1466]: No configuration found.
May 27 17:46:13.415950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:46:13.496291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
May 27 17:46:13.499891 systemd[1]: Reloading finished in 299 ms.
May 27 17:46:13.516461 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:46:13.552415 systemd[1]: Starting ensure-sysext.service...
May 27 17:46:13.554893 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:46:13.558853 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:46:13.563670 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:46:13.567661 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:46:13.570377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:46:13.593731 systemd[1]: Reload requested from client PID 1525 ('systemctl') (unit ensure-sysext.service)...
May 27 17:46:13.593745 systemd[1]: Reloading...
May 27 17:46:13.612457 systemd-tmpfiles[1529]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:46:13.612737 systemd-tmpfiles[1529]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:46:13.613491 systemd-tmpfiles[1529]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:46:13.614677 systemd-tmpfiles[1529]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:46:13.615381 systemd-tmpfiles[1529]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:46:13.615709 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
May 27 17:46:13.615801 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
May 27 17:46:13.636687 systemd-tmpfiles[1529]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:46:13.636773 systemd-tmpfiles[1529]: Skipping /boot
May 27 17:46:13.646874 systemd-tmpfiles[1529]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:46:13.646959 systemd-tmpfiles[1529]: Skipping /boot
May 27 17:46:13.657597 zram_generator::config[1567]: No configuration found.
May 27 17:46:13.734643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:46:13.816006 systemd[1]: Reloading finished in 222 ms.
May 27 17:46:13.841783 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:46:13.842281 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:46:13.842659 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:46:13.848060 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:46:13.851464 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:46:13.855420 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:46:13.858254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:46:13.861489 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:46:13.868123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:46:13.868258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:46:13.869311 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:46:13.878625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:46:13.882746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:46:13.883180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:46:13.883273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:46:13.883347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:46:13.887178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:46:13.888678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:46:13.892455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:46:13.893621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:46:13.895914 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:46:13.896185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:46:13.910282 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:46:13.915214 systemd[1]: Finished ensure-sysext.service.
May 27 17:46:13.916149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:46:13.916411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:46:13.917580 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:46:13.919754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:46:13.921643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:46:13.925676 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:46:13.926141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:46:13.926170 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:46:13.926219 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:46:13.926296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 17:46:13.937409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:46:13.937536 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:46:13.942575 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:46:13.943620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:46:13.944209 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:46:13.944326 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:46:13.945354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:46:13.945580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:46:13.945944 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:46:13.947674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:46:13.947808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:46:13.989963 systemd-resolved[1633]: Positive Trust Anchors:
May 27 17:46:13.989974 systemd-resolved[1633]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:46:13.990003 systemd-resolved[1633]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:46:13.992599 augenrules[1672]: No rules
May 27 17:46:13.993206 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:46:13.993367 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:46:13.995335 systemd-resolved[1633]: Using system hostname 'ci-4344.0.0-a-927e686d84'.
May 27 17:46:13.996861 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:46:13.997860 systemd[1]: Reached target network.target - Network.
May 27 17:46:13.997987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:46:14.196975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:46:14.358942 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:46:14.362786 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:46:15.132740 systemd-networkd[1363]: eth0: Gained IPv6LL
May 27 17:46:15.135701 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 17:46:15.145845 systemd[1]: Reached target network-online.target - Network is Online.
May 27 17:46:15.527021 ldconfig[1324]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:46:15.536034 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:46:15.538619 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:46:15.560824 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:46:15.563762 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:46:15.564885 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:46:15.566025 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:46:15.568592 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 17:46:15.569688 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:46:15.572645 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:46:15.575607 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:46:15.576927 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:46:15.576958 systemd[1]: Reached target paths.target - Path Units.
May 27 17:46:15.579586 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:46:15.581412 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:46:15.585634 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:46:15.589506 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:46:15.590853 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:46:15.593630 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:46:15.607009 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:46:15.609890 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:46:15.611604 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:46:15.615277 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:46:15.617597 systemd[1]: Reached target basic.target - Basic System.
May 27 17:46:15.618526 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:46:15.618556 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:46:15.620353 systemd[1]: Starting chronyd.service - NTP client/server...
May 27 17:46:15.624256 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:46:15.628247 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 17:46:15.632706 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:46:15.638659 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:46:15.641995 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:46:15.645959 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:46:15.647653 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:46:15.651158 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 17:46:15.652863 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
May 27 17:46:15.654493 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
May 27 17:46:15.656202 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
May 27 17:46:15.659940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:46:15.666566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:46:15.670013 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 17:46:15.672990 jq[1692]: false
May 27 17:46:15.676963 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:46:15.682198 KVP[1698]: KVP starting; pid is:1698
May 27 17:46:15.685697 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:46:15.689366 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:46:15.695724 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:46:15.697951 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:46:15.702566 kernel: hv_utils: KVP IC version 4.0
May 27 17:46:15.700590 KVP[1698]: KVP LIC Version: 3.1
May 27 17:46:15.703982 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:46:15.705087 (chronyd)[1687]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
May 27 17:46:15.707453 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Refreshing passwd entry cache
May 27 17:46:15.710006 oslogin_cache_refresh[1697]: Refreshing passwd entry cache
May 27 17:46:15.711326 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:46:15.717963 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:46:15.721071 chronyd[1713]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
May 27 17:46:15.723123 extend-filesystems[1696]: Found loop4
May 27 17:46:15.725653 extend-filesystems[1696]: Found loop5
May 27 17:46:15.725653 extend-filesystems[1696]: Found loop6
May 27 17:46:15.725653 extend-filesystems[1696]: Found loop7
May 27 17:46:15.725653 extend-filesystems[1696]: Found sr0
May 27 17:46:15.725653 extend-filesystems[1696]: Found nvme0n1
May 27 17:46:15.725653 extend-filesystems[1696]: Found nvme0n1p1
May 27 17:46:15.725653 extend-filesystems[1696]: Found nvme0n1p2
May 27 17:46:15.725653 extend-filesystems[1696]: Found nvme0n1p3
May 27 17:46:15.725653 extend-filesystems[1696]: Found usr
May 27 17:46:15.725653 extend-filesystems[1696]: Found nvme0n1p4
May 27 17:46:15.748872 extend-filesystems[1696]: Found nvme0n1p6
May 27 17:46:15.748872 extend-filesystems[1696]: Found nvme0n1p7
May 27 17:46:15.748872 extend-filesystems[1696]: Found nvme0n1p9
May 27 17:46:15.748872 extend-filesystems[1696]: Checking size of /dev/nvme0n1p9
May 27 17:46:15.738279 oslogin_cache_refresh[1697]: Failure getting users, quitting
May 27 17:46:15.725673 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:46:15.754506 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Failure getting users, quitting
May 27 17:46:15.754506 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:46:15.754506 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Refreshing group entry cache
May 27 17:46:15.738297 oslogin_cache_refresh[1697]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 17:46:15.728691 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:46:15.738333 oslogin_cache_refresh[1697]: Refreshing group entry cache
May 27 17:46:15.728890 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:46:15.732294 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:46:15.732463 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:46:15.755486 chronyd[1713]: Timezone right/UTC failed leap second check, ignoring
May 27 17:46:15.756951 systemd[1]: Started chronyd.service - NTP client/server.
May 27 17:46:15.755637 chronyd[1713]: Loaded seccomp filter (level 2)
May 27 17:46:15.763801 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Failure getting groups, quitting
May 27 17:46:15.763801 google_oslogin_nss_cache[1697]: oslogin_cache_refresh[1697]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:46:15.762675 oslogin_cache_refresh[1697]: Failure getting groups, quitting
May 27 17:46:15.762684 oslogin_cache_refresh[1697]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 17:46:15.767332 jq[1711]: true
May 27 17:46:15.767789 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 17:46:15.771418 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 17:46:15.774414 (ntainerd)[1729]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:46:15.779308 extend-filesystems[1696]: Old size kept for /dev/nvme0n1p9
May 27 17:46:15.781788 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:46:15.781964 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:46:15.784107 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:46:15.784268 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:46:15.789016 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 17:46:15.824575 jq[1742]: true
May 27 17:46:15.842343 tar[1719]: linux-amd64/LICENSE
May 27 17:46:15.844085 tar[1719]: linux-amd64/helm
May 27 17:46:15.853222 update_engine[1708]: I20250527 17:46:15.853152 1708 main.cc:92] Flatcar Update Engine starting
May 27 17:46:15.859782 dbus-daemon[1690]: [system] SELinux support is enabled
May 27 17:46:15.859896 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:46:15.864393 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:46:15.864423 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:46:15.865981 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:46:15.865997 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:46:15.876338 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:46:15.879050 update_engine[1708]: I20250527 17:46:15.879014 1708 update_check_scheduler.cc:74] Next update check in 3m51s
May 27 17:46:15.883162 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:46:15.908899 systemd-logind[1706]: New seat seat0.
May 27 17:46:15.914099 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 17:46:15.914277 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:46:15.954611 coreos-metadata[1689]: May 27 17:46:15.954 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 27 17:46:15.962054 coreos-metadata[1689]: May 27 17:46:15.962 INFO Fetch successful
May 27 17:46:15.962811 bash[1777]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:46:15.962997 coreos-metadata[1689]: May 27 17:46:15.962 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
May 27 17:46:15.965159 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:46:15.967698 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 17:46:15.969441 coreos-metadata[1689]: May 27 17:46:15.969 INFO Fetch successful
May 27 17:46:15.969441 coreos-metadata[1689]: May 27 17:46:15.969 INFO Fetching http://168.63.129.16/machine/f0c501d8-9dd3-4730-a945-90cc0e6d4ede/27444e46%2D0357%2D4224%2D879d%2D9beca75e4f87.%5Fci%2D4344.0.0%2Da%2D927e686d84?comp=config&type=sharedConfig&incarnation=1: Attempt #1
May 27 17:46:15.970351 coreos-metadata[1689]: May 27 17:46:15.970 INFO Fetch successful
May 27 17:46:15.971200 coreos-metadata[1689]: May 27 17:46:15.971 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
May 27 17:46:15.984936 coreos-metadata[1689]: May 27 17:46:15.984 INFO Fetch successful
May 27 17:46:16.061964 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 17:46:16.099021 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 17:46:16.184136 locksmithd[1776]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:46:16.431979 sshd_keygen[1720]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:46:16.473801 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:46:16.478437 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:46:16.482656 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
May 27 17:46:16.495961 containerd[1729]: time="2025-05-27T17:46:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:46:16.495961 containerd[1729]: time="2025-05-27T17:46:16.495696997Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:46:16.516087 systemd[1]: issuegen.service: Deactivated successfully.
May 27 17:46:16.518451 containerd[1729]: time="2025-05-27T17:46:16.518420127Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.262µs"
May 27 17:46:16.518531 containerd[1729]: time="2025-05-27T17:46:16.518519272Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:46:16.518594 containerd[1729]: time="2025-05-27T17:46:16.518586076Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:46:16.518741 containerd[1729]: time="2025-05-27T17:46:16.518730665Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:46:16.518791 containerd[1729]: time="2025-05-27T17:46:16.518783290Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:46:16.518835 containerd[1729]: time="2025-05-27T17:46:16.518828767Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:46:16.518917 containerd[1729]: time="2025-05-27T17:46:16.518908109Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:46:16.518947 containerd[1729]: time="2025-05-27T17:46:16.518940888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:46:16.519197 containerd[1729]: time="2025-05-27T17:46:16.519186701Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:46:16.519232 containerd[1729]: time="2025-05-27T17:46:16.519225861Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:46:16.519263 containerd[1729]: time="2025-05-27T17:46:16.519256840Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:46:16.519292 containerd[1729]: time="2025-05-27T17:46:16.519286553Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:46:16.519598 containerd[1729]: time="2025-05-27T17:46:16.519587278Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:46:16.519887 containerd[1729]: time="2025-05-27T17:46:16.519875595Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:46:16.519967 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 17:46:16.521510 containerd[1729]: time="2025-05-27T17:46:16.521434806Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:46:16.521510 containerd[1729]: time="2025-05-27T17:46:16.521483117Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:46:16.521825 containerd[1729]: time="2025-05-27T17:46:16.521625313Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:46:16.521942 containerd[1729]: time="2025-05-27T17:46:16.521931031Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:46:16.522033 containerd[1729]: time="2025-05-27T17:46:16.522025043Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:46:16.526961 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 17:46:16.529354 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
May 27 17:46:16.537661 containerd[1729]: time="2025-05-27T17:46:16.537637678Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537754485Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537772211Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537785108Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537846905Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537859930Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537871444Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537883929Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537900197Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537909727Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537918712Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.537930602Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.538031138Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.538048859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 17:46:16.538565 containerd[1729]: time="2025-05-27T17:46:16.538063078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538073756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538083658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538093720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538106120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538115512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538126296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538136218Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538147228Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538213100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538227040Z" level=info msg="Start snapshots syncer"
May 27 17:46:16.538854 containerd[1729]: time="2025-05-27T17:46:16.538247255Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 17:46:16.539730 containerd[1729]: time="2025-05-27T17:46:16.538474999Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 17:46:16.539730 containerd[1729]: time="2025-05-27T17:46:16.538525328Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:46:16.541151 containerd[1729]: time="2025-05-27T17:46:16.540948051Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:46:16.541151 containerd[1729]: time="2025-05-27T17:46:16.541096692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:46:16.541151 containerd[1729]: time="2025-05-27T17:46:16.541125232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:46:16.541359 containerd[1729]: time="2025-05-27T17:46:16.541139452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:46:16.541359 containerd[1729]: time="2025-05-27T17:46:16.541268021Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:46:16.541359 containerd[1729]: time="2025-05-27T17:46:16.541289073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:46:16.541359 containerd[1729]: time="2025-05-27T17:46:16.541303525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:46:16.541359 containerd[1729]: time="2025-05-27T17:46:16.541317946Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:46:16.541537 containerd[1729]: time="2025-05-27T17:46:16.541487890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:46:16.541537 containerd[1729]: time="2025-05-27T17:46:16.541502953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:46:16.541537 containerd[1729]: time="2025-05-27T17:46:16.541516467Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:46:16.541684 containerd[1729]: time="2025-05-27T17:46:16.541654831Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.541671411Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542531297Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542560561Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542570852Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542581023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542593732Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 
17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542611127Z" level=info msg="runtime interface created" May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542616862Z" level=info msg="created NRI interface" May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542625812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542648746Z" level=info msg="Connect containerd service" May 27 17:46:16.543077 containerd[1729]: time="2025-05-27T17:46:16.542682574Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:46:16.545053 containerd[1729]: time="2025-05-27T17:46:16.545031448Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:46:16.550287 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:46:16.556786 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:46:16.561823 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 17:46:16.564831 systemd[1]: Reached target getty.target - Login Prompts. May 27 17:46:16.637098 tar[1719]: linux-amd64/README.md May 27 17:46:16.656511 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 27 17:46:17.081298 containerd[1729]: time="2025-05-27T17:46:17.081157205Z" level=info msg="Start subscribing containerd event"
May 27 17:46:17.081298 containerd[1729]: time="2025-05-27T17:46:17.081246159Z" level=info msg="Start recovering state"
May 27 17:46:17.081483 containerd[1729]: time="2025-05-27T17:46:17.081385254Z" level=info msg="Start event monitor"
May 27 17:46:17.081483 containerd[1729]: time="2025-05-27T17:46:17.081401376Z" level=info msg="Start cni network conf syncer for default"
May 27 17:46:17.081483 containerd[1729]: time="2025-05-27T17:46:17.081408565Z" level=info msg="Start streaming server"
May 27 17:46:17.081483 containerd[1729]: time="2025-05-27T17:46:17.081423870Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 17:46:17.081483 containerd[1729]: time="2025-05-27T17:46:17.081432465Z" level=info msg="runtime interface starting up..."
May 27 17:46:17.081483 containerd[1729]: time="2025-05-27T17:46:17.081447914Z" level=info msg="starting plugins..."
May 27 17:46:17.081483 containerd[1729]: time="2025-05-27T17:46:17.081462001Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 17:46:17.081934 containerd[1729]: time="2025-05-27T17:46:17.081870458Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 17:46:17.081934 containerd[1729]: time="2025-05-27T17:46:17.081919170Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 17:46:17.084749 containerd[1729]: time="2025-05-27T17:46:17.082259539Z" level=info msg="containerd successfully booted in 0.587509s"
May 27 17:46:17.082418 systemd[1]: Started containerd.service - containerd container runtime.
May 27 17:46:17.164415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:46:17.166709 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 17:46:17.169789 systemd[1]: Startup finished in 2.927s (kernel) + 11.730s (initrd) + 8.054s (userspace) = 22.712s.
May 27 17:46:17.176721 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:46:17.317934 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 27 17:46:17.320902 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 27 17:46:17.324706 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 17:46:17.325682 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 17:46:17.336195 systemd-logind[1706]: New session 2 of user core.
May 27 17:46:17.341750 systemd-logind[1706]: New session 1 of user core.
May 27 17:46:17.347636 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 17:46:17.350206 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 17:46:17.362234 (systemd)[1867]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 17:46:17.364274 systemd-logind[1706]: New session c1 of user core.
May 27 17:46:17.535231 systemd[1867]: Queued start job for default target default.target.
May 27 17:46:17.540781 systemd[1867]: Created slice app.slice - User Application Slice.
May 27 17:46:17.540809 systemd[1867]: Reached target paths.target - Paths.
May 27 17:46:17.540860 systemd[1867]: Reached target timers.target - Timers.
May 27 17:46:17.541941 systemd[1867]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 17:46:17.551154 systemd[1867]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 17:46:17.551204 systemd[1867]: Reached target sockets.target - Sockets.
May 27 17:46:17.551389 systemd[1867]: Reached target basic.target - Basic System.
May 27 17:46:17.551420 systemd[1867]: Reached target default.target - Main User Target.
May 27 17:46:17.551440 systemd[1867]: Startup finished in 182ms.
May 27 17:46:17.551487 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 17:46:17.553389 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 17:46:17.554540 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 17:46:17.751730 waagent[1831]: 2025-05-27T17:46:17.751628Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
May 27 17:46:17.752638 waagent[1831]: 2025-05-27T17:46:17.752599Z INFO Daemon Daemon OS: flatcar 4344.0.0
May 27 17:46:17.752881 waagent[1831]: 2025-05-27T17:46:17.752861Z INFO Daemon Daemon Python: 3.11.12
May 27 17:46:17.753322 waagent[1831]: 2025-05-27T17:46:17.753243Z INFO Daemon Daemon Run daemon
May 27 17:46:17.753507 waagent[1831]: 2025-05-27T17:46:17.753488Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.0.0'
May 27 17:46:17.753653 waagent[1831]: 2025-05-27T17:46:17.753636Z INFO Daemon Daemon Using waagent for provisioning
May 27 17:46:17.754755 waagent[1831]: 2025-05-27T17:46:17.754147Z INFO Daemon Daemon Activate resource disk
May 27 17:46:17.754755 waagent[1831]: 2025-05-27T17:46:17.754372Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
May 27 17:46:17.756283 waagent[1831]: 2025-05-27T17:46:17.756250Z INFO Daemon Daemon Found device: None
May 27 17:46:17.756596 waagent[1831]: 2025-05-27T17:46:17.756574Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
May 27 17:46:17.756845 waagent[1831]: 2025-05-27T17:46:17.756828Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
May 27 17:46:17.757500 waagent[1831]: 2025-05-27T17:46:17.757471Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 27 17:46:17.757792 waagent[1831]: 2025-05-27T17:46:17.757774Z INFO Daemon Daemon Running default provisioning handler
May 27 17:46:17.763891 waagent[1831]: 2025-05-27T17:46:17.763851Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
May 27 17:46:17.764614 waagent[1831]: 2025-05-27T17:46:17.764583Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 27 17:46:17.764766 waagent[1831]: 2025-05-27T17:46:17.764749Z INFO Daemon Daemon cloud-init is enabled: False
May 27 17:46:17.765015 waagent[1831]: 2025-05-27T17:46:17.765000Z INFO Daemon Daemon Copying ovf-env.xml
May 27 17:46:17.811760 waagent[1831]: 2025-05-27T17:46:17.811717Z INFO Daemon Daemon Successfully mounted dvd
May 27 17:46:17.815658 kubelet[1856]: E0527 17:46:17.815609 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:46:17.817522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:46:17.817658 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:46:17.817923 systemd[1]: kubelet.service: Consumed 869ms CPU time, 263.4M memory peak.
May 27 17:46:17.841043 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
May 27 17:46:17.842578 waagent[1831]: 2025-05-27T17:46:17.841326Z INFO Daemon Daemon Detect protocol endpoint
May 27 17:46:17.842578 waagent[1831]: 2025-05-27T17:46:17.841965Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 27 17:46:17.842578 waagent[1831]: 2025-05-27T17:46:17.842215Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 27 17:46:17.842578 waagent[1831]: 2025-05-27T17:46:17.842446Z INFO Daemon Daemon Test for route to 168.63.129.16
May 27 17:46:17.842738 waagent[1831]: 2025-05-27T17:46:17.842720Z INFO Daemon Daemon Route to 168.63.129.16 exists
May 27 17:46:17.842942 waagent[1831]: 2025-05-27T17:46:17.842928Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
May 27 17:46:17.855702 waagent[1831]: 2025-05-27T17:46:17.855672Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
May 27 17:46:17.857931 waagent[1831]: 2025-05-27T17:46:17.856223Z INFO Daemon Daemon Wire protocol version:2012-11-30
May 27 17:46:17.857931 waagent[1831]: 2025-05-27T17:46:17.856444Z INFO Daemon Daemon Server preferred version:2015-04-05
May 27 17:46:17.907442 waagent[1831]: 2025-05-27T17:46:17.907396Z INFO Daemon Daemon Initializing goal state during protocol detection
May 27 17:46:17.908463 waagent[1831]: 2025-05-27T17:46:17.907974Z INFO Daemon Daemon Forcing an update of the goal state.
May 27 17:46:17.916046 waagent[1831]: 2025-05-27T17:46:17.916012Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 27 17:46:17.932498 waagent[1831]: 2025-05-27T17:46:17.932466Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164
May 27 17:46:17.933319 waagent[1831]: 2025-05-27T17:46:17.933175Z INFO Daemon
May 27 17:46:17.933319 waagent[1831]: 2025-05-27T17:46:17.933325Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: da8ae746-cbb9-4419-9f9f-56bc2865f902 eTag: 9950828996936274684 source: Fabric]
May 27 17:46:17.933319 waagent[1831]: 2025-05-27T17:46:17.933576Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
May 27 17:46:17.933319 waagent[1831]: 2025-05-27T17:46:17.933841Z INFO Daemon
May 27 17:46:17.933319 waagent[1831]: 2025-05-27T17:46:17.933980Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
May 27 17:46:17.940001 waagent[1831]: 2025-05-27T17:46:17.939972Z INFO Daemon Daemon Downloading artifacts profile blob
May 27 17:46:18.107608 waagent[1831]: 2025-05-27T17:46:18.107518Z INFO Daemon Downloaded certificate {'thumbprint': 'BC6121F4936450D9CA9921E3EA369F5ECF79F748', 'hasPrivateKey': True}
May 27 17:46:18.109606 waagent[1831]: 2025-05-27T17:46:18.109541Z INFO Daemon Fetch goal state completed
May 27 17:46:18.116357 waagent[1831]: 2025-05-27T17:46:18.116323Z INFO Daemon Daemon Starting provisioning
May 27 17:46:18.116928 waagent[1831]: 2025-05-27T17:46:18.116757Z INFO Daemon Daemon Handle ovf-env.xml.
May 27 17:46:18.117742 waagent[1831]: 2025-05-27T17:46:18.117720Z INFO Daemon Daemon Set hostname [ci-4344.0.0-a-927e686d84]
May 27 17:46:18.133081 waagent[1831]: 2025-05-27T17:46:18.133043Z INFO Daemon Daemon Publish hostname [ci-4344.0.0-a-927e686d84]
May 27 17:46:18.134859 waagent[1831]: 2025-05-27T17:46:18.133603Z INFO Daemon Daemon Examine /proc/net/route for primary interface
May 27 17:46:18.134859 waagent[1831]: 2025-05-27T17:46:18.133848Z INFO Daemon Daemon Primary interface is [eth0]
May 27 17:46:18.140974 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:46:18.140980 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:46:18.141039 systemd-networkd[1363]: eth0: DHCP lease lost
May 27 17:46:18.141889 waagent[1831]: 2025-05-27T17:46:18.141846Z INFO Daemon Daemon Create user account if not exists
May 27 17:46:18.142115 waagent[1831]: 2025-05-27T17:46:18.142092Z INFO Daemon Daemon User core already exists, skip useradd
May 27 17:46:18.142182 waagent[1831]: 2025-05-27T17:46:18.142166Z INFO Daemon Daemon Configure sudoer
May 27 17:46:18.145634 waagent[1831]: 2025-05-27T17:46:18.145592Z INFO Daemon Daemon Configure sshd
May 27 17:46:18.148862 waagent[1831]: 2025-05-27T17:46:18.148825Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
May 27 17:46:18.149412 waagent[1831]: 2025-05-27T17:46:18.149201Z INFO Daemon Daemon Deploy ssh public key.
May 27 17:46:18.157606 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.8.45/24, gateway 10.200.8.1 acquired from 168.63.129.16
May 27 17:46:19.226896 waagent[1831]: 2025-05-27T17:46:19.226794Z INFO Daemon Daemon Provisioning complete
May 27 17:46:19.242334 waagent[1831]: 2025-05-27T17:46:19.242297Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
May 27 17:46:19.243490 waagent[1831]: 2025-05-27T17:46:19.243461Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
May 27 17:46:19.244981 waagent[1831]: 2025-05-27T17:46:19.244956Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
May 27 17:46:19.337869 waagent[1921]: 2025-05-27T17:46:19.337801Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
May 27 17:46:19.338183 waagent[1921]: 2025-05-27T17:46:19.337897Z INFO ExtHandler ExtHandler OS: flatcar 4344.0.0
May 27 17:46:19.338183 waagent[1921]: 2025-05-27T17:46:19.337938Z INFO ExtHandler ExtHandler Python: 3.11.12
May 27 17:46:19.338183 waagent[1921]: 2025-05-27T17:46:19.337999Z INFO ExtHandler ExtHandler CPU Arch: x86_64
May 27 17:46:19.357238 waagent[1921]: 2025-05-27T17:46:19.357194Z INFO ExtHandler ExtHandler Distro: flatcar-4344.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
May 27 17:46:19.357358 waagent[1921]: 2025-05-27T17:46:19.357336Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 27 17:46:19.357403 waagent[1921]: 2025-05-27T17:46:19.357381Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 27 17:46:19.365482 waagent[1921]: 2025-05-27T17:46:19.365433Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 27 17:46:19.373318 waagent[1921]: 2025-05-27T17:46:19.373287Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
May 27 17:46:19.373639 waagent[1921]: 2025-05-27T17:46:19.373610Z INFO ExtHandler
May 27 17:46:19.373680 waagent[1921]: 2025-05-27T17:46:19.373659Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2e7455c3-4c01-4e09-92db-98cb34ee5261 eTag: 9950828996936274684 source: Fabric]
May 27 17:46:19.373845 waagent[1921]: 2025-05-27T17:46:19.373825Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
May 27 17:46:19.374135 waagent[1921]: 2025-05-27T17:46:19.374114Z INFO ExtHandler
May 27 17:46:19.374180 waagent[1921]: 2025-05-27T17:46:19.374149Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
May 27 17:46:19.378140 waagent[1921]: 2025-05-27T17:46:19.378113Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
May 27 17:46:19.438157 waagent[1921]: 2025-05-27T17:46:19.438114Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BC6121F4936450D9CA9921E3EA369F5ECF79F748', 'hasPrivateKey': True}
May 27 17:46:19.438472 waagent[1921]: 2025-05-27T17:46:19.438446Z INFO ExtHandler Fetch goal state completed
May 27 17:46:19.459218 waagent[1921]: 2025-05-27T17:46:19.459178Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
May 27 17:46:19.462977 waagent[1921]: 2025-05-27T17:46:19.462934Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1921
May 27 17:46:19.463065 waagent[1921]: 2025-05-27T17:46:19.463043Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
May 27 17:46:19.463294 waagent[1921]: 2025-05-27T17:46:19.463274Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
May 27 17:46:19.464209 waagent[1921]: 2025-05-27T17:46:19.464178Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk']
May 27 17:46:19.464455 waagent[1921]: 2025-05-27T17:46:19.464433Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
May 27 17:46:19.464596 waagent[1921]: 2025-05-27T17:46:19.464531Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
May 27 17:46:19.464949 waagent[1921]: 2025-05-27T17:46:19.464920Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
May 27 17:46:19.480490 waagent[1921]: 2025-05-27T17:46:19.480437Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 27 17:46:19.480601 waagent[1921]: 2025-05-27T17:46:19.480581Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 27 17:46:19.485709 waagent[1921]: 2025-05-27T17:46:19.485587Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 27 17:46:19.490154 systemd[1]: Reload requested from client PID 1936 ('systemctl') (unit waagent.service)...
May 27 17:46:19.490166 systemd[1]: Reloading...
May 27 17:46:19.560577 zram_generator::config[1977]: No configuration found.
May 27 17:46:19.634857 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:46:19.716712 systemd[1]: Reloading finished in 226 ms.
May 27 17:46:19.740801 waagent[1921]: 2025-05-27T17:46:19.740710Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
May 27 17:46:19.740854 waagent[1921]: 2025-05-27T17:46:19.740818Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
May 27 17:46:19.930644 waagent[1921]: 2025-05-27T17:46:19.930585Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
May 27 17:46:19.930874 waagent[1921]: 2025-05-27T17:46:19.930852Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 27 17:46:19.931475 waagent[1921]: 2025-05-27T17:46:19.931436Z INFO ExtHandler ExtHandler Starting env monitor service.
May 27 17:46:19.931538 waagent[1921]: 2025-05-27T17:46:19.931502Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 27 17:46:19.931610 waagent[1921]: 2025-05-27T17:46:19.931573Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 27 17:46:19.932024 waagent[1921]: 2025-05-27T17:46:19.931999Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 27 17:46:19.932077 waagent[1921]: 2025-05-27T17:46:19.932048Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 27 17:46:19.932282 waagent[1921]: 2025-05-27T17:46:19.932252Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 27 17:46:19.932335 waagent[1921]: 2025-05-27T17:46:19.932311Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 27 17:46:19.932383 waagent[1921]: 2025-05-27T17:46:19.932353Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 27 17:46:19.932641 waagent[1921]: 2025-05-27T17:46:19.932619Z INFO EnvHandler ExtHandler Configure routes
May 27 17:46:19.932704 waagent[1921]: 2025-05-27T17:46:19.932648Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 27 17:46:19.932859 waagent[1921]: 2025-05-27T17:46:19.932823Z INFO EnvHandler ExtHandler Gateway:None
May 27 17:46:19.932905 waagent[1921]: 2025-05-27T17:46:19.932882Z INFO EnvHandler ExtHandler Routes:None
May 27 17:46:19.932996 waagent[1921]: 2025-05-27T17:46:19.932974Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 27 17:46:19.933096 waagent[1921]: 2025-05-27T17:46:19.933076Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 27 17:46:19.933158 waagent[1921]: 2025-05-27T17:46:19.933139Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 27 17:46:19.933503 waagent[1921]: 2025-05-27T17:46:19.933485Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 27 17:46:19.933503 waagent[1921]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 27 17:46:19.933503 waagent[1921]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
May 27 17:46:19.933503 waagent[1921]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 27 17:46:19.933503 waagent[1921]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 27 17:46:19.933503 waagent[1921]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 27 17:46:19.933503 waagent[1921]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 27 17:46:19.949602 waagent[1921]: 2025-05-27T17:46:19.949301Z INFO ExtHandler ExtHandler
May 27 17:46:19.949602 waagent[1921]: 2025-05-27T17:46:19.949368Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d5671933-1d2a-4108-a65e-672ff1a1c93b correlation 51d27aa3-eae3-4cf0-a262-d8afe45d5072 created: 2025-05-27T17:45:27.357697Z]
May 27 17:46:19.949693 waagent[1921]: 2025-05-27T17:46:19.949669Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
May 27 17:46:19.950166 waagent[1921]: 2025-05-27T17:46:19.950137Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
May 27 17:46:19.968139 waagent[1921]: 2025-05-27T17:46:19.968089Z INFO MonitorHandler ExtHandler Network interfaces:
May 27 17:46:19.968139 waagent[1921]: Executing ['ip', '-a', '-o', 'link']:
May 27 17:46:19.968139 waagent[1921]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 27 17:46:19.968139 waagent[1921]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:e0:c1 brd ff:ff:ff:ff:ff:ff\ alias Network Device
May 27 17:46:19.968139 waagent[1921]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 27 17:46:19.968139 waagent[1921]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 27 17:46:19.968139 waagent[1921]: 2: eth0 inet 10.200.8.45/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
May 27 17:46:19.968139 waagent[1921]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 27 17:46:19.968139 waagent[1921]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
May 27 17:46:19.968139 waagent[1921]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:e0c1/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
May 27 17:46:19.993374 waagent[1921]: 2025-05-27T17:46:19.993305Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
May 27 17:46:19.993374 waagent[1921]: Try `iptables -h' or 'iptables --help' for more information.)
May 27 17:46:19.993671 waagent[1921]: 2025-05-27T17:46:19.993651Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 44C323F8-BBDC-413F-901A-DDAADB633D83;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
May 27 17:46:20.041836 waagent[1921]: 2025-05-27T17:46:20.041793Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
May 27 17:46:20.041836 waagent[1921]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 27 17:46:20.041836 waagent[1921]: pkts bytes target prot opt in out source destination
May 27 17:46:20.041836 waagent[1921]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 27 17:46:20.041836 waagent[1921]: pkts bytes target prot opt in out source destination
May 27 17:46:20.041836 waagent[1921]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 27 17:46:20.041836 waagent[1921]: pkts bytes target prot opt in out source destination
May 27 17:46:20.041836 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 27 17:46:20.041836 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 27 17:46:20.041836 waagent[1921]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 27 17:46:20.044051 waagent[1921]: 2025-05-27T17:46:20.044011Z INFO EnvHandler ExtHandler Current Firewall rules:
May 27 17:46:20.044051 waagent[1921]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 27 17:46:20.044051 waagent[1921]: pkts bytes target prot opt in out source destination
May 27 17:46:20.044051 waagent[1921]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 27 17:46:20.044051 waagent[1921]: pkts bytes target prot opt in out source destination
May 27 17:46:20.044051 waagent[1921]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 27 17:46:20.044051 waagent[1921]: pkts bytes target prot opt in out source destination
May 27 17:46:20.044051 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 27 17:46:20.044051 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 27 17:46:20.044051 waagent[1921]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 27 17:46:28.015766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 17:46:28.017939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:46:28.513484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:46:28.521739 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:46:28.553969 kubelet[2072]: E0527 17:46:28.553932 2072 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:46:28.556539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:46:28.556676 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:46:28.556977 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108.8M memory peak.
May 27 17:46:38.765969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 17:46:38.768136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:46:39.218574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:46:39.222855 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:46:39.256686 kubelet[2087]: E0527 17:46:39.256646 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:46:39.258185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:46:39.258306 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:46:39.258626 systemd[1]: kubelet.service: Consumed 129ms CPU time, 109M memory peak.
May 27 17:46:39.543869 chronyd[1713]: Selected source PHC0
May 27 17:46:47.864828 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 17:46:47.866179 systemd[1]: Started sshd@0-10.200.8.45:22-10.200.16.10:42130.service - OpenSSH per-connection server daemon (10.200.16.10:42130).
May 27 17:46:48.586579 sshd[2096]: Accepted publickey for core from 10.200.16.10 port 42130 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:46:48.588128 sshd-session[2096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:46:48.592782 systemd-logind[1706]: New session 3 of user core.
May 27 17:46:48.598701 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 17:46:49.135157 systemd[1]: Started sshd@1-10.200.8.45:22-10.200.16.10:46468.service - OpenSSH per-connection server daemon (10.200.16.10:46468).
May 27 17:46:49.265386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 27 17:46:49.266933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:46:39.222855 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:46:39.256686 kubelet[2087]: E0527 17:46:39.256646 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:46:39.258185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:46:39.258306 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:46:39.258626 systemd[1]: kubelet.service: Consumed 129ms CPU time, 109M memory peak. May 27 17:46:39.543869 chronyd[1713]: Selected source PHC0 May 27 17:46:47.864828 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 17:46:47.866179 systemd[1]: Started sshd@0-10.200.8.45:22-10.200.16.10:42130.service - OpenSSH per-connection server daemon (10.200.16.10:42130). May 27 17:46:48.586579 sshd[2096]: Accepted publickey for core from 10.200.16.10 port 42130 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:46:48.588128 sshd-session[2096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:48.592782 systemd-logind[1706]: New session 3 of user core. May 27 17:46:48.598701 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:46:49.135157 systemd[1]: Started sshd@1-10.200.8.45:22-10.200.16.10:46468.service - OpenSSH per-connection server daemon (10.200.16.10:46468). May 27 17:46:49.265386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 27 17:46:49.266933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 27 17:46:49.714222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:46:49.721796 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:46:49.755147 kubelet[2111]: E0527 17:46:49.755025 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:46:49.756730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:46:49.756850 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:46:49.757143 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.8M memory peak. May 27 17:46:49.761929 sshd[2101]: Accepted publickey for core from 10.200.16.10 port 46468 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:46:49.763047 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:49.767144 systemd-logind[1706]: New session 4 of user core. May 27 17:46:49.772666 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:46:50.221111 sshd[2118]: Connection closed by 10.200.16.10 port 46468 May 27 17:46:50.221742 sshd-session[2101]: pam_unix(sshd:session): session closed for user core May 27 17:46:50.224812 systemd[1]: sshd@1-10.200.8.45:22-10.200.16.10:46468.service: Deactivated successfully. May 27 17:46:50.226420 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:46:50.227882 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit. May 27 17:46:50.228528 systemd-logind[1706]: Removed session 4. 
May 27 17:46:50.336436 systemd[1]: Started sshd@2-10.200.8.45:22-10.200.16.10:46480.service - OpenSSH per-connection server daemon (10.200.16.10:46480). May 27 17:46:50.966621 sshd[2124]: Accepted publickey for core from 10.200.16.10 port 46480 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:46:50.968057 sshd-session[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:50.972443 systemd-logind[1706]: New session 5 of user core. May 27 17:46:50.978703 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 17:46:51.406750 sshd[2126]: Connection closed by 10.200.16.10 port 46480 May 27 17:46:51.407305 sshd-session[2124]: pam_unix(sshd:session): session closed for user core May 27 17:46:51.410985 systemd[1]: sshd@2-10.200.8.45:22-10.200.16.10:46480.service: Deactivated successfully. May 27 17:46:51.412477 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:46:51.413198 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit. May 27 17:46:51.414150 systemd-logind[1706]: Removed session 5. May 27 17:46:51.516531 systemd[1]: Started sshd@3-10.200.8.45:22-10.200.16.10:46482.service - OpenSSH per-connection server daemon (10.200.16.10:46482). May 27 17:46:52.146628 sshd[2132]: Accepted publickey for core from 10.200.16.10 port 46482 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:46:52.148009 sshd-session[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:52.152203 systemd-logind[1706]: New session 6 of user core. May 27 17:46:52.161664 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 17:46:52.589270 sshd[2134]: Connection closed by 10.200.16.10 port 46482 May 27 17:46:52.589851 sshd-session[2132]: pam_unix(sshd:session): session closed for user core May 27 17:46:52.593364 systemd[1]: sshd@3-10.200.8.45:22-10.200.16.10:46482.service: Deactivated successfully. 
May 27 17:46:52.594810 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:46:52.595345 systemd-logind[1706]: Session 6 logged out. Waiting for processes to exit. May 27 17:46:52.596389 systemd-logind[1706]: Removed session 6. May 27 17:46:52.709827 systemd[1]: Started sshd@4-10.200.8.45:22-10.200.16.10:46496.service - OpenSSH per-connection server daemon (10.200.16.10:46496). May 27 17:46:53.337609 sshd[2140]: Accepted publickey for core from 10.200.16.10 port 46496 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:46:53.339114 sshd-session[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:53.343756 systemd-logind[1706]: New session 7 of user core. May 27 17:46:53.350686 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:46:53.754245 sudo[2143]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:46:53.754465 sudo[2143]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:46:53.781938 sudo[2143]: pam_unix(sudo:session): session closed for user root May 27 17:46:53.900490 sshd[2142]: Connection closed by 10.200.16.10 port 46496 May 27 17:46:53.901203 sshd-session[2140]: pam_unix(sshd:session): session closed for user core May 27 17:46:53.903726 systemd[1]: sshd@4-10.200.8.45:22-10.200.16.10:46496.service: Deactivated successfully. May 27 17:46:53.905408 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:46:53.906980 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit. May 27 17:46:53.907735 systemd-logind[1706]: Removed session 7. May 27 17:46:54.010706 systemd[1]: Started sshd@5-10.200.8.45:22-10.200.16.10:46506.service - OpenSSH per-connection server daemon (10.200.16.10:46506). 
May 27 17:46:54.636849 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 46506 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:46:54.638344 sshd-session[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:54.642882 systemd-logind[1706]: New session 8 of user core. May 27 17:46:54.652680 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 17:46:54.979989 sudo[2153]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:46:54.980196 sudo[2153]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:46:54.985739 sudo[2153]: pam_unix(sudo:session): session closed for user root May 27 17:46:54.989422 sudo[2152]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:46:54.989627 sudo[2152]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:46:54.996651 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:46:55.026628 augenrules[2175]: No rules May 27 17:46:55.027594 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:46:55.027788 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:46:55.028475 sudo[2152]: pam_unix(sudo:session): session closed for user root May 27 17:46:55.129143 sshd[2151]: Connection closed by 10.200.16.10 port 46506 May 27 17:46:55.129620 sshd-session[2149]: pam_unix(sshd:session): session closed for user core May 27 17:46:55.132214 systemd[1]: sshd@5-10.200.8.45:22-10.200.16.10:46506.service: Deactivated successfully. May 27 17:46:55.133731 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:46:55.134802 systemd-logind[1706]: Session 8 logged out. Waiting for processes to exit. May 27 17:46:55.135767 systemd-logind[1706]: Removed session 8. 
May 27 17:46:55.246524 systemd[1]: Started sshd@6-10.200.8.45:22-10.200.16.10:46516.service - OpenSSH per-connection server daemon (10.200.16.10:46516). May 27 17:46:55.871224 sshd[2184]: Accepted publickey for core from 10.200.16.10 port 46516 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:46:55.872653 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:46:55.877014 systemd-logind[1706]: New session 9 of user core. May 27 17:46:55.881697 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:46:56.213280 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:46:56.213491 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:46:57.664243 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:46:57.680850 (dockerd)[2204]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:46:58.303856 dockerd[2204]: time="2025-05-27T17:46:58.303804348Z" level=info msg="Starting up" May 27 17:46:58.305073 dockerd[2204]: time="2025-05-27T17:46:58.305025743Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:46:58.408556 dockerd[2204]: time="2025-05-27T17:46:58.408521877Z" level=info msg="Loading containers: start." May 27 17:46:58.443568 kernel: Initializing XFRM netlink socket May 27 17:46:58.670767 systemd-networkd[1363]: docker0: Link UP May 27 17:46:58.681544 dockerd[2204]: time="2025-05-27T17:46:58.681518151Z" level=info msg="Loading containers: done." 
May 27 17:46:58.700027 dockerd[2204]: time="2025-05-27T17:46:58.699997169Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:46:58.700116 dockerd[2204]: time="2025-05-27T17:46:58.700059632Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:46:58.700140 dockerd[2204]: time="2025-05-27T17:46:58.700135193Z" level=info msg="Initializing buildkit" May 27 17:46:58.732458 dockerd[2204]: time="2025-05-27T17:46:58.732437055Z" level=info msg="Completed buildkit initialization" May 27 17:46:58.737553 dockerd[2204]: time="2025-05-27T17:46:58.737528126Z" level=info msg="Daemon has completed initialization" May 27 17:46:58.737735 dockerd[2204]: time="2025-05-27T17:46:58.737579232Z" level=info msg="API listen on /run/docker.sock" May 27 17:46:58.737705 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:46:59.765639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 27 17:46:59.767952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:46:59.850345 containerd[1729]: time="2025-05-27T17:46:59.850304782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 17:47:00.181616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:47:00.184370 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:47:00.288299 kubelet[2409]: E0527 17:47:00.213817 2409 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:47:00.214708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:47:00.214803 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:47:00.215089 systemd[1]: kubelet.service: Consumed 128ms CPU time, 108.2M memory peak. May 27 17:47:00.711404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875253703.mount: Deactivated successfully. May 27 17:47:01.129569 kernel: hv_balloon: Max. dynamic memory size: 8192 MB May 27 17:47:01.460719 update_engine[1708]: I20250527 17:47:01.460594 1708 update_attempter.cc:509] Updating boot flags... 
May 27 17:47:01.534563 kernel: mana 7870:00:00.0: Failed to establish HWC: -110 May 27 17:47:01.540762 kernel: mana 7870:00:00.0: gdma probe failed: err = -110 May 27 17:47:01.543673 kernel: mana 7870:00:00.0: probe with driver mana failed with error -110 May 27 17:47:01.786274 containerd[1729]: time="2025-05-27T17:47:01.786228271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:01.788200 containerd[1729]: time="2025-05-27T17:47:01.788163296Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797819" May 27 17:47:01.790509 containerd[1729]: time="2025-05-27T17:47:01.790473176Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:01.795565 containerd[1729]: time="2025-05-27T17:47:01.794146848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:01.796260 containerd[1729]: time="2025-05-27T17:47:01.796232446Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.945879718s" May 27 17:47:01.796300 containerd[1729]: time="2025-05-27T17:47:01.796279401Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 27 17:47:01.798803 containerd[1729]: time="2025-05-27T17:47:01.798778048Z" level=info 
msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 27 17:47:02.951810 containerd[1729]: time="2025-05-27T17:47:02.951761498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:02.953907 containerd[1729]: time="2025-05-27T17:47:02.953873163Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782531" May 27 17:47:02.956500 containerd[1729]: time="2025-05-27T17:47:02.956463740Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:02.959876 containerd[1729]: time="2025-05-27T17:47:02.959824889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:02.960589 containerd[1729]: time="2025-05-27T17:47:02.960429955Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.161534692s" May 27 17:47:02.960589 containerd[1729]: time="2025-05-27T17:47:02.960463251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 27 17:47:02.961229 containerd[1729]: time="2025-05-27T17:47:02.961173256Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 27 17:47:03.990391 containerd[1729]: 
time="2025-05-27T17:47:03.990336437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:03.992529 containerd[1729]: time="2025-05-27T17:47:03.992491368Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176071" May 27 17:47:03.995025 containerd[1729]: time="2025-05-27T17:47:03.994978209Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:04.001523 containerd[1729]: time="2025-05-27T17:47:04.001468226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:04.002298 containerd[1729]: time="2025-05-27T17:47:04.002117253Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.040913583s" May 27 17:47:04.002298 containerd[1729]: time="2025-05-27T17:47:04.002151487Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 27 17:47:04.002831 containerd[1729]: time="2025-05-27T17:47:04.002814466Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 27 17:47:04.818278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267098382.mount: Deactivated successfully. 
May 27 17:47:05.136427 containerd[1729]: time="2025-05-27T17:47:05.136328953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:05.138397 containerd[1729]: time="2025-05-27T17:47:05.138361360Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892880" May 27 17:47:05.140826 containerd[1729]: time="2025-05-27T17:47:05.140786969Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:05.143798 containerd[1729]: time="2025-05-27T17:47:05.143751894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:05.144248 containerd[1729]: time="2025-05-27T17:47:05.144022355Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.141095973s" May 27 17:47:05.144248 containerd[1729]: time="2025-05-27T17:47:05.144050694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 27 17:47:05.144726 containerd[1729]: time="2025-05-27T17:47:05.144707837Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 17:47:05.634098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119799159.mount: Deactivated successfully. 
May 27 17:47:06.398048 containerd[1729]: time="2025-05-27T17:47:06.398005971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:06.400123 containerd[1729]: time="2025-05-27T17:47:06.400095612Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 27 17:47:06.402615 containerd[1729]: time="2025-05-27T17:47:06.402578239Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:06.407702 containerd[1729]: time="2025-05-27T17:47:06.407661571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:06.408381 containerd[1729]: time="2025-05-27T17:47:06.408259566Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.263526932s" May 27 17:47:06.408381 containerd[1729]: time="2025-05-27T17:47:06.408288404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 17:47:06.408993 containerd[1729]: time="2025-05-27T17:47:06.408974360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 17:47:06.915824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2258072568.mount: Deactivated successfully. 
May 27 17:47:06.933793 containerd[1729]: time="2025-05-27T17:47:06.933760313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:47:06.935961 containerd[1729]: time="2025-05-27T17:47:06.935936835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 27 17:47:06.938477 containerd[1729]: time="2025-05-27T17:47:06.938441498Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:47:06.942055 containerd[1729]: time="2025-05-27T17:47:06.942018745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 17:47:06.942557 containerd[1729]: time="2025-05-27T17:47:06.942377577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.340852ms" May 27 17:47:06.942557 containerd[1729]: time="2025-05-27T17:47:06.942405910Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 17:47:06.943063 containerd[1729]: time="2025-05-27T17:47:06.943039872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 17:47:07.456634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149065477.mount: 
Deactivated successfully. May 27 17:47:08.961181 containerd[1729]: time="2025-05-27T17:47:08.961124147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:08.963170 containerd[1729]: time="2025-05-27T17:47:08.963136371Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" May 27 17:47:08.965660 containerd[1729]: time="2025-05-27T17:47:08.965620826Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:08.969219 containerd[1729]: time="2025-05-27T17:47:08.969164895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:08.969990 containerd[1729]: time="2025-05-27T17:47:08.969785954Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.026716794s" May 27 17:47:08.969990 containerd[1729]: time="2025-05-27T17:47:08.969816640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 17:47:10.224694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 27 17:47:10.226605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:47:10.663865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:47:10.667138 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:47:10.705951 kubelet[2659]: E0527 17:47:10.705911 2659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:47:10.707880 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:47:10.708080 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:47:10.708644 systemd[1]: kubelet.service: Consumed 146ms CPU time, 110M memory peak. May 27 17:47:11.765312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:47:11.765460 systemd[1]: kubelet.service: Consumed 146ms CPU time, 110M memory peak. May 27 17:47:11.767609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:47:11.791052 systemd[1]: Reload requested from client PID 2673 ('systemctl') (unit session-9.scope)... May 27 17:47:11.791155 systemd[1]: Reloading... May 27 17:47:11.887574 zram_generator::config[2725]: No configuration found. May 27 17:47:12.018936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:47:12.101488 systemd[1]: Reloading finished in 310 ms. May 27 17:47:12.139256 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 17:47:12.139320 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 17:47:12.139584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 17:47:12.139621 systemd[1]: kubelet.service: Consumed 76ms CPU time, 74.4M memory peak. May 27 17:47:12.141364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:47:12.604470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:47:12.608882 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:47:12.643990 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:47:12.643990 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:47:12.643990 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:47:12.644240 kubelet[2786]: I0527 17:47:12.644047 2786 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:47:12.960295 kubelet[2786]: I0527 17:47:12.960222 2786 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 17:47:12.960295 kubelet[2786]: I0527 17:47:12.960249 2786 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:47:12.961860 kubelet[2786]: I0527 17:47:12.960649 2786 server.go:954] "Client rotation is on, will bootstrap in background" May 27 17:47:12.990447 kubelet[2786]: E0527 17:47:12.990423 2786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" May 27 17:47:12.991488 kubelet[2786]: I0527 17:47:12.991472 2786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:47:12.998382 kubelet[2786]: I0527 17:47:12.998367 2786 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:47:13.000923 kubelet[2786]: I0527 17:47:13.000907 2786 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:47:13.002238 kubelet[2786]: I0527 17:47:13.002207 2786 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:47:13.002374 kubelet[2786]: I0527 17:47:13.002235 2786 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-927e686d84","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:47:13.002498 kubelet[2786]: I0527 17:47:13.002382 2786 topology_manager.go:138] "Creating topology manager 
with none policy" May 27 17:47:13.002498 kubelet[2786]: I0527 17:47:13.002391 2786 container_manager_linux.go:304] "Creating device plugin manager" May 27 17:47:13.002498 kubelet[2786]: I0527 17:47:13.002497 2786 state_mem.go:36] "Initialized new in-memory state store" May 27 17:47:13.005822 kubelet[2786]: I0527 17:47:13.005809 2786 kubelet.go:446] "Attempting to sync node with API server" May 27 17:47:13.005868 kubelet[2786]: I0527 17:47:13.005837 2786 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:47:13.005868 kubelet[2786]: I0527 17:47:13.005861 2786 kubelet.go:352] "Adding apiserver pod source" May 27 17:47:13.006033 kubelet[2786]: I0527 17:47:13.005871 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:47:13.013359 kubelet[2786]: W0527 17:47:13.012875 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused May 27 17:47:13.013359 kubelet[2786]: E0527 17:47:13.012924 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" May 27 17:47:13.013359 kubelet[2786]: W0527 17:47:13.013130 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-a-927e686d84&limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused May 27 17:47:13.013359 kubelet[2786]: E0527 17:47:13.013155 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.0.0-a-927e686d84&limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" May 27 17:47:13.013595 kubelet[2786]: I0527 17:47:13.013582 2786 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:47:13.014075 kubelet[2786]: I0527 17:47:13.014060 2786 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:47:13.014510 kubelet[2786]: W0527 17:47:13.014491 2786 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 17:47:13.016394 kubelet[2786]: I0527 17:47:13.016379 2786 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:47:13.016504 kubelet[2786]: I0527 17:47:13.016497 2786 server.go:1287] "Started kubelet" May 27 17:47:13.018666 kubelet[2786]: I0527 17:47:13.018231 2786 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:47:13.019716 kubelet[2786]: I0527 17:47:13.019586 2786 server.go:479] "Adding debug handlers to kubelet server" May 27 17:47:13.023135 kubelet[2786]: I0527 17:47:13.023049 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:47:13.023263 kubelet[2786]: I0527 17:47:13.023215 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:47:13.023462 kubelet[2786]: I0527 17:47:13.023450 2786 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:47:13.026524 kubelet[2786]: E0527 17:47:13.025281 2786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.45:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.45:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4344.0.0-a-927e686d84.18437376789871f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.0.0-a-927e686d84,UID:ci-4344.0.0-a-927e686d84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.0.0-a-927e686d84,},FirstTimestamp:2025-05-27 17:47:13.016476148 +0000 UTC m=+0.404121255,LastTimestamp:2025-05-27 17:47:13.016476148 +0000 UTC m=+0.404121255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.0.0-a-927e686d84,}" May 27 17:47:13.027165 kubelet[2786]: I0527 17:47:13.027146 2786 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:47:13.028375 kubelet[2786]: I0527 17:47:13.027903 2786 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:47:13.028375 kubelet[2786]: E0527 17:47:13.028089 2786 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-927e686d84\" not found" May 27 17:47:13.029588 kubelet[2786]: E0527 17:47:13.029542 2786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-927e686d84?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="200ms" May 27 17:47:13.029739 kubelet[2786]: I0527 17:47:13.029726 2786 factory.go:221] Registration of the systemd container factory successfully May 27 17:47:13.029811 kubelet[2786]: I0527 17:47:13.029799 2786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:47:13.030416 kubelet[2786]: 
I0527 17:47:13.030402 2786 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:47:13.030514 kubelet[2786]: I0527 17:47:13.030509 2786 reconciler.go:26] "Reconciler: start to sync state" May 27 17:47:13.031205 kubelet[2786]: W0527 17:47:13.031173 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused May 27 17:47:13.031530 kubelet[2786]: E0527 17:47:13.031514 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" May 27 17:47:13.031707 kubelet[2786]: E0527 17:47:13.031697 2786 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:47:13.031820 kubelet[2786]: I0527 17:47:13.031814 2786 factory.go:221] Registration of the containerd container factory successfully May 27 17:47:13.049706 kubelet[2786]: I0527 17:47:13.049693 2786 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:47:13.049769 kubelet[2786]: I0527 17:47:13.049757 2786 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:47:13.049793 kubelet[2786]: I0527 17:47:13.049770 2786 state_mem.go:36] "Initialized new in-memory state store" May 27 17:47:13.054406 kubelet[2786]: I0527 17:47:13.054263 2786 policy_none.go:49] "None policy: Start" May 27 17:47:13.054406 kubelet[2786]: I0527 17:47:13.054283 2786 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:47:13.054406 kubelet[2786]: I0527 17:47:13.054293 2786 state_mem.go:35] "Initializing new in-memory state store" May 27 17:47:13.055668 kubelet[2786]: I0527 17:47:13.055639 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:47:13.057246 kubelet[2786]: I0527 17:47:13.057044 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 17:47:13.057246 kubelet[2786]: I0527 17:47:13.057063 2786 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:47:13.057246 kubelet[2786]: I0527 17:47:13.057081 2786 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 17:47:13.057351 kubelet[2786]: I0527 17:47:13.057273 2786 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:47:13.057351 kubelet[2786]: E0527 17:47:13.057308 2786 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:47:13.060423 kubelet[2786]: W0527 17:47:13.060294 2786 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.45:6443: connect: connection refused May 27 17:47:13.060485 kubelet[2786]: E0527 17:47:13.060435 2786 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.45:6443: connect: connection refused" logger="UnhandledError" May 27 17:47:13.063521 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:47:13.076480 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 17:47:13.087151 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 27 17:47:13.088294 kubelet[2786]: I0527 17:47:13.088282 2786 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:47:13.088776 kubelet[2786]: I0527 17:47:13.088767 2786 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:47:13.088776 kubelet[2786]: I0527 17:47:13.088794 2786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:47:13.089091 kubelet[2786]: I0527 17:47:13.089080 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:47:13.089928 kubelet[2786]: E0527 17:47:13.089913 2786 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:47:13.089991 kubelet[2786]: E0527 17:47:13.089949 2786 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.0.0-a-927e686d84\" not found" May 27 17:47:13.164626 systemd[1]: Created slice kubepods-burstable-pod9d4bf0e3a3e7403ec29b7232292a8f77.slice - libcontainer container kubepods-burstable-pod9d4bf0e3a3e7403ec29b7232292a8f77.slice. May 27 17:47:13.185692 kubelet[2786]: E0527 17:47:13.185534 2786 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.187062 systemd[1]: Created slice kubepods-burstable-pod8d8185979cad04647e13c690b6251ffd.slice - libcontainer container kubepods-burstable-pod8d8185979cad04647e13c690b6251ffd.slice. 
May 27 17:47:13.189963 kubelet[2786]: I0527 17:47:13.189950 2786 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.190220 kubelet[2786]: E0527 17:47:13.190188 2786 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.192511 kubelet[2786]: E0527 17:47:13.192387 2786 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.194350 systemd[1]: Created slice kubepods-burstable-pod613b136b5cf1da404eba4bf9b9cb803f.slice - libcontainer container kubepods-burstable-pod613b136b5cf1da404eba4bf9b9cb803f.slice. May 27 17:47:13.195993 kubelet[2786]: E0527 17:47:13.195978 2786 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.230444 kubelet[2786]: E0527 17:47:13.230383 2786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-927e686d84?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="400ms" May 27 17:47:13.332051 kubelet[2786]: I0527 17:47:13.332020 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d4bf0e3a3e7403ec29b7232292a8f77-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-927e686d84\" (UID: \"9d4bf0e3a3e7403ec29b7232292a8f77\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:13.332227 kubelet[2786]: I0527 17:47:13.332060 2786 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:13.332227 kubelet[2786]: I0527 17:47:13.332088 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:13.332227 kubelet[2786]: I0527 17:47:13.332111 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d4bf0e3a3e7403ec29b7232292a8f77-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-927e686d84\" (UID: \"9d4bf0e3a3e7403ec29b7232292a8f77\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:13.332227 kubelet[2786]: I0527 17:47:13.332135 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d4bf0e3a3e7403ec29b7232292a8f77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-927e686d84\" (UID: \"9d4bf0e3a3e7403ec29b7232292a8f77\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:13.332227 kubelet[2786]: I0527 17:47:13.332158 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: 
\"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:13.332413 kubelet[2786]: I0527 17:47:13.332179 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:13.332413 kubelet[2786]: I0527 17:47:13.332229 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:13.332413 kubelet[2786]: I0527 17:47:13.332286 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/613b136b5cf1da404eba4bf9b9cb803f-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-927e686d84\" (UID: \"613b136b5cf1da404eba4bf9b9cb803f\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-927e686d84" May 27 17:47:13.392356 kubelet[2786]: I0527 17:47:13.392329 2786 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.392656 kubelet[2786]: E0527 17:47:13.392634 2786 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.487090 containerd[1729]: time="2025-05-27T17:47:13.487004348Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-927e686d84,Uid:9d4bf0e3a3e7403ec29b7232292a8f77,Namespace:kube-system,Attempt:0,}" May 27 17:47:13.493599 containerd[1729]: time="2025-05-27T17:47:13.493508871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-927e686d84,Uid:8d8185979cad04647e13c690b6251ffd,Namespace:kube-system,Attempt:0,}" May 27 17:47:13.497311 containerd[1729]: time="2025-05-27T17:47:13.497284819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-927e686d84,Uid:613b136b5cf1da404eba4bf9b9cb803f,Namespace:kube-system,Attempt:0,}" May 27 17:47:13.552921 containerd[1729]: time="2025-05-27T17:47:13.552895489Z" level=info msg="connecting to shim 6acdac6b925931f989578d6b3befd154e3a70afbfe829dee1f5ed5f7b4eeeb80" address="unix:///run/containerd/s/0c91935c47d13fa1efc9c57df7819601c9abbb8118d271e424a36d6282c803e4" namespace=k8s.io protocol=ttrpc version=3 May 27 17:47:13.579558 containerd[1729]: time="2025-05-27T17:47:13.579423499Z" level=info msg="connecting to shim 3f2e9b1ad6b5447c89c80dbe680942b2511010615c677f34ce9515e5ececae40" address="unix:///run/containerd/s/a3662ecb819c8698e30554edfcca86afe81c3427357d9431bd6efb03cd6d9187" namespace=k8s.io protocol=ttrpc version=3 May 27 17:47:13.583721 systemd[1]: Started cri-containerd-6acdac6b925931f989578d6b3befd154e3a70afbfe829dee1f5ed5f7b4eeeb80.scope - libcontainer container 6acdac6b925931f989578d6b3befd154e3a70afbfe829dee1f5ed5f7b4eeeb80. 
May 27 17:47:13.586075 containerd[1729]: time="2025-05-27T17:47:13.585626876Z" level=info msg="connecting to shim 4295268ff5f9712d2310374406927a1016f9e9b0f23d6840cea1b84acec00aec" address="unix:///run/containerd/s/5eaebf65a6edc68e63026744feab6016ddfef30f058e00d4c7a04f7977757bf7" namespace=k8s.io protocol=ttrpc version=3 May 27 17:47:13.615661 systemd[1]: Started cri-containerd-4295268ff5f9712d2310374406927a1016f9e9b0f23d6840cea1b84acec00aec.scope - libcontainer container 4295268ff5f9712d2310374406927a1016f9e9b0f23d6840cea1b84acec00aec. May 27 17:47:13.621750 systemd[1]: Started cri-containerd-3f2e9b1ad6b5447c89c80dbe680942b2511010615c677f34ce9515e5ececae40.scope - libcontainer container 3f2e9b1ad6b5447c89c80dbe680942b2511010615c677f34ce9515e5ececae40. May 27 17:47:13.632367 kubelet[2786]: E0527 17:47:13.631432 2786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.0.0-a-927e686d84?timeout=10s\": dial tcp 10.200.8.45:6443: connect: connection refused" interval="800ms" May 27 17:47:13.683808 containerd[1729]: time="2025-05-27T17:47:13.683781898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.0.0-a-927e686d84,Uid:9d4bf0e3a3e7403ec29b7232292a8f77,Namespace:kube-system,Attempt:0,} returns sandbox id \"6acdac6b925931f989578d6b3befd154e3a70afbfe829dee1f5ed5f7b4eeeb80\"" May 27 17:47:13.688233 containerd[1729]: time="2025-05-27T17:47:13.688214421Z" level=info msg="CreateContainer within sandbox \"6acdac6b925931f989578d6b3befd154e3a70afbfe829dee1f5ed5f7b4eeeb80\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:47:13.700164 containerd[1729]: time="2025-05-27T17:47:13.700141140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.0.0-a-927e686d84,Uid:8d8185979cad04647e13c690b6251ffd,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"3f2e9b1ad6b5447c89c80dbe680942b2511010615c677f34ce9515e5ececae40\"" May 27 17:47:13.700250 containerd[1729]: time="2025-05-27T17:47:13.700155426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.0.0-a-927e686d84,Uid:613b136b5cf1da404eba4bf9b9cb803f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4295268ff5f9712d2310374406927a1016f9e9b0f23d6840cea1b84acec00aec\"" May 27 17:47:13.701834 containerd[1729]: time="2025-05-27T17:47:13.701817093Z" level=info msg="CreateContainer within sandbox \"4295268ff5f9712d2310374406927a1016f9e9b0f23d6840cea1b84acec00aec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:47:13.701946 containerd[1729]: time="2025-05-27T17:47:13.701820630Z" level=info msg="CreateContainer within sandbox \"3f2e9b1ad6b5447c89c80dbe680942b2511010615c677f34ce9515e5ececae40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:47:13.703441 containerd[1729]: time="2025-05-27T17:47:13.703417226Z" level=info msg="Container 5dcf5a1dda9a51aebafc9bea13cf886b760114338627e85a87216b1a297406e7: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:13.725803 containerd[1729]: time="2025-05-27T17:47:13.725779966Z" level=info msg="CreateContainer within sandbox \"6acdac6b925931f989578d6b3befd154e3a70afbfe829dee1f5ed5f7b4eeeb80\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5dcf5a1dda9a51aebafc9bea13cf886b760114338627e85a87216b1a297406e7\"" May 27 17:47:13.726150 containerd[1729]: time="2025-05-27T17:47:13.726133630Z" level=info msg="StartContainer for \"5dcf5a1dda9a51aebafc9bea13cf886b760114338627e85a87216b1a297406e7\"" May 27 17:47:13.726826 containerd[1729]: time="2025-05-27T17:47:13.726800351Z" level=info msg="connecting to shim 5dcf5a1dda9a51aebafc9bea13cf886b760114338627e85a87216b1a297406e7" address="unix:///run/containerd/s/0c91935c47d13fa1efc9c57df7819601c9abbb8118d271e424a36d6282c803e4" protocol=ttrpc version=3 May 27 
17:47:13.729132 containerd[1729]: time="2025-05-27T17:47:13.729101846Z" level=info msg="Container 31e1928ef2771bf4e0c068a5b0b9b9567cdeff8430ee91d0c869985c100da9a6: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:13.734724 containerd[1729]: time="2025-05-27T17:47:13.734704799Z" level=info msg="Container 136b4842e5b6524b4dcf9e4ba3330f457284b1e4a29317213bacd3a3202e32d3: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:13.741667 systemd[1]: Started cri-containerd-5dcf5a1dda9a51aebafc9bea13cf886b760114338627e85a87216b1a297406e7.scope - libcontainer container 5dcf5a1dda9a51aebafc9bea13cf886b760114338627e85a87216b1a297406e7. May 27 17:47:13.744641 containerd[1729]: time="2025-05-27T17:47:13.744534506Z" level=info msg="CreateContainer within sandbox \"3f2e9b1ad6b5447c89c80dbe680942b2511010615c677f34ce9515e5ececae40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31e1928ef2771bf4e0c068a5b0b9b9567cdeff8430ee91d0c869985c100da9a6\"" May 27 17:47:13.745025 containerd[1729]: time="2025-05-27T17:47:13.745005542Z" level=info msg="StartContainer for \"31e1928ef2771bf4e0c068a5b0b9b9567cdeff8430ee91d0c869985c100da9a6\"" May 27 17:47:13.746980 containerd[1729]: time="2025-05-27T17:47:13.746920283Z" level=info msg="connecting to shim 31e1928ef2771bf4e0c068a5b0b9b9567cdeff8430ee91d0c869985c100da9a6" address="unix:///run/containerd/s/a3662ecb819c8698e30554edfcca86afe81c3427357d9431bd6efb03cd6d9187" protocol=ttrpc version=3 May 27 17:47:13.756092 containerd[1729]: time="2025-05-27T17:47:13.755985458Z" level=info msg="CreateContainer within sandbox \"4295268ff5f9712d2310374406927a1016f9e9b0f23d6840cea1b84acec00aec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"136b4842e5b6524b4dcf9e4ba3330f457284b1e4a29317213bacd3a3202e32d3\"" May 27 17:47:13.757838 containerd[1729]: time="2025-05-27T17:47:13.756674780Z" level=info msg="StartContainer for 
\"136b4842e5b6524b4dcf9e4ba3330f457284b1e4a29317213bacd3a3202e32d3\"" May 27 17:47:13.757838 containerd[1729]: time="2025-05-27T17:47:13.757383861Z" level=info msg="connecting to shim 136b4842e5b6524b4dcf9e4ba3330f457284b1e4a29317213bacd3a3202e32d3" address="unix:///run/containerd/s/5eaebf65a6edc68e63026744feab6016ddfef30f058e00d4c7a04f7977757bf7" protocol=ttrpc version=3 May 27 17:47:13.765798 systemd[1]: Started cri-containerd-31e1928ef2771bf4e0c068a5b0b9b9567cdeff8430ee91d0c869985c100da9a6.scope - libcontainer container 31e1928ef2771bf4e0c068a5b0b9b9567cdeff8430ee91d0c869985c100da9a6. May 27 17:47:13.792621 systemd[1]: Started cri-containerd-136b4842e5b6524b4dcf9e4ba3330f457284b1e4a29317213bacd3a3202e32d3.scope - libcontainer container 136b4842e5b6524b4dcf9e4ba3330f457284b1e4a29317213bacd3a3202e32d3. May 27 17:47:13.794704 kubelet[2786]: I0527 17:47:13.794689 2786 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.795839 kubelet[2786]: E0527 17:47:13.795790 2786 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.45:6443/api/v1/nodes\": dial tcp 10.200.8.45:6443: connect: connection refused" node="ci-4344.0.0-a-927e686d84" May 27 17:47:13.809281 containerd[1729]: time="2025-05-27T17:47:13.809255583Z" level=info msg="StartContainer for \"5dcf5a1dda9a51aebafc9bea13cf886b760114338627e85a87216b1a297406e7\" returns successfully" May 27 17:47:13.833035 containerd[1729]: time="2025-05-27T17:47:13.832921719Z" level=info msg="StartContainer for \"31e1928ef2771bf4e0c068a5b0b9b9567cdeff8430ee91d0c869985c100da9a6\" returns successfully" May 27 17:47:13.874678 containerd[1729]: time="2025-05-27T17:47:13.874572283Z" level=info msg="StartContainer for \"136b4842e5b6524b4dcf9e4ba3330f457284b1e4a29317213bacd3a3202e32d3\" returns successfully" May 27 17:47:14.067050 kubelet[2786]: E0527 17:47:14.066887 2786 kubelet.go:3190] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:14.068012 kubelet[2786]: E0527 17:47:14.067788 2786 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:14.071909 kubelet[2786]: E0527 17:47:14.071896 2786 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:14.598988 kubelet[2786]: I0527 17:47:14.598192 2786 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.0.0-a-927e686d84" May 27 17:47:15.076121 kubelet[2786]: E0527 17:47:15.075928 2786 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:15.077216 kubelet[2786]: E0527 17:47:15.077081 2786 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:15.558453 kubelet[2786]: E0527 17:47:15.558362 2786 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.0.0-a-927e686d84\" not found" node="ci-4344.0.0-a-927e686d84" May 27 17:47:15.718264 kubelet[2786]: I0527 17:47:15.718234 2786 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-927e686d84" May 27 17:47:15.728984 kubelet[2786]: I0527 17:47:15.728949 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:15.735370 kubelet[2786]: E0527 17:47:15.735211 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:15.735370 kubelet[2786]: I0527 17:47:15.735236 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-927e686d84" May 27 17:47:15.737697 kubelet[2786]: E0527 17:47:15.737593 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-927e686d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.0.0-a-927e686d84" May 27 17:47:15.737697 kubelet[2786]: I0527 17:47:15.737613 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:15.739080 kubelet[2786]: E0527 17:47:15.739053 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-927e686d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:16.010363 kubelet[2786]: I0527 17:47:16.010263 2786 apiserver.go:52] "Watching apiserver" May 27 17:47:16.031233 kubelet[2786]: I0527 17:47:16.031193 2786 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:47:16.183278 kubelet[2786]: I0527 17:47:16.183250 2786 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:16.184755 kubelet[2786]: E0527 17:47:16.184733 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-927e686d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:16.415786 kubelet[2786]: I0527 17:47:16.415661 2786 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:16.417445 kubelet[2786]: E0527 17:47:16.417421 2786 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:17.550384 systemd[1]: Reload requested from client PID 3053 ('systemctl') (unit session-9.scope)... May 27 17:47:17.550399 systemd[1]: Reloading... May 27 17:47:17.618630 zram_generator::config[3099]: No configuration found. May 27 17:47:17.706119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:47:17.796112 systemd[1]: Reloading finished in 245 ms. May 27 17:47:17.827343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:47:17.844297 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:47:17.844527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:47:17.844591 systemd[1]: kubelet.service: Consumed 705ms CPU time, 129.1M memory peak. May 27 17:47:17.845929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:47:18.353464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:47:18.359927 (kubelet)[3166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:47:18.404809 kubelet[3166]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:47:18.405017 kubelet[3166]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:47:18.405017 kubelet[3166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:47:18.405130 kubelet[3166]: I0527 17:47:18.405086 3166 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:47:18.410517 kubelet[3166]: I0527 17:47:18.410492 3166 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 17:47:18.410517 kubelet[3166]: I0527 17:47:18.410512 3166 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:47:18.410874 kubelet[3166]: I0527 17:47:18.410858 3166 server.go:954] "Client rotation is on, will bootstrap in background" May 27 17:47:18.411697 kubelet[3166]: I0527 17:47:18.411680 3166 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 17:47:18.415267 kubelet[3166]: I0527 17:47:18.415248 3166 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:47:18.421892 kubelet[3166]: I0527 17:47:18.420689 3166 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:47:18.424238 kubelet[3166]: I0527 17:47:18.424221 3166 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:47:18.424436 kubelet[3166]: I0527 17:47:18.424415 3166 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:47:18.424770 kubelet[3166]: I0527 17:47:18.424440 3166 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.0.0-a-927e686d84","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 17:47:18.424888 kubelet[3166]: I0527 17:47:18.424805 3166 topology_manager.go:138] "Creating topology manager 
with none policy" May 27 17:47:18.424888 kubelet[3166]: I0527 17:47:18.424818 3166 container_manager_linux.go:304] "Creating device plugin manager" May 27 17:47:18.424888 kubelet[3166]: I0527 17:47:18.424886 3166 state_mem.go:36] "Initialized new in-memory state store" May 27 17:47:18.425028 kubelet[3166]: I0527 17:47:18.425016 3166 kubelet.go:446] "Attempting to sync node with API server" May 27 17:47:18.425184 kubelet[3166]: I0527 17:47:18.425175 3166 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:47:18.425216 kubelet[3166]: I0527 17:47:18.425203 3166 kubelet.go:352] "Adding apiserver pod source" May 27 17:47:18.425216 kubelet[3166]: I0527 17:47:18.425214 3166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:47:18.425860 kubelet[3166]: I0527 17:47:18.425747 3166 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:47:18.426089 kubelet[3166]: I0527 17:47:18.426076 3166 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:47:18.427037 kubelet[3166]: I0527 17:47:18.427021 3166 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:47:18.427109 kubelet[3166]: I0527 17:47:18.427054 3166 server.go:1287] "Started kubelet" May 27 17:47:18.429352 kubelet[3166]: I0527 17:47:18.429333 3166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:47:18.434269 kubelet[3166]: I0527 17:47:18.434093 3166 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:47:18.434822 kubelet[3166]: I0527 17:47:18.434757 3166 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:47:18.434937 kubelet[3166]: E0527 17:47:18.434924 3166 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.0.0-a-927e686d84\" not found" May 27 17:47:18.436181 kubelet[3166]: I0527 17:47:18.436121 3166 server.go:479] 
"Adding debug handlers to kubelet server" May 27 17:47:18.438409 kubelet[3166]: I0527 17:47:18.437927 3166 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:47:18.439063 kubelet[3166]: I0527 17:47:18.436150 3166 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:47:18.439135 kubelet[3166]: I0527 17:47:18.438815 3166 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 17:47:18.439276 kubelet[3166]: I0527 17:47:18.439269 3166 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 17:47:18.439407 kubelet[3166]: I0527 17:47:18.439401 3166 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:47:18.439444 kubelet[3166]: I0527 17:47:18.439440 3166 kubelet.go:2382] "Starting kubelet main sync loop" May 27 17:47:18.439603 kubelet[3166]: E0527 17:47:18.439590 3166 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:47:18.439919 kubelet[3166]: I0527 17:47:18.439804 3166 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:47:18.440278 kubelet[3166]: I0527 17:47:18.440190 3166 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:47:18.440704 kubelet[3166]: I0527 17:47:18.440615 3166 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:47:18.442899 kubelet[3166]: I0527 17:47:18.436257 3166 reconciler.go:26] "Reconciler: start to sync state" May 27 17:47:18.447647 kubelet[3166]: I0527 17:47:18.446267 3166 factory.go:221] Registration of the systemd container factory successfully May 27 17:47:18.447841 kubelet[3166]: 
I0527 17:47:18.447823 3166 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:47:18.454790 kubelet[3166]: I0527 17:47:18.454772 3166 factory.go:221] Registration of the containerd container factory successfully May 27 17:47:18.459300 kubelet[3166]: E0527 17:47:18.459277 3166 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:47:18.492130 kubelet[3166]: I0527 17:47:18.492120 3166 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:47:18.492680 kubelet[3166]: I0527 17:47:18.492196 3166 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:47:18.492680 kubelet[3166]: I0527 17:47:18.492208 3166 state_mem.go:36] "Initialized new in-memory state store" May 27 17:47:18.492680 kubelet[3166]: I0527 17:47:18.492306 3166 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:47:18.492680 kubelet[3166]: I0527 17:47:18.492313 3166 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:47:18.492680 kubelet[3166]: I0527 17:47:18.492327 3166 policy_none.go:49] "None policy: Start" May 27 17:47:18.492680 kubelet[3166]: I0527 17:47:18.492335 3166 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:47:18.492680 kubelet[3166]: I0527 17:47:18.492341 3166 state_mem.go:35] "Initializing new in-memory state store" May 27 17:47:18.492680 kubelet[3166]: I0527 17:47:18.492405 3166 state_mem.go:75] "Updated machine memory state" May 27 17:47:18.495592 kubelet[3166]: I0527 17:47:18.495579 3166 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:47:18.495766 kubelet[3166]: I0527 17:47:18.495760 3166 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:47:18.495826 
kubelet[3166]: I0527 17:47:18.495806 3166 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:47:18.495984 kubelet[3166]: I0527 17:47:18.495977 3166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:47:18.497576 kubelet[3166]: E0527 17:47:18.497279 3166 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:47:18.540359 kubelet[3166]: I0527 17:47:18.540185 3166 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-927e686d84" May 27 17:47:18.540729 kubelet[3166]: I0527 17:47:18.540519 3166 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:18.541039 kubelet[3166]: I0527 17:47:18.540973 3166 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:18.543987 kubelet[3166]: I0527 17:47:18.543933 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-kubeconfig\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:18.543987 kubelet[3166]: I0527 17:47:18.543963 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d4bf0e3a3e7403ec29b7232292a8f77-ca-certs\") pod \"kube-apiserver-ci-4344.0.0-a-927e686d84\" (UID: \"9d4bf0e3a3e7403ec29b7232292a8f77\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:18.543987 kubelet[3166]: I0527 17:47:18.543986 3166 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d4bf0e3a3e7403ec29b7232292a8f77-k8s-certs\") pod \"kube-apiserver-ci-4344.0.0-a-927e686d84\" (UID: \"9d4bf0e3a3e7403ec29b7232292a8f77\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:18.544406 kubelet[3166]: I0527 17:47:18.544005 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:18.544406 kubelet[3166]: I0527 17:47:18.544021 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:18.544406 kubelet[3166]: I0527 17:47:18.544040 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/613b136b5cf1da404eba4bf9b9cb803f-kubeconfig\") pod \"kube-scheduler-ci-4344.0.0-a-927e686d84\" (UID: \"613b136b5cf1da404eba4bf9b9cb803f\") " pod="kube-system/kube-scheduler-ci-4344.0.0-a-927e686d84" May 27 17:47:18.544406 kubelet[3166]: I0527 17:47:18.544056 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d4bf0e3a3e7403ec29b7232292a8f77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.0.0-a-927e686d84\" (UID: 
\"9d4bf0e3a3e7403ec29b7232292a8f77\") " pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:18.544406 kubelet[3166]: I0527 17:47:18.544076 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-ca-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:18.544829 kubelet[3166]: I0527 17:47:18.544093 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d8185979cad04647e13c690b6251ffd-k8s-certs\") pod \"kube-controller-manager-ci-4344.0.0-a-927e686d84\" (UID: \"8d8185979cad04647e13c690b6251ffd\") " pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" May 27 17:47:18.546654 kubelet[3166]: W0527 17:47:18.546636 3166 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:47:18.550594 kubelet[3166]: W0527 17:47:18.550511 3166 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:47:18.550594 kubelet[3166]: W0527 17:47:18.550529 3166 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:47:18.566778 sudo[3197]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 17:47:18.567009 sudo[3197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 17:47:18.598867 kubelet[3166]: I0527 17:47:18.598850 3166 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4344.0.0-a-927e686d84" May 27 17:47:18.608979 kubelet[3166]: I0527 17:47:18.608871 3166 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.0.0-a-927e686d84" May 27 17:47:18.608979 kubelet[3166]: I0527 17:47:18.608921 3166 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.0.0-a-927e686d84" May 27 17:47:19.043206 sudo[3197]: pam_unix(sudo:session): session closed for user root May 27 17:47:19.426697 kubelet[3166]: I0527 17:47:19.426477 3166 apiserver.go:52] "Watching apiserver" May 27 17:47:19.439421 kubelet[3166]: I0527 17:47:19.439394 3166 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:47:19.482117 kubelet[3166]: I0527 17:47:19.482092 3166 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 17:47:19.482673 kubelet[3166]: I0527 17:47:19.482348 3166 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.0.0-a-927e686d84" May 27 17:47:19.501061 kubelet[3166]: W0527 17:47:19.501043 3166 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:47:19.501132 kubelet[3166]: E0527 17:47:19.501104 3166 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.0.0-a-927e686d84\" already exists" pod="kube-system/kube-scheduler-ci-4344.0.0-a-927e686d84" May 27 17:47:19.502122 kubelet[3166]: W0527 17:47:19.502109 3166 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 17:47:19.502185 kubelet[3166]: E0527 17:47:19.502150 3166 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.0.0-a-927e686d84\" already exists" pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" May 27 
17:47:19.552058 kubelet[3166]: I0527 17:47:19.552015 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.0.0-a-927e686d84" podStartSLOduration=1.551999129 podStartE2EDuration="1.551999129s" podCreationTimestamp="2025-05-27 17:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:47:19.522942147 +0000 UTC m=+1.159328006" watchObservedRunningTime="2025-05-27 17:47:19.551999129 +0000 UTC m=+1.188384991" May 27 17:47:19.564087 kubelet[3166]: I0527 17:47:19.564038 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.0.0-a-927e686d84" podStartSLOduration=1.564023434 podStartE2EDuration="1.564023434s" podCreationTimestamp="2025-05-27 17:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:47:19.552493584 +0000 UTC m=+1.188879447" watchObservedRunningTime="2025-05-27 17:47:19.564023434 +0000 UTC m=+1.200409288" May 27 17:47:19.564265 kubelet[3166]: I0527 17:47:19.564156 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.0.0-a-927e686d84" podStartSLOduration=1.564136529 podStartE2EDuration="1.564136529s" podCreationTimestamp="2025-05-27 17:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:47:19.563782117 +0000 UTC m=+1.200167977" watchObservedRunningTime="2025-05-27 17:47:19.564136529 +0000 UTC m=+1.200522391" May 27 17:47:20.255088 sudo[2187]: pam_unix(sudo:session): session closed for user root May 27 17:47:20.373053 sshd[2186]: Connection closed by 10.200.16.10 port 46516 May 27 17:47:20.373544 sshd-session[2184]: pam_unix(sshd:session): session closed for user core May 27 
17:47:20.376501 systemd[1]: sshd@6-10.200.8.45:22-10.200.16.10:46516.service: Deactivated successfully. May 27 17:47:20.378535 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:47:20.378742 systemd[1]: session-9.scope: Consumed 3.296s CPU time, 268.2M memory peak. May 27 17:47:20.380822 systemd-logind[1706]: Session 9 logged out. Waiting for processes to exit. May 27 17:47:20.382241 systemd-logind[1706]: Removed session 9. May 27 17:47:23.527458 kubelet[3166]: I0527 17:47:23.527423 3166 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:47:23.527989 containerd[1729]: time="2025-05-27T17:47:23.527815660Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 17:47:23.528337 kubelet[3166]: I0527 17:47:23.528003 3166 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:47:24.545418 kubelet[3166]: I0527 17:47:24.544777 3166 status_manager.go:890] "Failed to get status for pod" podUID="178952ae-7daa-4f1f-8e7f-ecf6351bd342" pod="kube-system/kube-proxy-4tg8w" err="pods \"kube-proxy-4tg8w\" is forbidden: User \"system:node:ci-4344.0.0-a-927e686d84\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object" May 27 17:47:24.545418 kubelet[3166]: W0527 17:47:24.544877 3166 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4344.0.0-a-927e686d84" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object May 27 17:47:24.545418 kubelet[3166]: E0527 17:47:24.544910 3166 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: 
configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4344.0.0-a-927e686d84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object" logger="UnhandledError" May 27 17:47:24.558387 systemd[1]: Created slice kubepods-besteffort-pod178952ae_7daa_4f1f_8e7f_ecf6351bd342.slice - libcontainer container kubepods-besteffort-pod178952ae_7daa_4f1f_8e7f_ecf6351bd342.slice. May 27 17:47:24.570306 systemd[1]: Created slice kubepods-burstable-podc6723ce1_2e6f_485e_84a6_8edd4d8d5656.slice - libcontainer container kubepods-burstable-podc6723ce1_2e6f_485e_84a6_8edd4d8d5656.slice. May 27 17:47:24.582238 kubelet[3166]: I0527 17:47:24.582217 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-cgroup\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582320 kubelet[3166]: I0527 17:47:24.582290 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-xtables-lock\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582320 kubelet[3166]: I0527 17:47:24.582313 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-config-path\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582368 kubelet[3166]: I0527 17:47:24.582337 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-host-proc-sys-kernel\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582368 kubelet[3166]: I0527 17:47:24.582358 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-clustermesh-secrets\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582411 kubelet[3166]: I0527 17:47:24.582379 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cni-path\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582411 kubelet[3166]: I0527 17:47:24.582397 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/178952ae-7daa-4f1f-8e7f-ecf6351bd342-kube-proxy\") pod \"kube-proxy-4tg8w\" (UID: \"178952ae-7daa-4f1f-8e7f-ecf6351bd342\") " pod="kube-system/kube-proxy-4tg8w" May 27 17:47:24.582457 kubelet[3166]: I0527 17:47:24.582416 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffdcl\" (UniqueName: \"kubernetes.io/projected/178952ae-7daa-4f1f-8e7f-ecf6351bd342-kube-api-access-ffdcl\") pod \"kube-proxy-4tg8w\" (UID: \"178952ae-7daa-4f1f-8e7f-ecf6351bd342\") " pod="kube-system/kube-proxy-4tg8w" May 27 17:47:24.582457 kubelet[3166]: I0527 17:47:24.582435 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-hostproc\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582494 kubelet[3166]: I0527 17:47:24.582454 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-hubble-tls\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582494 kubelet[3166]: I0527 17:47:24.582479 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-lib-modules\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582534 kubelet[3166]: I0527 17:47:24.582496 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-bpf-maps\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582534 kubelet[3166]: I0527 17:47:24.582512 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-etc-cni-netd\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582534 kubelet[3166]: I0527 17:47:24.582529 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/178952ae-7daa-4f1f-8e7f-ecf6351bd342-lib-modules\") pod \"kube-proxy-4tg8w\" (UID: \"178952ae-7daa-4f1f-8e7f-ecf6351bd342\") " 
pod="kube-system/kube-proxy-4tg8w" May 27 17:47:24.582879 kubelet[3166]: I0527 17:47:24.582556 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ph9\" (UniqueName: \"kubernetes.io/projected/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-kube-api-access-r2ph9\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582879 kubelet[3166]: I0527 17:47:24.582571 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/178952ae-7daa-4f1f-8e7f-ecf6351bd342-xtables-lock\") pod \"kube-proxy-4tg8w\" (UID: \"178952ae-7daa-4f1f-8e7f-ecf6351bd342\") " pod="kube-system/kube-proxy-4tg8w" May 27 17:47:24.582879 kubelet[3166]: I0527 17:47:24.582585 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-run\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.582879 kubelet[3166]: I0527 17:47:24.582608 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-host-proc-sys-net\") pod \"cilium-p49ql\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " pod="kube-system/cilium-p49ql" May 27 17:47:24.662581 systemd[1]: Created slice kubepods-besteffort-pod16f23d65_ff33_43da_b401_1cdfd937a4c4.slice - libcontainer container kubepods-besteffort-pod16f23d65_ff33_43da_b401_1cdfd937a4c4.slice. 
May 27 17:47:24.682958 kubelet[3166]: I0527 17:47:24.682931 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16f23d65-ff33-43da-b401-1cdfd937a4c4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cp8w4\" (UID: \"16f23d65-ff33-43da-b401-1cdfd937a4c4\") " pod="kube-system/cilium-operator-6c4d7847fc-cp8w4" May 27 17:47:24.683042 kubelet[3166]: I0527 17:47:24.683032 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lk7\" (UniqueName: \"kubernetes.io/projected/16f23d65-ff33-43da-b401-1cdfd937a4c4-kube-api-access-94lk7\") pod \"cilium-operator-6c4d7847fc-cp8w4\" (UID: \"16f23d65-ff33-43da-b401-1cdfd937a4c4\") " pod="kube-system/cilium-operator-6c4d7847fc-cp8w4" May 27 17:47:24.873988 containerd[1729]: time="2025-05-27T17:47:24.873648856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p49ql,Uid:c6723ce1-2e6f-485e-84a6-8edd4d8d5656,Namespace:kube-system,Attempt:0,}" May 27 17:47:24.907725 containerd[1729]: time="2025-05-27T17:47:24.907647896Z" level=info msg="connecting to shim c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846" address="unix:///run/containerd/s/93cdb8997b5bf8aaeca77a585282a92d4a91fa0ce1898f4941996ab902f54f64" namespace=k8s.io protocol=ttrpc version=3 May 27 17:47:24.931672 systemd[1]: Started cri-containerd-c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846.scope - libcontainer container c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846. 
May 27 17:47:24.952057 containerd[1729]: time="2025-05-27T17:47:24.952021536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p49ql,Uid:c6723ce1-2e6f-485e-84a6-8edd4d8d5656,Namespace:kube-system,Attempt:0,} returns sandbox id \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\"" May 27 17:47:24.954564 containerd[1729]: time="2025-05-27T17:47:24.954522652Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 17:47:24.965448 containerd[1729]: time="2025-05-27T17:47:24.965425615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cp8w4,Uid:16f23d65-ff33-43da-b401-1cdfd937a4c4,Namespace:kube-system,Attempt:0,}" May 27 17:47:24.996540 containerd[1729]: time="2025-05-27T17:47:24.996513261Z" level=info msg="connecting to shim f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a" address="unix:///run/containerd/s/1cba0cb2309e3f4449aa4b03f0c48aacdd15800785adceae69334658fcf14b64" namespace=k8s.io protocol=ttrpc version=3 May 27 17:47:25.012672 systemd[1]: Started cri-containerd-f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a.scope - libcontainer container f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a. 
May 27 17:47:25.044801 containerd[1729]: time="2025-05-27T17:47:25.044744625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cp8w4,Uid:16f23d65-ff33-43da-b401-1cdfd937a4c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\"" May 27 17:47:25.684289 kubelet[3166]: E0527 17:47:25.684260 3166 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 27 17:47:25.684752 kubelet[3166]: E0527 17:47:25.684336 3166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/178952ae-7daa-4f1f-8e7f-ecf6351bd342-kube-proxy podName:178952ae-7daa-4f1f-8e7f-ecf6351bd342 nodeName:}" failed. No retries permitted until 2025-05-27 17:47:26.184315425 +0000 UTC m=+7.820701281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/178952ae-7daa-4f1f-8e7f-ecf6351bd342-kube-proxy") pod "kube-proxy-4tg8w" (UID: "178952ae-7daa-4f1f-8e7f-ecf6351bd342") : failed to sync configmap cache: timed out waiting for the condition May 27 17:47:26.372060 containerd[1729]: time="2025-05-27T17:47:26.371912199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4tg8w,Uid:178952ae-7daa-4f1f-8e7f-ecf6351bd342,Namespace:kube-system,Attempt:0,}" May 27 17:47:27.238405 containerd[1729]: time="2025-05-27T17:47:27.237821012Z" level=info msg="connecting to shim 722d001d1a1741d7534d242a4ba6c49878bcdc551841205f88563500aa1b9ab1" address="unix:///run/containerd/s/0c8ca06fc8d6bf08e2eab0ee3cb32b5cc8ab72b2da581b0b80923ae791100ee0" namespace=k8s.io protocol=ttrpc version=3 May 27 17:47:27.268729 systemd[1]: Started cri-containerd-722d001d1a1741d7534d242a4ba6c49878bcdc551841205f88563500aa1b9ab1.scope - libcontainer container 722d001d1a1741d7534d242a4ba6c49878bcdc551841205f88563500aa1b9ab1. 
May 27 17:47:27.298477 containerd[1729]: time="2025-05-27T17:47:27.298444858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4tg8w,Uid:178952ae-7daa-4f1f-8e7f-ecf6351bd342,Namespace:kube-system,Attempt:0,} returns sandbox id \"722d001d1a1741d7534d242a4ba6c49878bcdc551841205f88563500aa1b9ab1\"" May 27 17:47:27.304959 containerd[1729]: time="2025-05-27T17:47:27.304933244Z" level=info msg="CreateContainer within sandbox \"722d001d1a1741d7534d242a4ba6c49878bcdc551841205f88563500aa1b9ab1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 17:47:27.327118 containerd[1729]: time="2025-05-27T17:47:27.327094662Z" level=info msg="Container 17562f974ab981debcdc8ea8c74ab2aa26f34063f986293c20ac66092407226b: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:27.343986 containerd[1729]: time="2025-05-27T17:47:27.343964439Z" level=info msg="CreateContainer within sandbox \"722d001d1a1741d7534d242a4ba6c49878bcdc551841205f88563500aa1b9ab1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17562f974ab981debcdc8ea8c74ab2aa26f34063f986293c20ac66092407226b\"" May 27 17:47:27.344777 containerd[1729]: time="2025-05-27T17:47:27.344704809Z" level=info msg="StartContainer for \"17562f974ab981debcdc8ea8c74ab2aa26f34063f986293c20ac66092407226b\"" May 27 17:47:27.347128 containerd[1729]: time="2025-05-27T17:47:27.347105076Z" level=info msg="connecting to shim 17562f974ab981debcdc8ea8c74ab2aa26f34063f986293c20ac66092407226b" address="unix:///run/containerd/s/0c8ca06fc8d6bf08e2eab0ee3cb32b5cc8ab72b2da581b0b80923ae791100ee0" protocol=ttrpc version=3 May 27 17:47:27.369807 systemd[1]: Started cri-containerd-17562f974ab981debcdc8ea8c74ab2aa26f34063f986293c20ac66092407226b.scope - libcontainer container 17562f974ab981debcdc8ea8c74ab2aa26f34063f986293c20ac66092407226b. 
May 27 17:47:27.418111 containerd[1729]: time="2025-05-27T17:47:27.418050089Z" level=info msg="StartContainer for \"17562f974ab981debcdc8ea8c74ab2aa26f34063f986293c20ac66092407226b\" returns successfully" May 27 17:47:27.522148 kubelet[3166]: I0527 17:47:27.522094 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4tg8w" podStartSLOduration=3.522074375 podStartE2EDuration="3.522074375s" podCreationTimestamp="2025-05-27 17:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:47:27.521748276 +0000 UTC m=+9.158134136" watchObservedRunningTime="2025-05-27 17:47:27.522074375 +0000 UTC m=+9.158460234" May 27 17:47:28.220479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount336904631.mount: Deactivated successfully. May 27 17:47:28.693890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49987489.mount: Deactivated successfully. 
May 27 17:47:29.999520 containerd[1729]: time="2025-05-27T17:47:29.999472041Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:30.001324 containerd[1729]: time="2025-05-27T17:47:30.001289274Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 27 17:47:30.003489 containerd[1729]: time="2025-05-27T17:47:30.003451454Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:30.004507 containerd[1729]: time="2025-05-27T17:47:30.004405551Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.049836947s" May 27 17:47:30.004507 containerd[1729]: time="2025-05-27T17:47:30.004436978Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 27 17:47:30.005804 containerd[1729]: time="2025-05-27T17:47:30.005618886Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 17:47:30.006445 containerd[1729]: time="2025-05-27T17:47:30.006412731Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 17:47:30.028568 containerd[1729]: time="2025-05-27T17:47:30.026816818Z" level=info msg="Container 81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:30.028995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316172460.mount: Deactivated successfully. May 27 17:47:30.039328 containerd[1729]: time="2025-05-27T17:47:30.039292318Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\"" May 27 17:47:30.039753 containerd[1729]: time="2025-05-27T17:47:30.039726412Z" level=info msg="StartContainer for \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\"" May 27 17:47:30.040614 containerd[1729]: time="2025-05-27T17:47:30.040574212Z" level=info msg="connecting to shim 81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac" address="unix:///run/containerd/s/93cdb8997b5bf8aaeca77a585282a92d4a91fa0ce1898f4941996ab902f54f64" protocol=ttrpc version=3 May 27 17:47:30.057739 systemd[1]: Started cri-containerd-81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac.scope - libcontainer container 81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac. May 27 17:47:30.083951 containerd[1729]: time="2025-05-27T17:47:30.083914250Z" level=info msg="StartContainer for \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\" returns successfully" May 27 17:47:30.090314 systemd[1]: cri-containerd-81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac.scope: Deactivated successfully. 
May 27 17:47:30.092126 containerd[1729]: time="2025-05-27T17:47:30.092072086Z" level=info msg="received exit event container_id:\"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\" id:\"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\" pid:3581 exited_at:{seconds:1748368050 nanos:91524233}" May 27 17:47:30.092462 containerd[1729]: time="2025-05-27T17:47:30.092444088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\" id:\"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\" pid:3581 exited_at:{seconds:1748368050 nanos:91524233}" May 27 17:47:30.105620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac-rootfs.mount: Deactivated successfully. May 27 17:47:34.167149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946006829.mount: Deactivated successfully. May 27 17:47:34.524329 containerd[1729]: time="2025-05-27T17:47:34.524286399Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 17:47:34.543409 containerd[1729]: time="2025-05-27T17:47:34.543375372Z" level=info msg="Container cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:34.549567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66215454.mount: Deactivated successfully. 
May 27 17:47:34.570133 containerd[1729]: time="2025-05-27T17:47:34.570109814Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\"" May 27 17:47:34.570475 containerd[1729]: time="2025-05-27T17:47:34.570423717Z" level=info msg="StartContainer for \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\"" May 27 17:47:34.571262 containerd[1729]: time="2025-05-27T17:47:34.571228521Z" level=info msg="connecting to shim cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077" address="unix:///run/containerd/s/93cdb8997b5bf8aaeca77a585282a92d4a91fa0ce1898f4941996ab902f54f64" protocol=ttrpc version=3 May 27 17:47:34.589681 systemd[1]: Started cri-containerd-cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077.scope - libcontainer container cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077. 
May 27 17:47:34.600536 containerd[1729]: time="2025-05-27T17:47:34.600437450Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:34.603167 containerd[1729]: time="2025-05-27T17:47:34.603012948Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 27 17:47:34.606644 containerd[1729]: time="2025-05-27T17:47:34.605685499Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:47:34.606772 containerd[1729]: time="2025-05-27T17:47:34.606745482Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.601091917s" May 27 17:47:34.606808 containerd[1729]: time="2025-05-27T17:47:34.606789757Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 27 17:47:34.612270 containerd[1729]: time="2025-05-27T17:47:34.612125056Z" level=info msg="CreateContainer within sandbox \"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 17:47:34.618964 containerd[1729]: time="2025-05-27T17:47:34.618941964Z" level=info msg="StartContainer for 
\"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\" returns successfully" May 27 17:47:34.630199 containerd[1729]: time="2025-05-27T17:47:34.629384703Z" level=info msg="Container 7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:34.629862 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:47:34.630725 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:47:34.630976 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 17:47:34.633189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:47:34.633472 systemd[1]: cri-containerd-cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077.scope: Deactivated successfully. May 27 17:47:34.636174 containerd[1729]: time="2025-05-27T17:47:34.636155262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\" id:\"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\" pid:3641 exited_at:{seconds:1748368054 nanos:635920307}" May 27 17:47:34.636667 containerd[1729]: time="2025-05-27T17:47:34.636436554Z" level=info msg="received exit event container_id:\"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\" id:\"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\" pid:3641 exited_at:{seconds:1748368054 nanos:635920307}" May 27 17:47:34.643237 containerd[1729]: time="2025-05-27T17:47:34.643140136Z" level=info msg="CreateContainer within sandbox \"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\"" May 27 17:47:34.644479 containerd[1729]: time="2025-05-27T17:47:34.644449749Z" level=info msg="StartContainer for 
\"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\"" May 27 17:47:34.646168 containerd[1729]: time="2025-05-27T17:47:34.646142725Z" level=info msg="connecting to shim 7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132" address="unix:///run/containerd/s/1cba0cb2309e3f4449aa4b03f0c48aacdd15800785adceae69334658fcf14b64" protocol=ttrpc version=3 May 27 17:47:34.658228 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:47:34.668853 systemd[1]: Started cri-containerd-7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132.scope - libcontainer container 7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132. May 27 17:47:34.936787 containerd[1729]: time="2025-05-27T17:47:34.936703193Z" level=info msg="StartContainer for \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" returns successfully" May 27 17:47:35.163000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077-rootfs.mount: Deactivated successfully. 
May 27 17:47:35.531253 containerd[1729]: time="2025-05-27T17:47:35.531197954Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 17:47:35.540416 kubelet[3166]: I0527 17:47:35.539151 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cp8w4" podStartSLOduration=1.975103483 podStartE2EDuration="11.539128636s" podCreationTimestamp="2025-05-27 17:47:24 +0000 UTC" firstStartedPulling="2025-05-27 17:47:25.045624349 +0000 UTC m=+6.682010210" lastFinishedPulling="2025-05-27 17:47:34.609649499 +0000 UTC m=+16.246035363" observedRunningTime="2025-05-27 17:47:35.538988855 +0000 UTC m=+17.175374715" watchObservedRunningTime="2025-05-27 17:47:35.539128636 +0000 UTC m=+17.175514497" May 27 17:47:35.567524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532662142.mount: Deactivated successfully. 
May 27 17:47:35.568353 containerd[1729]: time="2025-05-27T17:47:35.568301714Z" level=info msg="Container ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:35.585410 containerd[1729]: time="2025-05-27T17:47:35.585382817Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\"" May 27 17:47:35.585863 containerd[1729]: time="2025-05-27T17:47:35.585751142Z" level=info msg="StartContainer for \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\"" May 27 17:47:35.587211 containerd[1729]: time="2025-05-27T17:47:35.587185325Z" level=info msg="connecting to shim ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6" address="unix:///run/containerd/s/93cdb8997b5bf8aaeca77a585282a92d4a91fa0ce1898f4941996ab902f54f64" protocol=ttrpc version=3 May 27 17:47:35.611679 systemd[1]: Started cri-containerd-ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6.scope - libcontainer container ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6. May 27 17:47:35.637615 systemd[1]: cri-containerd-ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6.scope: Deactivated successfully. 
May 27 17:47:35.638987 containerd[1729]: time="2025-05-27T17:47:35.638769516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\" id:\"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\" pid:3721 exited_at:{seconds:1748368055 nanos:638346799}" May 27 17:47:35.640083 containerd[1729]: time="2025-05-27T17:47:35.640054872Z" level=info msg="received exit event container_id:\"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\" id:\"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\" pid:3721 exited_at:{seconds:1748368055 nanos:638346799}" May 27 17:47:35.646219 containerd[1729]: time="2025-05-27T17:47:35.646147547Z" level=info msg="StartContainer for \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\" returns successfully" May 27 17:47:36.161489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6-rootfs.mount: Deactivated successfully. May 27 17:47:36.535662 containerd[1729]: time="2025-05-27T17:47:36.535070785Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 17:47:36.554092 containerd[1729]: time="2025-05-27T17:47:36.554069736Z" level=info msg="Container a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:36.557066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703678695.mount: Deactivated successfully. 
May 27 17:47:36.566726 containerd[1729]: time="2025-05-27T17:47:36.566699580Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\"" May 27 17:47:36.567028 containerd[1729]: time="2025-05-27T17:47:36.567013026Z" level=info msg="StartContainer for \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\"" May 27 17:47:36.568534 containerd[1729]: time="2025-05-27T17:47:36.568511064Z" level=info msg="connecting to shim a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf" address="unix:///run/containerd/s/93cdb8997b5bf8aaeca77a585282a92d4a91fa0ce1898f4941996ab902f54f64" protocol=ttrpc version=3 May 27 17:47:36.589759 systemd[1]: Started cri-containerd-a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf.scope - libcontainer container a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf. May 27 17:47:36.608813 systemd[1]: cri-containerd-a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf.scope: Deactivated successfully. 
May 27 17:47:36.610599 containerd[1729]: time="2025-05-27T17:47:36.610576915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\" id:\"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\" pid:3759 exited_at:{seconds:1748368056 nanos:610192029}" May 27 17:47:36.612929 containerd[1729]: time="2025-05-27T17:47:36.612314840Z" level=info msg="received exit event container_id:\"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\" id:\"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\" pid:3759 exited_at:{seconds:1748368056 nanos:610192029}" May 27 17:47:36.617794 containerd[1729]: time="2025-05-27T17:47:36.617774656Z" level=info msg="StartContainer for \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\" returns successfully" May 27 17:47:36.625460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf-rootfs.mount: Deactivated successfully. 
May 27 17:47:37.539333 containerd[1729]: time="2025-05-27T17:47:37.539287876Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 17:47:37.562788 containerd[1729]: time="2025-05-27T17:47:37.562746779Z" level=info msg="Container b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:37.573832 containerd[1729]: time="2025-05-27T17:47:37.573804889Z" level=info msg="CreateContainer within sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\"" May 27 17:47:37.574191 containerd[1729]: time="2025-05-27T17:47:37.574175031Z" level=info msg="StartContainer for \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\"" May 27 17:47:37.575358 containerd[1729]: time="2025-05-27T17:47:37.575309247Z" level=info msg="connecting to shim b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82" address="unix:///run/containerd/s/93cdb8997b5bf8aaeca77a585282a92d4a91fa0ce1898f4941996ab902f54f64" protocol=ttrpc version=3 May 27 17:47:37.597787 systemd[1]: Started cri-containerd-b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82.scope - libcontainer container b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82. 
May 27 17:47:37.626257 containerd[1729]: time="2025-05-27T17:47:37.626229867Z" level=info msg="StartContainer for \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" returns successfully" May 27 17:47:37.678060 containerd[1729]: time="2025-05-27T17:47:37.678034546Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" id:\"24621a7306356c2d06ef8171f4fa5d4782d11a89a3bb6e8923740cf7a2ed9213\" pid:3827 exited_at:{seconds:1748368057 nanos:677832844}" May 27 17:47:37.691235 kubelet[3166]: I0527 17:47:37.691200 3166 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 17:47:37.742326 systemd[1]: Created slice kubepods-burstable-podd12365d9_7fbd_42e5_904a_dd0c0ef2e888.slice - libcontainer container kubepods-burstable-podd12365d9_7fbd_42e5_904a_dd0c0ef2e888.slice. May 27 17:47:37.752137 systemd[1]: Created slice kubepods-burstable-pod8164b142_c3fe_44d4_bb91_00b671f994d1.slice - libcontainer container kubepods-burstable-pod8164b142_c3fe_44d4_bb91_00b671f994d1.slice. 
May 27 17:47:37.768892 kubelet[3166]: I0527 17:47:37.768867 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d12365d9-7fbd-42e5-904a-dd0c0ef2e888-config-volume\") pod \"coredns-668d6bf9bc-fq7lh\" (UID: \"d12365d9-7fbd-42e5-904a-dd0c0ef2e888\") " pod="kube-system/coredns-668d6bf9bc-fq7lh" May 27 17:47:37.769037 kubelet[3166]: I0527 17:47:37.769026 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrgrw\" (UniqueName: \"kubernetes.io/projected/d12365d9-7fbd-42e5-904a-dd0c0ef2e888-kube-api-access-zrgrw\") pod \"coredns-668d6bf9bc-fq7lh\" (UID: \"d12365d9-7fbd-42e5-904a-dd0c0ef2e888\") " pod="kube-system/coredns-668d6bf9bc-fq7lh" May 27 17:47:37.769164 kubelet[3166]: I0527 17:47:37.769111 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8164b142-c3fe-44d4-bb91-00b671f994d1-config-volume\") pod \"coredns-668d6bf9bc-4lfz5\" (UID: \"8164b142-c3fe-44d4-bb91-00b671f994d1\") " pod="kube-system/coredns-668d6bf9bc-4lfz5" May 27 17:47:37.769164 kubelet[3166]: I0527 17:47:37.769134 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpbb7\" (UniqueName: \"kubernetes.io/projected/8164b142-c3fe-44d4-bb91-00b671f994d1-kube-api-access-vpbb7\") pod \"coredns-668d6bf9bc-4lfz5\" (UID: \"8164b142-c3fe-44d4-bb91-00b671f994d1\") " pod="kube-system/coredns-668d6bf9bc-4lfz5" May 27 17:47:38.048855 containerd[1729]: time="2025-05-27T17:47:38.048826270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fq7lh,Uid:d12365d9-7fbd-42e5-904a-dd0c0ef2e888,Namespace:kube-system,Attempt:0,}" May 27 17:47:38.057376 containerd[1729]: time="2025-05-27T17:47:38.057088779Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-4lfz5,Uid:8164b142-c3fe-44d4-bb91-00b671f994d1,Namespace:kube-system,Attempt:0,}" May 27 17:47:39.575769 systemd-networkd[1363]: cilium_host: Link UP May 27 17:47:39.575872 systemd-networkd[1363]: cilium_net: Link UP May 27 17:47:39.575969 systemd-networkd[1363]: cilium_net: Gained carrier May 27 17:47:39.576051 systemd-networkd[1363]: cilium_host: Gained carrier May 27 17:47:39.694429 systemd-networkd[1363]: cilium_vxlan: Link UP May 27 17:47:39.694435 systemd-networkd[1363]: cilium_vxlan: Gained carrier May 27 17:47:39.804632 systemd-networkd[1363]: cilium_net: Gained IPv6LL May 27 17:47:39.877581 kernel: NET: Registered PF_ALG protocol family May 27 17:47:40.309464 systemd-networkd[1363]: lxc_health: Link UP May 27 17:47:40.318591 systemd-networkd[1363]: cilium_host: Gained IPv6LL May 27 17:47:40.318792 systemd-networkd[1363]: lxc_health: Gained carrier May 27 17:47:40.579195 systemd-networkd[1363]: lxc00814669ec97: Link UP May 27 17:47:40.581854 kernel: eth0: renamed from tmpad1a4 May 27 17:47:40.582419 systemd-networkd[1363]: lxc00814669ec97: Gained carrier May 27 17:47:40.597162 systemd-networkd[1363]: lxc78ba69330339: Link UP May 27 17:47:40.607014 kernel: eth0: renamed from tmpea341 May 27 17:47:40.607165 systemd-networkd[1363]: lxc78ba69330339: Gained carrier May 27 17:47:40.907494 kubelet[3166]: I0527 17:47:40.907372 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p49ql" podStartSLOduration=11.855272959 podStartE2EDuration="16.907351305s" podCreationTimestamp="2025-05-27 17:47:24 +0000 UTC" firstStartedPulling="2025-05-27 17:47:24.953146274 +0000 UTC m=+6.589532128" lastFinishedPulling="2025-05-27 17:47:30.005224615 +0000 UTC m=+11.641610474" observedRunningTime="2025-05-27 17:47:38.557628422 +0000 UTC m=+20.194014283" watchObservedRunningTime="2025-05-27 17:47:40.907351305 +0000 UTC m=+22.543737167" May 27 17:47:40.956643 systemd-networkd[1363]: cilium_vxlan: 
Gained IPv6LL May 27 17:47:41.532686 systemd-networkd[1363]: lxc_health: Gained IPv6LL May 27 17:47:41.724712 systemd-networkd[1363]: lxc00814669ec97: Gained IPv6LL May 27 17:47:42.556789 systemd-networkd[1363]: lxc78ba69330339: Gained IPv6LL May 27 17:47:43.168699 containerd[1729]: time="2025-05-27T17:47:43.168652811Z" level=info msg="connecting to shim ea341d2ab991384710433e4204bbf20ce6df68d1c1e7e0eb2cb12ed203d51bb8" address="unix:///run/containerd/s/3a1ae29cec844d360c12a7ae66ab53e8abdc637719c45badcbbed579e8718535" namespace=k8s.io protocol=ttrpc version=3 May 27 17:47:43.183568 containerd[1729]: time="2025-05-27T17:47:43.178879293Z" level=info msg="connecting to shim ad1a41a69c5e2a31824997aaac0eef5976082a7928ae2b7114b5c817d0706274" address="unix:///run/containerd/s/2c9c017e187236b36787a5eee90397a042688e12c41451ae5d2150ee6e1a3f23" namespace=k8s.io protocol=ttrpc version=3 May 27 17:47:43.212708 systemd[1]: Started cri-containerd-ea341d2ab991384710433e4204bbf20ce6df68d1c1e7e0eb2cb12ed203d51bb8.scope - libcontainer container ea341d2ab991384710433e4204bbf20ce6df68d1c1e7e0eb2cb12ed203d51bb8. May 27 17:47:43.217782 systemd[1]: Started cri-containerd-ad1a41a69c5e2a31824997aaac0eef5976082a7928ae2b7114b5c817d0706274.scope - libcontainer container ad1a41a69c5e2a31824997aaac0eef5976082a7928ae2b7114b5c817d0706274. 
May 27 17:47:43.263914 containerd[1729]: time="2025-05-27T17:47:43.263869443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lfz5,Uid:8164b142-c3fe-44d4-bb91-00b671f994d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea341d2ab991384710433e4204bbf20ce6df68d1c1e7e0eb2cb12ed203d51bb8\"" May 27 17:47:43.267660 containerd[1729]: time="2025-05-27T17:47:43.267614969Z" level=info msg="CreateContainer within sandbox \"ea341d2ab991384710433e4204bbf20ce6df68d1c1e7e0eb2cb12ed203d51bb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:47:43.271420 containerd[1729]: time="2025-05-27T17:47:43.271392967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fq7lh,Uid:d12365d9-7fbd-42e5-904a-dd0c0ef2e888,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad1a41a69c5e2a31824997aaac0eef5976082a7928ae2b7114b5c817d0706274\"" May 27 17:47:43.274129 containerd[1729]: time="2025-05-27T17:47:43.274106028Z" level=info msg="CreateContainer within sandbox \"ad1a41a69c5e2a31824997aaac0eef5976082a7928ae2b7114b5c817d0706274\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:47:43.288813 containerd[1729]: time="2025-05-27T17:47:43.288791155Z" level=info msg="Container 2f424edeab74d252749f91cb665e2dbd3dc3f4327a8c8024549f37b8eb3a233b: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:43.295006 containerd[1729]: time="2025-05-27T17:47:43.294988772Z" level=info msg="Container 908365293c03cd67d48e21d54df3d496c78052e5235166ad3873f19460e65485: CDI devices from CRI Config.CDIDevices: []" May 27 17:47:43.309348 containerd[1729]: time="2025-05-27T17:47:43.309245925Z" level=info msg="CreateContainer within sandbox \"ad1a41a69c5e2a31824997aaac0eef5976082a7928ae2b7114b5c817d0706274\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"908365293c03cd67d48e21d54df3d496c78052e5235166ad3873f19460e65485\"" May 27 17:47:43.309799 containerd[1729]: 
time="2025-05-27T17:47:43.309740156Z" level=info msg="StartContainer for \"908365293c03cd67d48e21d54df3d496c78052e5235166ad3873f19460e65485\"" May 27 17:47:43.311134 containerd[1729]: time="2025-05-27T17:47:43.310892498Z" level=info msg="CreateContainer within sandbox \"ea341d2ab991384710433e4204bbf20ce6df68d1c1e7e0eb2cb12ed203d51bb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f424edeab74d252749f91cb665e2dbd3dc3f4327a8c8024549f37b8eb3a233b\"" May 27 17:47:43.311134 containerd[1729]: time="2025-05-27T17:47:43.311091695Z" level=info msg="connecting to shim 908365293c03cd67d48e21d54df3d496c78052e5235166ad3873f19460e65485" address="unix:///run/containerd/s/2c9c017e187236b36787a5eee90397a042688e12c41451ae5d2150ee6e1a3f23" protocol=ttrpc version=3 May 27 17:47:43.311291 containerd[1729]: time="2025-05-27T17:47:43.311196920Z" level=info msg="StartContainer for \"2f424edeab74d252749f91cb665e2dbd3dc3f4327a8c8024549f37b8eb3a233b\"" May 27 17:47:43.312261 containerd[1729]: time="2025-05-27T17:47:43.312191856Z" level=info msg="connecting to shim 2f424edeab74d252749f91cb665e2dbd3dc3f4327a8c8024549f37b8eb3a233b" address="unix:///run/containerd/s/3a1ae29cec844d360c12a7ae66ab53e8abdc637719c45badcbbed579e8718535" protocol=ttrpc version=3 May 27 17:47:43.327651 systemd[1]: Started cri-containerd-908365293c03cd67d48e21d54df3d496c78052e5235166ad3873f19460e65485.scope - libcontainer container 908365293c03cd67d48e21d54df3d496c78052e5235166ad3873f19460e65485. May 27 17:47:43.330626 systemd[1]: Started cri-containerd-2f424edeab74d252749f91cb665e2dbd3dc3f4327a8c8024549f37b8eb3a233b.scope - libcontainer container 2f424edeab74d252749f91cb665e2dbd3dc3f4327a8c8024549f37b8eb3a233b. 
May 27 17:47:43.361534 containerd[1729]: time="2025-05-27T17:47:43.361509433Z" level=info msg="StartContainer for \"908365293c03cd67d48e21d54df3d496c78052e5235166ad3873f19460e65485\" returns successfully" May 27 17:47:43.361981 containerd[1729]: time="2025-05-27T17:47:43.361960301Z" level=info msg="StartContainer for \"2f424edeab74d252749f91cb665e2dbd3dc3f4327a8c8024549f37b8eb3a233b\" returns successfully" May 27 17:47:43.563367 kubelet[3166]: I0527 17:47:43.563314 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fq7lh" podStartSLOduration=19.563291637 podStartE2EDuration="19.563291637s" podCreationTimestamp="2025-05-27 17:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:47:43.561442671 +0000 UTC m=+25.197828531" watchObservedRunningTime="2025-05-27 17:47:43.563291637 +0000 UTC m=+25.199677498" May 27 17:47:43.597354 kubelet[3166]: I0527 17:47:43.597273 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4lfz5" podStartSLOduration=19.597250472 podStartE2EDuration="19.597250472s" podCreationTimestamp="2025-05-27 17:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:47:43.59624641 +0000 UTC m=+25.232632271" watchObservedRunningTime="2025-05-27 17:47:43.597250472 +0000 UTC m=+25.233636354" May 27 17:47:51.261679 kubelet[3166]: I0527 17:47:51.261579 3166 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 17:48:54.602088 systemd[1]: Started sshd@7-10.200.8.45:22-10.200.16.10:53482.service - OpenSSH per-connection server daemon (10.200.16.10:53482). 
May 27 17:48:55.229862 sshd[4466]: Accepted publickey for core from 10.200.16.10 port 53482 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:48:55.231096 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:48:55.235621 systemd-logind[1706]: New session 10 of user core. May 27 17:48:55.238718 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:48:55.739739 sshd[4468]: Connection closed by 10.200.16.10 port 53482 May 27 17:48:55.740303 sshd-session[4466]: pam_unix(sshd:session): session closed for user core May 27 17:48:55.743900 systemd[1]: sshd@7-10.200.8.45:22-10.200.16.10:53482.service: Deactivated successfully. May 27 17:48:55.745776 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:48:55.746502 systemd-logind[1706]: Session 10 logged out. Waiting for processes to exit. May 27 17:48:55.747513 systemd-logind[1706]: Removed session 10. May 27 17:49:00.860149 systemd[1]: Started sshd@8-10.200.8.45:22-10.200.16.10:41030.service - OpenSSH per-connection server daemon (10.200.16.10:41030). May 27 17:49:01.488655 sshd[4486]: Accepted publickey for core from 10.200.16.10 port 41030 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:01.489780 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:01.493949 systemd-logind[1706]: New session 11 of user core. May 27 17:49:01.499707 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:49:01.979678 sshd[4488]: Connection closed by 10.200.16.10 port 41030 May 27 17:49:01.980360 sshd-session[4486]: pam_unix(sshd:session): session closed for user core May 27 17:49:01.983821 systemd[1]: sshd@8-10.200.8.45:22-10.200.16.10:41030.service: Deactivated successfully. May 27 17:49:01.985926 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:49:01.987147 systemd-logind[1706]: Session 11 logged out. 
Waiting for processes to exit. May 27 17:49:01.988134 systemd-logind[1706]: Removed session 11. May 27 17:49:07.090087 systemd[1]: Started sshd@9-10.200.8.45:22-10.200.16.10:41038.service - OpenSSH per-connection server daemon (10.200.16.10:41038). May 27 17:49:07.723012 sshd[4501]: Accepted publickey for core from 10.200.16.10 port 41038 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:07.724340 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:07.728787 systemd-logind[1706]: New session 12 of user core. May 27 17:49:07.732693 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:49:08.216334 sshd[4503]: Connection closed by 10.200.16.10 port 41038 May 27 17:49:08.216820 sshd-session[4501]: pam_unix(sshd:session): session closed for user core May 27 17:49:08.219701 systemd[1]: sshd@9-10.200.8.45:22-10.200.16.10:41038.service: Deactivated successfully. May 27 17:49:08.221367 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:49:08.222062 systemd-logind[1706]: Session 12 logged out. Waiting for processes to exit. May 27 17:49:08.223045 systemd-logind[1706]: Removed session 12. May 27 17:49:13.330009 systemd[1]: Started sshd@10-10.200.8.45:22-10.200.16.10:43544.service - OpenSSH per-connection server daemon (10.200.16.10:43544). May 27 17:49:13.956338 sshd[4516]: Accepted publickey for core from 10.200.16.10 port 43544 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:13.957896 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:13.962323 systemd-logind[1706]: New session 13 of user core. May 27 17:49:13.966696 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 27 17:49:14.445583 sshd[4518]: Connection closed by 10.200.16.10 port 43544 May 27 17:49:14.446056 sshd-session[4516]: pam_unix(sshd:session): session closed for user core May 27 17:49:14.449204 systemd[1]: sshd@10-10.200.8.45:22-10.200.16.10:43544.service: Deactivated successfully. May 27 17:49:14.451027 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:49:14.451758 systemd-logind[1706]: Session 13 logged out. Waiting for processes to exit. May 27 17:49:14.452909 systemd-logind[1706]: Removed session 13. May 27 17:49:14.560226 systemd[1]: Started sshd@11-10.200.8.45:22-10.200.16.10:43546.service - OpenSSH per-connection server daemon (10.200.16.10:43546). May 27 17:49:15.191446 sshd[4531]: Accepted publickey for core from 10.200.16.10 port 43546 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:15.192600 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:15.196779 systemd-logind[1706]: New session 14 of user core. May 27 17:49:15.200717 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:49:15.710968 sshd[4533]: Connection closed by 10.200.16.10 port 43546 May 27 17:49:15.711447 sshd-session[4531]: pam_unix(sshd:session): session closed for user core May 27 17:49:15.714099 systemd[1]: sshd@11-10.200.8.45:22-10.200.16.10:43546.service: Deactivated successfully. May 27 17:49:15.715965 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:49:15.718010 systemd-logind[1706]: Session 14 logged out. Waiting for processes to exit. May 27 17:49:15.719379 systemd-logind[1706]: Removed session 14. May 27 17:49:15.829069 systemd[1]: Started sshd@12-10.200.8.45:22-10.200.16.10:43552.service - OpenSSH per-connection server daemon (10.200.16.10:43552). 
May 27 17:49:16.457368 sshd[4543]: Accepted publickey for core from 10.200.16.10 port 43552 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:16.458542 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:16.462637 systemd-logind[1706]: New session 15 of user core. May 27 17:49:16.467705 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 17:49:16.944552 sshd[4545]: Connection closed by 10.200.16.10 port 43552 May 27 17:49:16.945012 sshd-session[4543]: pam_unix(sshd:session): session closed for user core May 27 17:49:16.947455 systemd[1]: sshd@12-10.200.8.45:22-10.200.16.10:43552.service: Deactivated successfully. May 27 17:49:16.949279 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:49:16.950571 systemd-logind[1706]: Session 15 logged out. Waiting for processes to exit. May 27 17:49:16.952015 systemd-logind[1706]: Removed session 15. May 27 17:49:22.057086 systemd[1]: Started sshd@13-10.200.8.45:22-10.200.16.10:55524.service - OpenSSH per-connection server daemon (10.200.16.10:55524). May 27 17:49:22.685766 sshd[4559]: Accepted publickey for core from 10.200.16.10 port 55524 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:22.687325 sshd-session[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:22.691606 systemd-logind[1706]: New session 16 of user core. May 27 17:49:22.701676 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:49:23.176772 sshd[4561]: Connection closed by 10.200.16.10 port 55524 May 27 17:49:23.177260 sshd-session[4559]: pam_unix(sshd:session): session closed for user core May 27 17:49:23.179824 systemd[1]: sshd@13-10.200.8.45:22-10.200.16.10:55524.service: Deactivated successfully. May 27 17:49:23.181689 systemd[1]: session-16.scope: Deactivated successfully. May 27 17:49:23.183209 systemd-logind[1706]: Session 16 logged out. 
Waiting for processes to exit. May 27 17:49:23.184808 systemd-logind[1706]: Removed session 16. May 27 17:49:23.287273 systemd[1]: Started sshd@14-10.200.8.45:22-10.200.16.10:55538.service - OpenSSH per-connection server daemon (10.200.16.10:55538). May 27 17:49:23.910870 sshd[4573]: Accepted publickey for core from 10.200.16.10 port 55538 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:23.912439 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:23.916624 systemd-logind[1706]: New session 17 of user core. May 27 17:49:23.922669 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 17:49:24.491801 sshd[4575]: Connection closed by 10.200.16.10 port 55538 May 27 17:49:24.492327 sshd-session[4573]: pam_unix(sshd:session): session closed for user core May 27 17:49:24.495031 systemd[1]: sshd@14-10.200.8.45:22-10.200.16.10:55538.service: Deactivated successfully. May 27 17:49:24.496993 systemd[1]: session-17.scope: Deactivated successfully. May 27 17:49:24.498290 systemd-logind[1706]: Session 17 logged out. Waiting for processes to exit. May 27 17:49:24.499789 systemd-logind[1706]: Removed session 17. May 27 17:49:24.607130 systemd[1]: Started sshd@15-10.200.8.45:22-10.200.16.10:55546.service - OpenSSH per-connection server daemon (10.200.16.10:55546). May 27 17:49:25.230600 sshd[4585]: Accepted publickey for core from 10.200.16.10 port 55546 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:25.231727 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:25.235853 systemd-logind[1706]: New session 18 of user core. May 27 17:49:25.242689 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 27 17:49:26.502242 sshd[4587]: Connection closed by 10.200.16.10 port 55546 May 27 17:49:26.502973 sshd-session[4585]: pam_unix(sshd:session): session closed for user core May 27 17:49:26.505992 systemd[1]: sshd@15-10.200.8.45:22-10.200.16.10:55546.service: Deactivated successfully. May 27 17:49:26.507947 systemd[1]: session-18.scope: Deactivated successfully. May 27 17:49:26.509168 systemd-logind[1706]: Session 18 logged out. Waiting for processes to exit. May 27 17:49:26.510579 systemd-logind[1706]: Removed session 18. May 27 17:49:26.615307 systemd[1]: Started sshd@16-10.200.8.45:22-10.200.16.10:55556.service - OpenSSH per-connection server daemon (10.200.16.10:55556). May 27 17:49:27.247283 sshd[4604]: Accepted publickey for core from 10.200.16.10 port 55556 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:27.248470 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:27.253601 systemd-logind[1706]: New session 19 of user core. May 27 17:49:27.256717 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 17:49:27.805459 sshd[4606]: Connection closed by 10.200.16.10 port 55556 May 27 17:49:27.805982 sshd-session[4604]: pam_unix(sshd:session): session closed for user core May 27 17:49:27.809256 systemd[1]: sshd@16-10.200.8.45:22-10.200.16.10:55556.service: Deactivated successfully. May 27 17:49:27.811022 systemd[1]: session-19.scope: Deactivated successfully. May 27 17:49:27.811699 systemd-logind[1706]: Session 19 logged out. Waiting for processes to exit. May 27 17:49:27.813021 systemd-logind[1706]: Removed session 19. May 27 17:49:27.921094 systemd[1]: Started sshd@17-10.200.8.45:22-10.200.16.10:55572.service - OpenSSH per-connection server daemon (10.200.16.10:55572). 
May 27 17:49:28.548996 sshd[4617]: Accepted publickey for core from 10.200.16.10 port 55572 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:28.550181 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:28.554512 systemd-logind[1706]: New session 20 of user core. May 27 17:49:28.559684 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 17:49:29.034472 sshd[4619]: Connection closed by 10.200.16.10 port 55572 May 27 17:49:29.034915 sshd-session[4617]: pam_unix(sshd:session): session closed for user core May 27 17:49:29.037708 systemd[1]: sshd@17-10.200.8.45:22-10.200.16.10:55572.service: Deactivated successfully. May 27 17:49:29.039470 systemd[1]: session-20.scope: Deactivated successfully. May 27 17:49:29.040142 systemd-logind[1706]: Session 20 logged out. Waiting for processes to exit. May 27 17:49:29.041322 systemd-logind[1706]: Removed session 20. May 27 17:49:34.151029 systemd[1]: Started sshd@18-10.200.8.45:22-10.200.16.10:34448.service - OpenSSH per-connection server daemon (10.200.16.10:34448). May 27 17:49:34.781030 sshd[4633]: Accepted publickey for core from 10.200.16.10 port 34448 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:34.782277 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:34.786667 systemd-logind[1706]: New session 21 of user core. May 27 17:49:34.793685 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 17:49:35.276255 sshd[4635]: Connection closed by 10.200.16.10 port 34448 May 27 17:49:35.276855 sshd-session[4633]: pam_unix(sshd:session): session closed for user core May 27 17:49:35.280125 systemd[1]: sshd@18-10.200.8.45:22-10.200.16.10:34448.service: Deactivated successfully. May 27 17:49:35.281900 systemd[1]: session-21.scope: Deactivated successfully. May 27 17:49:35.282624 systemd-logind[1706]: Session 21 logged out. 
Waiting for processes to exit. May 27 17:49:35.283967 systemd-logind[1706]: Removed session 21. May 27 17:49:40.390318 systemd[1]: Started sshd@19-10.200.8.45:22-10.200.16.10:60136.service - OpenSSH per-connection server daemon (10.200.16.10:60136). May 27 17:49:41.015974 sshd[4647]: Accepted publickey for core from 10.200.16.10 port 60136 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:41.017128 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:41.020848 systemd-logind[1706]: New session 22 of user core. May 27 17:49:41.026718 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 17:49:41.504442 sshd[4649]: Connection closed by 10.200.16.10 port 60136 May 27 17:49:41.505371 sshd-session[4647]: pam_unix(sshd:session): session closed for user core May 27 17:49:41.508903 systemd[1]: sshd@19-10.200.8.45:22-10.200.16.10:60136.service: Deactivated successfully. May 27 17:49:41.510863 systemd[1]: session-22.scope: Deactivated successfully. May 27 17:49:41.511625 systemd-logind[1706]: Session 22 logged out. Waiting for processes to exit. May 27 17:49:41.512977 systemd-logind[1706]: Removed session 22. May 27 17:49:46.623947 systemd[1]: Started sshd@20-10.200.8.45:22-10.200.16.10:60152.service - OpenSSH per-connection server daemon (10.200.16.10:60152). May 27 17:49:47.252886 sshd[4660]: Accepted publickey for core from 10.200.16.10 port 60152 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:47.254054 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:47.258280 systemd-logind[1706]: New session 23 of user core. May 27 17:49:47.262750 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 27 17:49:47.738097 sshd[4662]: Connection closed by 10.200.16.10 port 60152 May 27 17:49:47.738615 sshd-session[4660]: pam_unix(sshd:session): session closed for user core May 27 17:49:47.741849 systemd[1]: sshd@20-10.200.8.45:22-10.200.16.10:60152.service: Deactivated successfully. May 27 17:49:47.743504 systemd[1]: session-23.scope: Deactivated successfully. May 27 17:49:47.744178 systemd-logind[1706]: Session 23 logged out. Waiting for processes to exit. May 27 17:49:47.745277 systemd-logind[1706]: Removed session 23. May 27 17:49:47.848211 systemd[1]: Started sshd@21-10.200.8.45:22-10.200.16.10:60154.service - OpenSSH per-connection server daemon (10.200.16.10:60154). May 27 17:49:48.473102 sshd[4673]: Accepted publickey for core from 10.200.16.10 port 60154 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590 May 27 17:49:48.474313 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:49:48.478655 systemd-logind[1706]: New session 24 of user core. May 27 17:49:48.484727 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 27 17:49:50.116950 containerd[1729]: time="2025-05-27T17:49:50.116897104Z" level=info msg="StopContainer for \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" with timeout 30 (s)" May 27 17:49:50.117563 containerd[1729]: time="2025-05-27T17:49:50.117530357Z" level=info msg="Stop container \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" with signal terminated" May 27 17:49:50.124959 containerd[1729]: time="2025-05-27T17:49:50.124922986Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:49:50.129437 systemd[1]: cri-containerd-7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132.scope: Deactivated successfully. May 27 17:49:50.134113 containerd[1729]: time="2025-05-27T17:49:50.133529323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" id:\"9e2ce775aefbd4c756a92cc57871ba836e739df58d01895ccf2c9cb2d4d76a8c\" pid:4694 exited_at:{seconds:1748368190 nanos:132405588}" May 27 17:49:50.134476 containerd[1729]: time="2025-05-27T17:49:50.134459856Z" level=info msg="received exit event container_id:\"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" id:\"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" pid:3687 exited_at:{seconds:1748368190 nanos:132671290}" May 27 17:49:50.134892 containerd[1729]: time="2025-05-27T17:49:50.134877650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" id:\"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" pid:3687 exited_at:{seconds:1748368190 nanos:132671290}" May 27 17:49:50.137214 containerd[1729]: time="2025-05-27T17:49:50.137002500Z" level=info 
msg="StopContainer for \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" with timeout 2 (s)" May 27 17:49:50.137510 containerd[1729]: time="2025-05-27T17:49:50.137491756Z" level=info msg="Stop container \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" with signal terminated" May 27 17:49:50.146825 systemd-networkd[1363]: lxc_health: Link DOWN May 27 17:49:50.146833 systemd-networkd[1363]: lxc_health: Lost carrier May 27 17:49:50.161681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132-rootfs.mount: Deactivated successfully. May 27 17:49:50.163254 systemd[1]: cri-containerd-b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82.scope: Deactivated successfully. May 27 17:49:50.164213 systemd[1]: cri-containerd-b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82.scope: Consumed 4.734s CPU time, 124.4M memory peak, 136K read from disk, 13.3M written to disk. May 27 17:49:50.166141 containerd[1729]: time="2025-05-27T17:49:50.166072131Z" level=info msg="received exit event container_id:\"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" id:\"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" pid:3795 exited_at:{seconds:1748368190 nanos:165504356}" May 27 17:49:50.166270 containerd[1729]: time="2025-05-27T17:49:50.166239799Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" id:\"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" pid:3795 exited_at:{seconds:1748368190 nanos:165504356}" May 27 17:49:50.179245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82-rootfs.mount: Deactivated successfully. 
May 27 17:49:50.252720 containerd[1729]: time="2025-05-27T17:49:50.252696627Z" level=info msg="StopContainer for \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" returns successfully" May 27 17:49:50.253282 containerd[1729]: time="2025-05-27T17:49:50.253259618Z" level=info msg="StopPodSandbox for \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\"" May 27 17:49:50.253337 containerd[1729]: time="2025-05-27T17:49:50.253318742Z" level=info msg="Container to stop \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:49:50.253337 containerd[1729]: time="2025-05-27T17:49:50.253330082Z" level=info msg="Container to stop \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:49:50.253387 containerd[1729]: time="2025-05-27T17:49:50.253340137Z" level=info msg="Container to stop \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:49:50.253387 containerd[1729]: time="2025-05-27T17:49:50.253350412Z" level=info msg="Container to stop \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:49:50.253387 containerd[1729]: time="2025-05-27T17:49:50.253358276Z" level=info msg="Container to stop \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:49:50.256416 containerd[1729]: time="2025-05-27T17:49:50.256395449Z" level=info msg="StopContainer for \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" returns successfully" May 27 17:49:50.257040 containerd[1729]: time="2025-05-27T17:49:50.256990152Z" level=info msg="StopPodSandbox for 
\"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\"" May 27 17:49:50.257186 containerd[1729]: time="2025-05-27T17:49:50.257133116Z" level=info msg="Container to stop \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:49:50.259711 systemd[1]: cri-containerd-c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846.scope: Deactivated successfully. May 27 17:49:50.261519 containerd[1729]: time="2025-05-27T17:49:50.261420814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" id:\"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" pid:3274 exit_status:137 exited_at:{seconds:1748368190 nanos:261073135}" May 27 17:49:50.265505 systemd[1]: cri-containerd-f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a.scope: Deactivated successfully. May 27 17:49:50.286004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a-rootfs.mount: Deactivated successfully. May 27 17:49:50.288758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846-rootfs.mount: Deactivated successfully. 
May 27 17:49:50.299566 containerd[1729]: time="2025-05-27T17:49:50.299196431Z" level=info msg="shim disconnected" id=f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a namespace=k8s.io May 27 17:49:50.299566 containerd[1729]: time="2025-05-27T17:49:50.299523732Z" level=warning msg="cleaning up after shim disconnected" id=f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a namespace=k8s.io May 27 17:49:50.300172 containerd[1729]: time="2025-05-27T17:49:50.299534755Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:49:50.300292 containerd[1729]: time="2025-05-27T17:49:50.299333675Z" level=info msg="shim disconnected" id=c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846 namespace=k8s.io May 27 17:49:50.300292 containerd[1729]: time="2025-05-27T17:49:50.300285400Z" level=warning msg="cleaning up after shim disconnected" id=c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846 namespace=k8s.io May 27 17:49:50.300345 containerd[1729]: time="2025-05-27T17:49:50.300293314Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:49:50.314748 containerd[1729]: time="2025-05-27T17:49:50.314675958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\" id:\"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\" pid:3321 exit_status:137 exited_at:{seconds:1748368190 nanos:267034516}" May 27 17:49:50.314748 containerd[1729]: time="2025-05-27T17:49:50.314699273Z" level=info msg="received exit event sandbox_id:\"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" exit_status:137 exited_at:{seconds:1748368190 nanos:261073135}" May 27 17:49:50.314748 containerd[1729]: time="2025-05-27T17:49:50.314679432Z" level=info msg="received exit event sandbox_id:\"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\" exit_status:137 exited_at:{seconds:1748368190 nanos:267034516}" May 27 
17:49:50.315764 containerd[1729]: time="2025-05-27T17:49:50.315743338Z" level=info msg="TearDown network for sandbox \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" successfully" May 27 17:49:50.315832 containerd[1729]: time="2025-05-27T17:49:50.315766256Z" level=info msg="StopPodSandbox for \"c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846\" returns successfully" May 27 17:49:50.317122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c93a52023fc50579852e2fdf13e8b1c881ab6baad4cd230405edc5c907cd7846-shm.mount: Deactivated successfully. May 27 17:49:50.317638 containerd[1729]: time="2025-05-27T17:49:50.317578916Z" level=info msg="TearDown network for sandbox \"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\" successfully" May 27 17:49:50.317638 containerd[1729]: time="2025-05-27T17:49:50.317598649Z" level=info msg="StopPodSandbox for \"f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a\" returns successfully" May 27 17:49:50.449091 kubelet[3166]: I0527 17:49:50.448973 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-cgroup\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.449091 kubelet[3166]: I0527 17:49:50.449005 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-hostproc\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.449091 kubelet[3166]: I0527 17:49:50.449004 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: 
"c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.449091 kubelet[3166]: I0527 17:49:50.449028 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-clustermesh-secrets\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.449091 kubelet[3166]: I0527 17:49:50.449045 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-run\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.449091 kubelet[3166]: I0527 17:49:50.449061 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-xtables-lock\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.449459 kubelet[3166]: I0527 17:49:50.449077 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-host-proc-sys-net\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450572 kubelet[3166]: I0527 17:49:50.449863 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-hubble-tls\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450572 kubelet[3166]: I0527 17:49:50.449887 3166 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-bpf-maps\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450572 kubelet[3166]: I0527 17:49:50.449905 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-lib-modules\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450572 kubelet[3166]: I0527 17:49:50.449933 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94lk7\" (UniqueName: \"kubernetes.io/projected/16f23d65-ff33-43da-b401-1cdfd937a4c4-kube-api-access-94lk7\") pod \"16f23d65-ff33-43da-b401-1cdfd937a4c4\" (UID: \"16f23d65-ff33-43da-b401-1cdfd937a4c4\") " May 27 17:49:50.450572 kubelet[3166]: I0527 17:49:50.449954 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cni-path\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450572 kubelet[3166]: I0527 17:49:50.449981 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ph9\" (UniqueName: \"kubernetes.io/projected/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-kube-api-access-r2ph9\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450748 kubelet[3166]: I0527 17:49:50.449999 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-host-proc-sys-kernel\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: 
\"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450748 kubelet[3166]: I0527 17:49:50.450017 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-etc-cni-netd\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450748 kubelet[3166]: I0527 17:49:50.450040 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-config-path\") pod \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\" (UID: \"c6723ce1-2e6f-485e-84a6-8edd4d8d5656\") " May 27 17:49:50.450748 kubelet[3166]: I0527 17:49:50.450063 3166 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16f23d65-ff33-43da-b401-1cdfd937a4c4-cilium-config-path\") pod \"16f23d65-ff33-43da-b401-1cdfd937a4c4\" (UID: \"16f23d65-ff33-43da-b401-1cdfd937a4c4\") " May 27 17:49:50.450748 kubelet[3166]: I0527 17:49:50.450100 3166 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-cgroup\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.452666 kubelet[3166]: I0527 17:49:50.449499 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.454397 kubelet[3166]: I0527 17:49:50.449514 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.456251 kubelet[3166]: I0527 17:49:50.449577 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.456347 kubelet[3166]: I0527 17:49:50.449590 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.456383 kubelet[3166]: I0527 17:49:50.451377 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.456814 kubelet[3166]: I0527 17:49:50.452686 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.456888 kubelet[3166]: I0527 17:49:50.452704 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.456919 kubelet[3166]: I0527 17:49:50.453542 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.456957 kubelet[3166]: I0527 17:49:50.453575 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 17:49:50.456984 kubelet[3166]: I0527 17:49:50.454365 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f23d65-ff33-43da-b401-1cdfd937a4c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "16f23d65-ff33-43da-b401-1cdfd937a4c4" (UID: "16f23d65-ff33-43da-b401-1cdfd937a4c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:49:50.457064 kubelet[3166]: I0527 17:49:50.454636 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 17:49:50.457064 kubelet[3166]: I0527 17:49:50.454685 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 17:49:50.457064 kubelet[3166]: I0527 17:49:50.456788 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:49:50.457064 kubelet[3166]: I0527 17:49:50.457051 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f23d65-ff33-43da-b401-1cdfd937a4c4-kube-api-access-94lk7" (OuterVolumeSpecName: "kube-api-access-94lk7") pod "16f23d65-ff33-43da-b401-1cdfd937a4c4" (UID: "16f23d65-ff33-43da-b401-1cdfd937a4c4"). InnerVolumeSpecName "kube-api-access-94lk7". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:49:50.457378 kubelet[3166]: I0527 17:49:50.457359 3166 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-kube-api-access-r2ph9" (OuterVolumeSpecName: "kube-api-access-r2ph9") pod "c6723ce1-2e6f-485e-84a6-8edd4d8d5656" (UID: "c6723ce1-2e6f-485e-84a6-8edd4d8d5656"). InnerVolumeSpecName "kube-api-access-r2ph9". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 17:49:50.550634 kubelet[3166]: I0527 17:49:50.550617 3166 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-clustermesh-secrets\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550710 kubelet[3166]: I0527 17:49:50.550637 3166 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-run\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550710 kubelet[3166]: I0527 17:49:50.550649 3166 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-xtables-lock\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550710 kubelet[3166]: I0527 17:49:50.550657 3166 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-hubble-tls\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550710 kubelet[3166]: I0527 17:49:50.550666 3166 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-bpf-maps\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550710 kubelet[3166]: I0527 17:49:50.550701 3166 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-lib-modules\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550830 kubelet[3166]: I0527 17:49:50.550710 3166 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94lk7\" (UniqueName: \"kubernetes.io/projected/16f23d65-ff33-43da-b401-1cdfd937a4c4-kube-api-access-94lk7\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550830 kubelet[3166]: I0527 17:49:50.550720 3166 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-host-proc-sys-net\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550830 kubelet[3166]: I0527 17:49:50.550731 3166 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cni-path\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550830 kubelet[3166]: I0527 17:49:50.550741 3166 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r2ph9\" (UniqueName: \"kubernetes.io/projected/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-kube-api-access-r2ph9\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550830 kubelet[3166]: I0527 17:49:50.550751 3166 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-etc-cni-netd\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550830 kubelet[3166]: I0527 17:49:50.550762 3166 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-host-proc-sys-kernel\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550830 kubelet[3166]: I0527 17:49:50.550771 3166 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-cilium-config-path\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550830 kubelet[3166]: I0527 17:49:50.550783 3166 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16f23d65-ff33-43da-b401-1cdfd937a4c4-cilium-config-path\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.550958 kubelet[3166]: I0527 17:49:50.550792 3166 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6723ce1-2e6f-485e-84a6-8edd4d8d5656-hostproc\") on node \"ci-4344.0.0-a-927e686d84\" DevicePath \"\"" May 27 17:49:50.790660 kubelet[3166]: I0527 17:49:50.790644 3166 scope.go:117] "RemoveContainer" containerID="b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82" May 27 17:49:50.795607 containerd[1729]: time="2025-05-27T17:49:50.794923533Z" level=info msg="RemoveContainer for \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\"" May 27 17:49:50.796267 systemd[1]: Removed slice kubepods-burstable-podc6723ce1_2e6f_485e_84a6_8edd4d8d5656.slice - libcontainer container kubepods-burstable-podc6723ce1_2e6f_485e_84a6_8edd4d8d5656.slice. 
May 27 17:49:50.796380 systemd[1]: kubepods-burstable-podc6723ce1_2e6f_485e_84a6_8edd4d8d5656.slice: Consumed 4.800s CPU time, 124.9M memory peak, 136K read from disk, 13.3M written to disk. May 27 17:49:50.803108 systemd[1]: Removed slice kubepods-besteffort-pod16f23d65_ff33_43da_b401_1cdfd937a4c4.slice - libcontainer container kubepods-besteffort-pod16f23d65_ff33_43da_b401_1cdfd937a4c4.slice. May 27 17:49:50.804495 containerd[1729]: time="2025-05-27T17:49:50.804267391Z" level=info msg="RemoveContainer for \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" returns successfully" May 27 17:49:50.804767 kubelet[3166]: I0527 17:49:50.804747 3166 scope.go:117] "RemoveContainer" containerID="a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf" May 27 17:49:50.806296 containerd[1729]: time="2025-05-27T17:49:50.806273801Z" level=info msg="RemoveContainer for \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\"" May 27 17:49:50.814217 containerd[1729]: time="2025-05-27T17:49:50.814191237Z" level=info msg="RemoveContainer for \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\" returns successfully" May 27 17:49:50.814404 kubelet[3166]: I0527 17:49:50.814363 3166 scope.go:117] "RemoveContainer" containerID="ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6" May 27 17:49:50.816250 containerd[1729]: time="2025-05-27T17:49:50.816225654Z" level=info msg="RemoveContainer for \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\"" May 27 17:49:50.829008 containerd[1729]: time="2025-05-27T17:49:50.828977004Z" level=info msg="RemoveContainer for \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\" returns successfully" May 27 17:49:50.829153 kubelet[3166]: I0527 17:49:50.829125 3166 scope.go:117] "RemoveContainer" containerID="cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077" May 27 17:49:50.830351 containerd[1729]: time="2025-05-27T17:49:50.830315315Z" 
level=info msg="RemoveContainer for \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\"" May 27 17:49:50.836210 containerd[1729]: time="2025-05-27T17:49:50.836189973Z" level=info msg="RemoveContainer for \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\" returns successfully" May 27 17:49:50.836342 kubelet[3166]: I0527 17:49:50.836321 3166 scope.go:117] "RemoveContainer" containerID="81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac" May 27 17:49:50.838572 containerd[1729]: time="2025-05-27T17:49:50.838463840Z" level=info msg="RemoveContainer for \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\"" May 27 17:49:50.844844 containerd[1729]: time="2025-05-27T17:49:50.844823100Z" level=info msg="RemoveContainer for \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\" returns successfully" May 27 17:49:50.844978 kubelet[3166]: I0527 17:49:50.844959 3166 scope.go:117] "RemoveContainer" containerID="b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82" May 27 17:49:50.845150 containerd[1729]: time="2025-05-27T17:49:50.845124791Z" level=error msg="ContainerStatus for \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\": not found" May 27 17:49:50.845255 kubelet[3166]: E0527 17:49:50.845221 3166 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\": not found" containerID="b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82" May 27 17:49:50.845346 kubelet[3166]: I0527 17:49:50.845265 3166 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82"} err="failed to get container status \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9935d2166e2661c82a279fadc2ea9bf0ffb942e9f15fa5f0b49aabb70fe2e82\": not found" May 27 17:49:50.845377 kubelet[3166]: I0527 17:49:50.845347 3166 scope.go:117] "RemoveContainer" containerID="a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf" May 27 17:49:50.845527 containerd[1729]: time="2025-05-27T17:49:50.845490161Z" level=error msg="ContainerStatus for \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\": not found" May 27 17:49:50.845655 kubelet[3166]: E0527 17:49:50.845638 3166 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\": not found" containerID="a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf" May 27 17:49:50.845693 kubelet[3166]: I0527 17:49:50.845664 3166 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf"} err="failed to get container status \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\": rpc error: code = NotFound desc = an error occurred when try to find container \"a48d8d595fb30de3bae8492f1a9c136501f9458a6f64b57fc6360352b5d70faf\": not found" May 27 17:49:50.845693 kubelet[3166]: I0527 17:49:50.845679 3166 scope.go:117] "RemoveContainer" containerID="ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6" May 27 17:49:50.845831 containerd[1729]: 
time="2025-05-27T17:49:50.845803735Z" level=error msg="ContainerStatus for \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\": not found" May 27 17:49:50.845921 kubelet[3166]: E0527 17:49:50.845901 3166 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\": not found" containerID="ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6" May 27 17:49:50.845956 kubelet[3166]: I0527 17:49:50.845925 3166 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6"} err="failed to get container status \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea9ded24546b6e42c5f319fbeed9a1b68718a068a40a68b85679e2b32a2611f6\": not found" May 27 17:49:50.845956 kubelet[3166]: I0527 17:49:50.845940 3166 scope.go:117] "RemoveContainer" containerID="cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077" May 27 17:49:50.846102 containerd[1729]: time="2025-05-27T17:49:50.846082593Z" level=error msg="ContainerStatus for \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\": not found" May 27 17:49:50.846165 kubelet[3166]: E0527 17:49:50.846156 3166 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\": not 
found" containerID="cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077" May 27 17:49:50.846196 kubelet[3166]: I0527 17:49:50.846171 3166 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077"} err="failed to get container status \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd96d7a3d540dd18f6b0a9e885fe522b0d37e563045cf3547567e4704a784077\": not found" May 27 17:49:50.846196 kubelet[3166]: I0527 17:49:50.846185 3166 scope.go:117] "RemoveContainer" containerID="81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac" May 27 17:49:50.846337 containerd[1729]: time="2025-05-27T17:49:50.846308975Z" level=error msg="ContainerStatus for \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\": not found" May 27 17:49:50.846430 kubelet[3166]: E0527 17:49:50.846394 3166 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\": not found" containerID="81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac" May 27 17:49:50.846456 kubelet[3166]: I0527 17:49:50.846432 3166 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac"} err="failed to get container status \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\": rpc error: code = NotFound desc = an error occurred when try to find container \"81203c3effde8fb6ce6a77d25842f2da47a7334ba36a8858669586185015beac\": not found" May 27 
17:49:50.846456 kubelet[3166]: I0527 17:49:50.846449 3166 scope.go:117] "RemoveContainer" containerID="7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132" May 27 17:49:50.847567 containerd[1729]: time="2025-05-27T17:49:50.847533024Z" level=info msg="RemoveContainer for \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\"" May 27 17:49:50.852839 containerd[1729]: time="2025-05-27T17:49:50.852820019Z" level=info msg="RemoveContainer for \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" returns successfully" May 27 17:49:50.852968 kubelet[3166]: I0527 17:49:50.852952 3166 scope.go:117] "RemoveContainer" containerID="7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132" May 27 17:49:50.853112 containerd[1729]: time="2025-05-27T17:49:50.853092464Z" level=error msg="ContainerStatus for \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\": not found" May 27 17:49:50.853206 kubelet[3166]: E0527 17:49:50.853189 3166 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\": not found" containerID="7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132" May 27 17:49:50.853256 kubelet[3166]: I0527 17:49:50.853234 3166 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132"} err="failed to get container status \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a77a97100f54aca34f32cd7d693625840efa3d2670da765224c4fbdd4e16132\": not found" May 27 17:49:51.161777 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9f94d3e2c8c511764efade2387f2f8e1385f0d91cd0bc08f261f66d617d710a-shm.mount: Deactivated successfully. May 27 17:49:51.161883 systemd[1]: var-lib-kubelet-pods-16f23d65\x2dff33\x2d43da\x2db401\x2d1cdfd937a4c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94lk7.mount: Deactivated successfully. May 27 17:49:51.161954 systemd[1]: var-lib-kubelet-pods-c6723ce1\x2d2e6f\x2d485e\x2d84a6\x2d8edd4d8d5656-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr2ph9.mount: Deactivated successfully. May 27 17:49:51.162021 systemd[1]: var-lib-kubelet-pods-c6723ce1\x2d2e6f\x2d485e\x2d84a6\x2d8edd4d8d5656-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 17:49:51.162089 systemd[1]: var-lib-kubelet-pods-c6723ce1\x2d2e6f\x2d485e\x2d84a6\x2d8edd4d8d5656-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 17:49:52.153354 sshd[4675]: Connection closed by 10.200.16.10 port 60154 May 27 17:49:52.154014 sshd-session[4673]: pam_unix(sshd:session): session closed for user core May 27 17:49:52.157927 systemd[1]: sshd@21-10.200.8.45:22-10.200.16.10:60154.service: Deactivated successfully. May 27 17:49:52.161728 systemd[1]: session-24.scope: Deactivated successfully. May 27 17:49:52.162614 systemd-logind[1706]: Session 24 logged out. Waiting for processes to exit. May 27 17:49:52.163803 systemd-logind[1706]: Removed session 24. May 27 17:49:52.267402 systemd[1]: Started sshd@22-10.200.8.45:22-10.200.16.10:49796.service - OpenSSH per-connection server daemon (10.200.16.10:49796). 
May 27 17:49:52.442537 kubelet[3166]: I0527 17:49:52.442259 3166 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f23d65-ff33-43da-b401-1cdfd937a4c4" path="/var/lib/kubelet/pods/16f23d65-ff33-43da-b401-1cdfd937a4c4/volumes"
May 27 17:49:52.442981 kubelet[3166]: I0527 17:49:52.442959 3166 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6723ce1-2e6f-485e-84a6-8edd4d8d5656" path="/var/lib/kubelet/pods/c6723ce1-2e6f-485e-84a6-8edd4d8d5656/volumes"
May 27 17:49:52.894528 sshd[4828]: Accepted publickey for core from 10.200.16.10 port 49796 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:49:52.896008 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:52.900427 systemd-logind[1706]: New session 25 of user core.
May 27 17:49:52.910712 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:49:53.524097 kubelet[3166]: E0527 17:49:53.523987 3166 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 17:49:53.734665 kubelet[3166]: I0527 17:49:53.734635 3166 memory_manager.go:355] "RemoveStaleState removing state" podUID="16f23d65-ff33-43da-b401-1cdfd937a4c4" containerName="cilium-operator"
May 27 17:49:53.734665 kubelet[3166]: I0527 17:49:53.734661 3166 memory_manager.go:355] "RemoveStaleState removing state" podUID="c6723ce1-2e6f-485e-84a6-8edd4d8d5656" containerName="cilium-agent"
May 27 17:49:53.740482 kubelet[3166]: W0527 17:49:53.740454 3166 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4344.0.0-a-927e686d84" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object
May 27 17:49:53.740698 kubelet[3166]: E0527 17:49:53.740502 3166 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4344.0.0-a-927e686d84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object" logger="UnhandledError"
May 27 17:49:53.740698 kubelet[3166]: W0527 17:49:53.740677 3166 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4344.0.0-a-927e686d84" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object
May 27 17:49:53.740698 kubelet[3166]: E0527 17:49:53.740693 3166 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4344.0.0-a-927e686d84\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object" logger="UnhandledError"
May 27 17:49:53.740790 kubelet[3166]: I0527 17:49:53.740733 3166 status_manager.go:890] "Failed to get status for pod" podUID="e069561b-f573-4fd5-8062-42f6850e05b7" pod="kube-system/cilium-v87bb" err="pods \"cilium-v87bb\" is forbidden: User \"system:node:ci-4344.0.0-a-927e686d84\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object"
May 27 17:49:53.740790 kubelet[3166]: W0527 17:49:53.740781 3166 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4344.0.0-a-927e686d84" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object
May 27 17:49:53.740831 kubelet[3166]: E0527 17:49:53.740791 3166 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4344.0.0-a-927e686d84\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object" logger="UnhandledError"
May 27 17:49:53.740831 kubelet[3166]: W0527 17:49:53.740829 3166 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4344.0.0-a-927e686d84" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object
May 27 17:49:53.740875 kubelet[3166]: E0527 17:49:53.740838 3166 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4344.0.0-a-927e686d84\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4344.0.0-a-927e686d84' and this object" logger="UnhandledError"
May 27 17:49:53.746498 systemd[1]: Created slice kubepods-burstable-pode069561b_f573_4fd5_8062_42f6850e05b7.slice - libcontainer container kubepods-burstable-pode069561b_f573_4fd5_8062_42f6850e05b7.slice.
May 27 17:49:53.766207 kubelet[3166]: I0527 17:49:53.766178 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-xtables-lock\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766295 kubelet[3166]: I0527 17:49:53.766214 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e069561b-f573-4fd5-8062-42f6850e05b7-hubble-tls\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766295 kubelet[3166]: I0527 17:49:53.766237 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-cilium-run\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766295 kubelet[3166]: I0527 17:49:53.766254 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-host-proc-sys-kernel\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766295 kubelet[3166]: I0527 17:49:53.766274 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-lib-modules\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766295 kubelet[3166]: I0527 17:49:53.766289 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e069561b-f573-4fd5-8062-42f6850e05b7-clustermesh-secrets\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766402 kubelet[3166]: I0527 17:49:53.766307 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-etc-cni-netd\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766402 kubelet[3166]: I0527 17:49:53.766325 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-bpf-maps\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766402 kubelet[3166]: I0527 17:49:53.766346 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e069561b-f573-4fd5-8062-42f6850e05b7-cilium-ipsec-secrets\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766402 kubelet[3166]: I0527 17:49:53.766364 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-host-proc-sys-net\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766402 kubelet[3166]: I0527 17:49:53.766380 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-cilium-cgroup\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766402 kubelet[3166]: I0527 17:49:53.766396 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-hostproc\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766524 kubelet[3166]: I0527 17:49:53.766413 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e069561b-f573-4fd5-8062-42f6850e05b7-cni-path\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766524 kubelet[3166]: I0527 17:49:53.766431 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e069561b-f573-4fd5-8062-42f6850e05b7-cilium-config-path\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.766524 kubelet[3166]: I0527 17:49:53.766451 3166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kkbb\" (UniqueName: \"kubernetes.io/projected/e069561b-f573-4fd5-8062-42f6850e05b7-kube-api-access-2kkbb\") pod \"cilium-v87bb\" (UID: \"e069561b-f573-4fd5-8062-42f6850e05b7\") " pod="kube-system/cilium-v87bb"
May 27 17:49:53.821312 sshd[4830]: Connection closed by 10.200.16.10 port 49796
May 27 17:49:53.823823 sshd-session[4828]: pam_unix(sshd:session): session closed for user core
May 27 17:49:53.828376 systemd[1]: sshd@22-10.200.8.45:22-10.200.16.10:49796.service: Deactivated successfully.
May 27 17:49:53.831439 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:49:53.834289 systemd-logind[1706]: Session 25 logged out. Waiting for processes to exit.
May 27 17:49:53.837082 systemd-logind[1706]: Removed session 25.
May 27 17:49:53.934594 systemd[1]: Started sshd@23-10.200.8.45:22-10.200.16.10:49812.service - OpenSSH per-connection server daemon (10.200.16.10:49812).
May 27 17:49:54.556589 sshd[4843]: Accepted publickey for core from 10.200.16.10 port 49812 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:49:54.557942 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:54.562284 systemd-logind[1706]: New session 26 of user core.
May 27 17:49:54.566697 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 17:49:54.868283 kubelet[3166]: E0527 17:49:54.868195 3166 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 27 17:49:54.868957 kubelet[3166]: E0527 17:49:54.868299 3166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e069561b-f573-4fd5-8062-42f6850e05b7-cilium-config-path podName:e069561b-f573-4fd5-8062-42f6850e05b7 nodeName:}" failed. No retries permitted until 2025-05-27 17:49:55.368267921 +0000 UTC m=+157.004653790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/e069561b-f573-4fd5-8062-42f6850e05b7-cilium-config-path") pod "cilium-v87bb" (UID: "e069561b-f573-4fd5-8062-42f6850e05b7") : failed to sync configmap cache: timed out waiting for the condition
May 27 17:49:54.868957 kubelet[3166]: E0527 17:49:54.868205 3166 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
May 27 17:49:54.868957 kubelet[3166]: E0527 17:49:54.868365 3166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e069561b-f573-4fd5-8062-42f6850e05b7-cilium-ipsec-secrets podName:e069561b-f573-4fd5-8062-42f6850e05b7 nodeName:}" failed. No retries permitted until 2025-05-27 17:49:55.368353264 +0000 UTC m=+157.004739136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/e069561b-f573-4fd5-8062-42f6850e05b7-cilium-ipsec-secrets") pod "cilium-v87bb" (UID: "e069561b-f573-4fd5-8062-42f6850e05b7") : failed to sync secret cache: timed out waiting for the condition
May 27 17:49:54.868957 kubelet[3166]: E0527 17:49:54.868675 3166 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
May 27 17:49:54.869189 kubelet[3166]: E0527 17:49:54.868715 3166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e069561b-f573-4fd5-8062-42f6850e05b7-clustermesh-secrets podName:e069561b-f573-4fd5-8062-42f6850e05b7 nodeName:}" failed. No retries permitted until 2025-05-27 17:49:55.368701829 +0000 UTC m=+157.005087694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/e069561b-f573-4fd5-8062-42f6850e05b7-clustermesh-secrets") pod "cilium-v87bb" (UID: "e069561b-f573-4fd5-8062-42f6850e05b7") : failed to sync secret cache: timed out waiting for the condition
May 27 17:49:54.869189 kubelet[3166]: E0527 17:49:54.868751 3166 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
May 27 17:49:54.869189 kubelet[3166]: E0527 17:49:54.868764 3166 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-v87bb: failed to sync secret cache: timed out waiting for the condition
May 27 17:49:54.869189 kubelet[3166]: E0527 17:49:54.868806 3166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e069561b-f573-4fd5-8062-42f6850e05b7-hubble-tls podName:e069561b-f573-4fd5-8062-42f6850e05b7 nodeName:}" failed. No retries permitted until 2025-05-27 17:49:55.368794657 +0000 UTC m=+157.005180513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/e069561b-f573-4fd5-8062-42f6850e05b7-hubble-tls") pod "cilium-v87bb" (UID: "e069561b-f573-4fd5-8062-42f6850e05b7") : failed to sync secret cache: timed out waiting for the condition
May 27 17:49:54.997434 sshd[4845]: Connection closed by 10.200.16.10 port 49812
May 27 17:49:54.997865 sshd-session[4843]: pam_unix(sshd:session): session closed for user core
May 27 17:49:55.000970 systemd[1]: sshd@23-10.200.8.45:22-10.200.16.10:49812.service: Deactivated successfully.
May 27 17:49:55.002853 systemd[1]: session-26.scope: Deactivated successfully.
May 27 17:49:55.003494 systemd-logind[1706]: Session 26 logged out. Waiting for processes to exit.
May 27 17:49:55.004965 systemd-logind[1706]: Removed session 26.
May 27 17:49:55.116331 systemd[1]: Started sshd@24-10.200.8.45:22-10.200.16.10:49814.service - OpenSSH per-connection server daemon (10.200.16.10:49814).
May 27 17:49:55.551951 containerd[1729]: time="2025-05-27T17:49:55.551913131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v87bb,Uid:e069561b-f573-4fd5-8062-42f6850e05b7,Namespace:kube-system,Attempt:0,}"
May 27 17:49:55.579769 containerd[1729]: time="2025-05-27T17:49:55.579696887Z" level=info msg="connecting to shim 4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6" address="unix:///run/containerd/s/0ac645664d6c2ce3ee6a8d78aadc89de86c7aed7480618a417b678d2050d0c28" namespace=k8s.io protocol=ttrpc version=3
May 27 17:49:55.606682 systemd[1]: Started cri-containerd-4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6.scope - libcontainer container 4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6.
May 27 17:49:55.627728 containerd[1729]: time="2025-05-27T17:49:55.627701387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v87bb,Uid:e069561b-f573-4fd5-8062-42f6850e05b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\""
May 27 17:49:55.630569 containerd[1729]: time="2025-05-27T17:49:55.630226762Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 17:49:55.641156 containerd[1729]: time="2025-05-27T17:49:55.641137974Z" level=info msg="Container 3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:55.650763 containerd[1729]: time="2025-05-27T17:49:55.650745573Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc\""
May 27 17:49:55.651284 containerd[1729]: time="2025-05-27T17:49:55.651260125Z" level=info msg="StartContainer for \"3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc\""
May 27 17:49:55.652415 containerd[1729]: time="2025-05-27T17:49:55.652394149Z" level=info msg="connecting to shim 3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc" address="unix:///run/containerd/s/0ac645664d6c2ce3ee6a8d78aadc89de86c7aed7480618a417b678d2050d0c28" protocol=ttrpc version=3
May 27 17:49:55.666670 systemd[1]: Started cri-containerd-3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc.scope - libcontainer container 3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc.
May 27 17:49:55.687261 containerd[1729]: time="2025-05-27T17:49:55.687207452Z" level=info msg="StartContainer for \"3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc\" returns successfully"
May 27 17:49:55.691788 systemd[1]: cri-containerd-3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc.scope: Deactivated successfully.
May 27 17:49:55.694305 containerd[1729]: time="2025-05-27T17:49:55.694280712Z" level=info msg="received exit event container_id:\"3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc\" id:\"3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc\" pid:4916 exited_at:{seconds:1748368195 nanos:693851789}"
May 27 17:49:55.694429 containerd[1729]: time="2025-05-27T17:49:55.694301707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc\" id:\"3ae1ae1b48945e88c08c3a22ccb14197b13714590d43e953d6511477860410cc\" pid:4916 exited_at:{seconds:1748368195 nanos:693851789}"
May 27 17:49:55.740678 sshd[4852]: Accepted publickey for core from 10.200.16.10 port 49814 ssh2: RSA SHA256:ffDPNvcJgGlccTPo+/+EVlIT10D8CS6TdK4NBsvX590
May 27 17:49:55.741731 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:49:55.746767 systemd-logind[1706]: New session 27 of user core.
May 27 17:49:55.750673 systemd[1]: Started session-27.scope - Session 27 of User core.
May 27 17:49:55.810394 containerd[1729]: time="2025-05-27T17:49:55.810305623Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 17:49:55.831616 containerd[1729]: time="2025-05-27T17:49:55.831592467Z" level=info msg="Container c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:55.842141 containerd[1729]: time="2025-05-27T17:49:55.842118137Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476\""
May 27 17:49:55.842584 containerd[1729]: time="2025-05-27T17:49:55.842493828Z" level=info msg="StartContainer for \"c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476\""
May 27 17:49:55.843424 containerd[1729]: time="2025-05-27T17:49:55.843371822Z" level=info msg="connecting to shim c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476" address="unix:///run/containerd/s/0ac645664d6c2ce3ee6a8d78aadc89de86c7aed7480618a417b678d2050d0c28" protocol=ttrpc version=3
May 27 17:49:55.858680 systemd[1]: Started cri-containerd-c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476.scope - libcontainer container c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476.
May 27 17:49:55.879627 containerd[1729]: time="2025-05-27T17:49:55.879607649Z" level=info msg="StartContainer for \"c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476\" returns successfully"
May 27 17:49:55.883274 systemd[1]: cri-containerd-c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476.scope: Deactivated successfully.
May 27 17:49:55.884411 containerd[1729]: time="2025-05-27T17:49:55.884318836Z" level=info msg="received exit event container_id:\"c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476\" id:\"c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476\" pid:4964 exited_at:{seconds:1748368195 nanos:883855046}"
May 27 17:49:55.884611 containerd[1729]: time="2025-05-27T17:49:55.884587081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476\" id:\"c3b6fd6c231cd53d0460b9fa94ad329a788c7d79dd65fd2b279f25ae65062476\" pid:4964 exited_at:{seconds:1748368195 nanos:883855046}"
May 27 17:49:56.815670 containerd[1729]: time="2025-05-27T17:49:56.815623695Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 17:49:56.835059 containerd[1729]: time="2025-05-27T17:49:56.834531331Z" level=info msg="Container 1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:56.847047 containerd[1729]: time="2025-05-27T17:49:56.847017570Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1\""
May 27 17:49:56.847422 containerd[1729]: time="2025-05-27T17:49:56.847398784Z" level=info msg="StartContainer for \"1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1\""
May 27 17:49:56.848966 containerd[1729]: time="2025-05-27T17:49:56.848923403Z" level=info msg="connecting to shim 1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1" address="unix:///run/containerd/s/0ac645664d6c2ce3ee6a8d78aadc89de86c7aed7480618a417b678d2050d0c28" protocol=ttrpc version=3
May 27 17:49:56.870751 systemd[1]: Started cri-containerd-1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1.scope - libcontainer container 1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1.
May 27 17:49:56.897215 systemd[1]: cri-containerd-1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1.scope: Deactivated successfully.
May 27 17:49:56.899573 containerd[1729]: time="2025-05-27T17:49:56.899511079Z" level=info msg="received exit event container_id:\"1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1\" id:\"1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1\" pid:5016 exited_at:{seconds:1748368196 nanos:899127162}"
May 27 17:49:56.899959 containerd[1729]: time="2025-05-27T17:49:56.899877326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1\" id:\"1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1\" pid:5016 exited_at:{seconds:1748368196 nanos:899127162}"
May 27 17:49:56.905765 containerd[1729]: time="2025-05-27T17:49:56.905739103Z" level=info msg="StartContainer for \"1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1\" returns successfully"
May 27 17:49:56.914801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a061abd90c24165b4c5a02c4c07e5b2f3c232f6162eff2d06fda903d11ac8a1-rootfs.mount: Deactivated successfully.
May 27 17:49:57.827521 containerd[1729]: time="2025-05-27T17:49:57.827478143Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:49:57.842647 containerd[1729]: time="2025-05-27T17:49:57.841653193Z" level=info msg="Container da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:57.855790 containerd[1729]: time="2025-05-27T17:49:57.855763053Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71\""
May 27 17:49:57.856187 containerd[1729]: time="2025-05-27T17:49:57.856153919Z" level=info msg="StartContainer for \"da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71\""
May 27 17:49:57.857232 containerd[1729]: time="2025-05-27T17:49:57.857144790Z" level=info msg="connecting to shim da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71" address="unix:///run/containerd/s/0ac645664d6c2ce3ee6a8d78aadc89de86c7aed7480618a417b678d2050d0c28" protocol=ttrpc version=3
May 27 17:49:57.877700 systemd[1]: Started cri-containerd-da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71.scope - libcontainer container da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71.
May 27 17:49:57.899175 systemd[1]: cri-containerd-da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71.scope: Deactivated successfully.
May 27 17:49:57.900305 containerd[1729]: time="2025-05-27T17:49:57.900282018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71\" id:\"da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71\" pid:5058 exited_at:{seconds:1748368197 nanos:900034617}"
May 27 17:49:57.902478 containerd[1729]: time="2025-05-27T17:49:57.902299210Z" level=info msg="received exit event container_id:\"da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71\" id:\"da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71\" pid:5058 exited_at:{seconds:1748368197 nanos:900034617}"
May 27 17:49:57.908308 containerd[1729]: time="2025-05-27T17:49:57.908274148Z" level=info msg="StartContainer for \"da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71\" returns successfully"
May 27 17:49:57.916927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da1bb1ce3ceedcc0c0d18e895f53b2b659cbd01a1289b3bec517556b88cabe71-rootfs.mount: Deactivated successfully.
May 27 17:49:58.525480 kubelet[3166]: E0527 17:49:58.525447 3166 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 17:49:58.835839 containerd[1729]: time="2025-05-27T17:49:58.835684031Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 17:49:58.858125 containerd[1729]: time="2025-05-27T17:49:58.858094836Z" level=info msg="Container d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96: CDI devices from CRI Config.CDIDevices: []"
May 27 17:49:58.869091 containerd[1729]: time="2025-05-27T17:49:58.869067279Z" level=info msg="CreateContainer within sandbox \"4c911d6674efba2fc482f81463dafe6f044c34c3b7976993a14578511302f9a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\""
May 27 17:49:58.869407 containerd[1729]: time="2025-05-27T17:49:58.869392805Z" level=info msg="StartContainer for \"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\""
May 27 17:49:58.870452 containerd[1729]: time="2025-05-27T17:49:58.870369063Z" level=info msg="connecting to shim d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96" address="unix:///run/containerd/s/0ac645664d6c2ce3ee6a8d78aadc89de86c7aed7480618a417b678d2050d0c28" protocol=ttrpc version=3
May 27 17:49:58.891671 systemd[1]: Started cri-containerd-d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96.scope - libcontainer container d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96.
May 27 17:49:58.920966 containerd[1729]: time="2025-05-27T17:49:58.920908005Z" level=info msg="StartContainer for \"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\" returns successfully"
May 27 17:49:58.972991 containerd[1729]: time="2025-05-27T17:49:58.972972534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\" id:\"36dd000d63335e6bc358b2f4260def9331cb337fffbbf449ed40337603b3985e\" pid:5126 exited_at:{seconds:1748368198 nanos:972682745}"
May 27 17:49:59.257629 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
May 27 17:49:59.855344 kubelet[3166]: I0527 17:49:59.855220 3166 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v87bb" podStartSLOduration=6.855197152 podStartE2EDuration="6.855197152s" podCreationTimestamp="2025-05-27 17:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:49:59.854689877 +0000 UTC m=+161.491075737" watchObservedRunningTime="2025-05-27 17:49:59.855197152 +0000 UTC m=+161.491583008"
May 27 17:50:00.222100 containerd[1729]: time="2025-05-27T17:50:00.221945785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\" id:\"1d517915100baaf1d7f4412e57519c03c26f1dfcf7a8280d8a4be065c9897eac\" pid:5202 exit_status:1 exited_at:{seconds:1748368200 nanos:221404649}"
May 27 17:50:01.440586 kubelet[3166]: E0527 17:50:01.440328 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fq7lh" podUID="d12365d9-7fbd-42e5-904a-dd0c0ef2e888"
May 27 17:50:01.638215 systemd-networkd[1363]: lxc_health: Link UP
May 27 17:50:01.640233 systemd-networkd[1363]: lxc_health: Gained carrier
May 27 17:50:01.781572 kubelet[3166]: I0527 17:50:01.781178 3166 setters.go:602] "Node became not ready" node="ci-4344.0.0-a-927e686d84" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T17:50:01Z","lastTransitionTime":"2025-05-27T17:50:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 17:50:02.406292 containerd[1729]: time="2025-05-27T17:50:02.406241313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\" id:\"31bb5216a9d5eabfb953d3c9f7befaf4c45f287a2abc34efbc53ad1312dfadfe\" pid:5639 exited_at:{seconds:1748368202 nanos:405679692}"
May 27 17:50:03.356814 systemd-networkd[1363]: lxc_health: Gained IPv6LL
May 27 17:50:03.442086 kubelet[3166]: E0527 17:50:03.440761 3166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-fq7lh" podUID="d12365d9-7fbd-42e5-904a-dd0c0ef2e888"
May 27 17:50:04.564881 containerd[1729]: time="2025-05-27T17:50:04.564824915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\" id:\"5a5175aa10322a301cbab6505ab991b0cf82a01f96013fe4c97f87184046428f\" pid:5677 exited_at:{seconds:1748368204 nanos:563711929}"
May 27 17:50:06.651112 containerd[1729]: time="2025-05-27T17:50:06.651051158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\" id:\"da2b4344ffa201c0395fa81eae2c1b0c57d6c7919976f5653ad4b81fd0413541\" pid:5707 exited_at:{seconds:1748368206 nanos:650508421}"
May 27 17:50:07.465255 update_engine[1708]: I20250527 17:50:07.465193 1708 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 27 17:50:07.465255 update_engine[1708]: I20250527 17:50:07.465247 1708 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 27 17:50:07.465790 update_engine[1708]: I20250527 17:50:07.465430 1708 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 27 17:50:07.466025 update_engine[1708]: I20250527 17:50:07.465854 1708 omaha_request_params.cc:62] Current group set to alpha
May 27 17:50:07.466307 update_engine[1708]: I20250527 17:50:07.466085 1708 update_attempter.cc:499] Already updated boot flags. Skipping.
May 27 17:50:07.466307 update_engine[1708]: I20250527 17:50:07.466099 1708 update_attempter.cc:643] Scheduling an action processor start.
May 27 17:50:07.466307 update_engine[1708]: I20250527 17:50:07.466120 1708 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 27 17:50:07.466307 update_engine[1708]: I20250527 17:50:07.466151 1708 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 27 17:50:07.466307 update_engine[1708]: I20250527 17:50:07.466212 1708 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 27 17:50:07.466307 update_engine[1708]: I20250527 17:50:07.466216 1708 omaha_request_action.cc:272] Request:
May 27 17:50:07.466307 update_engine[1708]:
May 27 17:50:07.466307 update_engine[1708]:
May 27 17:50:07.466307 update_engine[1708]:
May 27 17:50:07.466307 update_engine[1708]:
May 27 17:50:07.466307 update_engine[1708]:
May 27 17:50:07.466307 update_engine[1708]:
May 27 17:50:07.466307 update_engine[1708]:
May 27 17:50:07.466307 update_engine[1708]:
May 27 17:50:07.466307 update_engine[1708]: I20250527 17:50:07.466223 1708 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 27 17:50:07.466987
locksmithd[1776]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 27 17:50:07.467630 update_engine[1708]: I20250527 17:50:07.467604 1708 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 27 17:50:07.468084 update_engine[1708]: I20250527 17:50:07.468055 1708 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 27 17:50:07.493663 update_engine[1708]: E20250527 17:50:07.493628 1708 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 27 17:50:07.493767 update_engine[1708]: I20250527 17:50:07.493719 1708 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 27 17:50:08.742476 containerd[1729]: time="2025-05-27T17:50:08.742421167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d924ec0b66e24fd1ec2bc63618ae9698ea4b71765eb5d2b6624df4c982040f96\" id:\"69b5a8e1c380b78fc5f851daf6a5818a02c30cacbd999681638786194d5cb9ab\" pid:5730 exited_at:{seconds:1748368208 nanos:741964944}" May 27 17:50:08.852911 sshd[4951]: Connection closed by 10.200.16.10 port 49814 May 27 17:50:08.853477 sshd-session[4852]: pam_unix(sshd:session): session closed for user core May 27 17:50:08.857756 systemd[1]: sshd@24-10.200.8.45:22-10.200.16.10:49814.service: Deactivated successfully. May 27 17:50:08.859796 systemd[1]: session-27.scope: Deactivated successfully. May 27 17:50:08.860501 systemd-logind[1706]: Session 27 logged out. Waiting for processes to exit. May 27 17:50:08.861991 systemd-logind[1706]: Removed session 27.