Jun 21 04:44:02.923172 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025 Jun 21 04:44:02.923189 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:44:02.923197 kernel: BIOS-provided physical RAM map: Jun 21 04:44:02.923201 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jun 21 04:44:02.923204 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jun 21 04:44:02.923208 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jun 21 04:44:02.923214 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved Jun 21 04:44:02.923218 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable Jun 21 04:44:02.923222 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data Jun 21 04:44:02.923225 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jun 21 04:44:02.923229 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jun 21 04:44:02.923233 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jun 21 04:44:02.923237 kernel: printk: legacy bootconsole [earlyser0] enabled Jun 21 04:44:02.923241 kernel: NX (Execute Disable) protection: active Jun 21 04:44:02.923247 kernel: APIC: Static calls initialized Jun 21 04:44:02.923251 kernel: efi: EFI v2.7 by Microsoft Jun 21 04:44:02.923255 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebaca98 RNG=0x3ffd2018 Jun 21 04:44:02.923259 kernel: random: crng init done Jun 21 04:44:02.923263 kernel: secureboot: Secure boot disabled Jun 21 04:44:02.923267 kernel: SMBIOS 3.1.0 present. 
Jun 21 04:44:02.923271 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/21/2024 Jun 21 04:44:02.923275 kernel: DMI: Memory slots populated: 2/2 Jun 21 04:44:02.923280 kernel: Hypervisor detected: Microsoft Hyper-V Jun 21 04:44:02.923284 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jun 21 04:44:02.923288 kernel: Hyper-V: Nested features: 0x3e0101 Jun 21 04:44:02.923292 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jun 21 04:44:02.923296 kernel: Hyper-V: Using hypercall for remote TLB flush Jun 21 04:44:02.923300 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 21 04:44:02.923304 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jun 21 04:44:02.923308 kernel: tsc: Detected 2300.000 MHz processor Jun 21 04:44:02.923312 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 21 04:44:02.923317 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 21 04:44:02.923321 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jun 21 04:44:02.923327 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jun 21 04:44:02.923349 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 21 04:44:02.923356 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jun 21 04:44:02.923363 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jun 21 04:44:02.923383 kernel: Using GB pages for direct mapping Jun 21 04:44:02.923388 kernel: ACPI: Early table checksum verification disabled Jun 21 04:44:02.923392 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jun 21 04:44:02.923399 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:02.923404 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:02.923409 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jun 21 04:44:02.923413 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jun 21 04:44:02.923417 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:02.923422 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:02.923427 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:02.923432 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jun 21 04:44:02.923436 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jun 21 04:44:02.923440 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 21 04:44:02.923445 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jun 21 04:44:02.923449 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b] Jun 21 04:44:02.923453 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jun 21 04:44:02.923458 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jun 21 04:44:02.923462 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jun 21 04:44:02.923468 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jun 21 04:44:02.923472 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5051] Jun 21 04:44:02.923476 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jun 21 04:44:02.923480 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jun 21 04:44:02.923485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 21 04:44:02.923489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jun 21 04:44:02.923494 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jun 21 04:44:02.923498 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jun 21 04:44:02.923502 kernel: Zone ranges: Jun 21 04:44:02.923508 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 21 04:44:02.923512 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 21 04:44:02.923517 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jun 21 04:44:02.923521 kernel: Device empty Jun 21 04:44:02.923525 kernel: Movable zone start for each node Jun 21 04:44:02.923529 kernel: Early memory node ranges Jun 21 04:44:02.923534 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jun 21 04:44:02.923538 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jun 21 04:44:02.923542 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff] Jun 21 04:44:02.923547 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jun 21 04:44:02.923551 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jun 21 04:44:02.923556 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jun 21 04:44:02.923560 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 04:44:02.923564 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jun 21 04:44:02.923568 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges Jun 21 04:44:02.923573 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges Jun 21 04:44:02.923577 kernel: ACPI: PM-Timer IO Port: 0x408 Jun 21 04:44:02.923581 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 21 04:44:02.923587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 21 04:44:02.923591 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 21 04:44:02.923596 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jun 21 04:44:02.923600 kernel: TSC deadline timer available Jun 21 04:44:02.923604 kernel: CPU topo: Max. logical packages: 1 Jun 21 04:44:02.923608 kernel: CPU topo: Max. logical dies: 1 Jun 21 04:44:02.923613 kernel: CPU topo: Max. dies per package: 1 Jun 21 04:44:02.923617 kernel: CPU topo: Max. threads per core: 2 Jun 21 04:44:02.923621 kernel: CPU topo: Num. cores per package: 1 Jun 21 04:44:02.923627 kernel: CPU topo: Num. 
threads per package: 2 Jun 21 04:44:02.923631 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 21 04:44:02.923635 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jun 21 04:44:02.923639 kernel: Booting paravirtualized kernel on Hyper-V Jun 21 04:44:02.923644 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 21 04:44:02.923648 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 21 04:44:02.923652 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 21 04:44:02.923657 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 21 04:44:02.923661 kernel: pcpu-alloc: [0] 0 1 Jun 21 04:44:02.923666 kernel: Hyper-V: PV spinlocks enabled Jun 21 04:44:02.923671 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 21 04:44:02.923676 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:44:02.923681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 21 04:44:02.923685 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 21 04:44:02.923689 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 21 04:44:02.923694 kernel: Fallback order for Node 0: 0 Jun 21 04:44:02.923698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2096877 Jun 21 04:44:02.923703 kernel: Policy zone: Normal Jun 21 04:44:02.923708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 04:44:02.923712 kernel: software IO TLB: area num 2. Jun 21 04:44:02.923716 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 21 04:44:02.923720 kernel: ftrace: allocating 40093 entries in 157 pages Jun 21 04:44:02.923726 kernel: ftrace: allocated 157 pages with 5 groups Jun 21 04:44:02.923730 kernel: Dynamic Preempt: voluntary Jun 21 04:44:02.923734 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 04:44:02.923739 kernel: rcu: RCU event tracing is enabled. Jun 21 04:44:02.923744 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 21 04:44:02.923752 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 04:44:02.923756 kernel: Rude variant of Tasks RCU enabled. Jun 21 04:44:02.923762 kernel: Tracing variant of Tasks RCU enabled. Jun 21 04:44:02.923767 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 21 04:44:02.923771 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 21 04:44:02.923776 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 04:44:02.923780 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 04:44:02.923785 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 21 04:44:02.923789 kernel: Using NULL legacy PIC Jun 21 04:44:02.923794 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jun 21 04:44:02.923799 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 21 04:44:02.923804 kernel: Console: colour dummy device 80x25 Jun 21 04:44:02.923808 kernel: printk: legacy console [tty1] enabled Jun 21 04:44:02.923813 kernel: printk: legacy console [ttyS0] enabled Jun 21 04:44:02.923817 kernel: printk: legacy bootconsole [earlyser0] disabled Jun 21 04:44:02.923822 kernel: ACPI: Core revision 20240827 Jun 21 04:44:02.923828 kernel: Failed to register legacy timer interrupt Jun 21 04:44:02.923832 kernel: APIC: Switch to symmetric I/O mode setup Jun 21 04:44:02.923837 kernel: x2apic enabled Jun 21 04:44:02.923841 kernel: APIC: Switched APIC routing to: physical x2apic Jun 21 04:44:02.923846 kernel: Hyper-V: Host Build 10.0.26100.1255-1-0 Jun 21 04:44:02.923850 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 21 04:44:02.923855 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jun 21 04:44:02.923859 kernel: Hyper-V: Using IPI hypercalls Jun 21 04:44:02.923864 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jun 21 04:44:02.923869 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jun 21 04:44:02.923874 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jun 21 04:44:02.923879 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jun 21 04:44:02.923883 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jun 21 04:44:02.923888 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jun 21 04:44:02.923892 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jun 21 04:44:02.923897 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000) Jun 21 04:44:02.923901 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 21 04:44:02.923906 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 21 04:44:02.923911 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 21 04:44:02.923916 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 21 04:44:02.923920 kernel: Spectre V2 : Mitigation: Retpolines Jun 21 04:44:02.923925 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 21 04:44:02.923929 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jun 21 04:44:02.923934 kernel: RETBleed: Vulnerable Jun 21 04:44:02.923938 kernel: Speculative Store Bypass: Vulnerable Jun 21 04:44:02.923943 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 21 04:44:02.923947 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 21 04:44:02.923951 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 21 04:44:02.923956 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 21 04:44:02.923961 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 21 04:44:02.923966 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 21 04:44:02.923970 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 21 04:44:02.923975 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jun 21 04:44:02.923979 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jun 21 04:44:02.923984 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jun 21 04:44:02.923988 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 21 04:44:02.923992 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jun 21 04:44:02.923997 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jun 21 04:44:02.924001 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jun 21 04:44:02.924007 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jun 21 04:44:02.924011 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jun 21 04:44:02.924016 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jun 21 04:44:02.924020 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jun 21 04:44:02.924025 kernel: Freeing SMP alternatives memory: 32K Jun 21 04:44:02.924029 kernel: pid_max: default: 32768 minimum: 301 Jun 21 04:44:02.924034 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 04:44:02.924038 kernel: landlock: Up and running. Jun 21 04:44:02.924043 kernel: SELinux: Initializing. Jun 21 04:44:02.924047 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 04:44:02.924052 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 21 04:44:02.924056 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jun 21 04:44:02.924061 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jun 21 04:44:02.924066 kernel: signal: max sigframe size: 11952 Jun 21 04:44:02.924071 kernel: rcu: Hierarchical SRCU implementation. Jun 21 04:44:02.924075 kernel: rcu: Max phase no-delay instances is 400. Jun 21 04:44:02.924080 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 04:44:02.924085 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 21 04:44:02.924089 kernel: smp: Bringing up secondary CPUs ... Jun 21 04:44:02.924094 kernel: smpboot: x86: Booting SMP configuration: Jun 21 04:44:02.924098 kernel: .... 
node #0, CPUs: #1 Jun 21 04:44:02.924103 kernel: smp: Brought up 1 node, 2 CPUs Jun 21 04:44:02.924108 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jun 21 04:44:02.924113 kernel: Memory: 8082312K/8387508K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 299988K reserved, 0K cma-reserved) Jun 21 04:44:02.924117 kernel: devtmpfs: initialized Jun 21 04:44:02.924122 kernel: x86/mm: Memory block size: 128MB Jun 21 04:44:02.924126 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jun 21 04:44:02.924131 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 04:44:02.924136 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 21 04:44:02.924140 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 04:44:02.924146 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 04:44:02.924150 kernel: audit: initializing netlink subsys (disabled) Jun 21 04:44:02.924155 kernel: audit: type=2000 audit(1750481039.027:1): state=initialized audit_enabled=0 res=1 Jun 21 04:44:02.924159 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 04:44:02.924164 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 21 04:44:02.924168 kernel: cpuidle: using governor menu Jun 21 04:44:02.924173 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 04:44:02.924177 kernel: dca service started, version 1.12.1 Jun 21 04:44:02.924182 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jun 21 04:44:02.924187 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff] Jun 21 04:44:02.924192 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 21 04:44:02.924196 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 21 04:44:02.924201 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 21 04:44:02.924205 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 04:44:02.924210 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 04:44:02.924214 kernel: ACPI: Added _OSI(Module Device) Jun 21 04:44:02.924219 kernel: ACPI: Added _OSI(Processor Device) Jun 21 04:44:02.924223 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 04:44:02.924229 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 21 04:44:02.924233 kernel: ACPI: Interpreter enabled Jun 21 04:44:02.924238 kernel: ACPI: PM: (supports S0 S5) Jun 21 04:44:02.924242 kernel: ACPI: Using IOAPIC for interrupt routing Jun 21 04:44:02.924247 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 21 04:44:02.924251 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 21 04:44:02.924256 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jun 21 04:44:02.924260 kernel: iommu: Default domain type: Translated Jun 21 04:44:02.924264 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 21 04:44:02.924270 kernel: efivars: Registered efivars operations Jun 21 04:44:02.924274 kernel: PCI: Using ACPI for IRQ routing Jun 21 04:44:02.924279 kernel: PCI: System does not support PCI Jun 21 04:44:02.924283 kernel: vgaarb: loaded Jun 21 04:44:02.924288 kernel: clocksource: Switched to clocksource tsc-early Jun 21 04:44:02.924292 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 04:44:02.924297 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 04:44:02.924302 kernel: pnp: PnP ACPI init Jun 21 04:44:02.924306 kernel: pnp: PnP ACPI: found 3 devices Jun 21 04:44:02.924312 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 21 04:44:02.924316 kernel: NET: Registered PF_INET protocol family Jun 21 04:44:02.924321 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 21 04:44:02.924325 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 21 04:44:02.924356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 04:44:02.924368 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 21 04:44:02.924374 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 21 04:44:02.924381 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 21 04:44:02.924388 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 21 04:44:02.924397 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 21 04:44:02.924404 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 04:44:02.924412 kernel: NET: Registered PF_XDP protocol family Jun 21 04:44:02.924419 kernel: PCI: CLS 0 bytes, default 64 Jun 21 04:44:02.924427 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 21 04:44:02.924434 kernel: software IO TLB: mapped [mem 0x000000003aa59000-0x000000003ea59000] (64MB) Jun 21 04:44:02.924441 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jun 21 04:44:02.924449 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jun 21 04:44:02.924456 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, 
max_idle_ns: 440795277976 ns Jun 21 04:44:02.924464 kernel: clocksource: Switched to clocksource tsc Jun 21 04:44:02.924471 kernel: Initialise system trusted keyrings Jun 21 04:44:02.924478 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 21 04:44:02.924484 kernel: Key type asymmetric registered Jun 21 04:44:02.924491 kernel: Asymmetric key parser 'x509' registered Jun 21 04:44:02.924498 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 21 04:44:02.924505 kernel: io scheduler mq-deadline registered Jun 21 04:44:02.924512 kernel: io scheduler kyber registered Jun 21 04:44:02.924520 kernel: io scheduler bfq registered Jun 21 04:44:02.924528 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 21 04:44:02.924536 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 04:44:02.924544 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 04:44:02.924551 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 21 04:44:02.924558 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 04:44:02.924566 kernel: i8042: PNP: No PS/2 controller found. Jun 21 04:44:02.924679 kernel: rtc_cmos 00:02: registered as rtc0 Jun 21 04:44:02.924745 kernel: rtc_cmos 00:02: setting system clock to 2025-06-21T04:44:02 UTC (1750481042) Jun 21 04:44:02.924804 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jun 21 04:44:02.924813 kernel: intel_pstate: Intel P-state driver initializing Jun 21 04:44:02.924820 kernel: efifb: probing for efifb Jun 21 04:44:02.924828 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 21 04:44:02.924836 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 21 04:44:02.924844 kernel: efifb: scrolling: redraw Jun 21 04:44:02.924851 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 21 04:44:02.924859 kernel: Console: switching to colour frame buffer device 128x48 Jun 21 04:44:02.924867 kernel: fb0: EFI VGA frame buffer device Jun 21 04:44:02.924875 kernel: pstore: Using crash dump compression: deflate Jun 21 04:44:02.924883 kernel: pstore: Registered efi_pstore as persistent store backend Jun 21 04:44:02.924890 kernel: NET: Registered PF_INET6 protocol family Jun 21 04:44:02.924898 kernel: Segment Routing with IPv6 Jun 21 04:44:02.924905 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 04:44:02.924913 kernel: NET: Registered PF_PACKET protocol family Jun 21 04:44:02.924921 kernel: Key type dns_resolver registered Jun 21 04:44:02.924928 kernel: IPI shorthand broadcast: enabled Jun 21 04:44:02.924937 kernel: sched_clock: Marking stable (2677003298, 80443203)->(3058130470, -300683969) Jun 21 04:44:02.924946 kernel: registered taskstats version 1 Jun 21 04:44:02.924953 kernel: Loading compiled-in X.509 certificates Jun 21 04:44:02.924961 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7' Jun 21 04:44:02.924969 kernel: Demotion targets for Node 0: null Jun 21 04:44:02.924977 kernel: Key type .fscrypt registered Jun 21 04:44:02.924984 kernel: Key type fscrypt-provisioning registered Jun 21 04:44:02.924992 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 21 04:44:02.925000 kernel: ima: Allocated hash algorithm: sha1 Jun 21 04:44:02.925009 kernel: ima: No architecture policies found Jun 21 04:44:02.925017 kernel: clk: Disabling unused clocks Jun 21 04:44:02.925024 kernel: Warning: unable to open an initial console. Jun 21 04:44:02.925032 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 21 04:44:02.925040 kernel: Write protecting the kernel read-only data: 24576k Jun 21 04:44:02.925048 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 21 04:44:02.925056 kernel: Run /init as init process Jun 21 04:44:02.925064 kernel: with arguments: Jun 21 04:44:02.925071 kernel: /init Jun 21 04:44:02.925080 kernel: with environment: Jun 21 04:44:02.925087 kernel: HOME=/ Jun 21 04:44:02.925095 kernel: TERM=linux Jun 21 04:44:02.925102 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 04:44:02.925111 systemd[1]: Successfully made /usr/ read-only. Jun 21 04:44:02.925123 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 04:44:02.925132 systemd[1]: Detected virtualization microsoft. Jun 21 04:44:02.925141 systemd[1]: Detected architecture x86-64. Jun 21 04:44:02.925150 systemd[1]: Running in initrd. Jun 21 04:44:02.925158 systemd[1]: No hostname configured, using default hostname. Jun 21 04:44:02.925166 systemd[1]: Hostname set to . Jun 21 04:44:02.925174 systemd[1]: Initializing machine ID from random generator. Jun 21 04:44:02.925183 systemd[1]: Queued start job for default target initrd.target. Jun 21 04:44:02.925191 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 04:44:02.925199 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:44:02.925209 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 04:44:02.925218 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 04:44:02.925227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 04:44:02.925236 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 21 04:44:02.925245 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 04:44:02.925253 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 04:44:02.925262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:44:02.925272 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:44:02.925280 systemd[1]: Reached target paths.target - Path Units. Jun 21 04:44:02.925288 systemd[1]: Reached target slices.target - Slice Units. Jun 21 04:44:02.925296 systemd[1]: Reached target swap.target - Swaps. Jun 21 04:44:02.925304 systemd[1]: Reached target timers.target - Timer Units. Jun 21 04:44:02.925312 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 04:44:02.925321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jun 21 04:44:02.925328 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 04:44:02.925603 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 04:44:02.925614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 04:44:02.925623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 04:44:02.925631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:44:02.925639 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 04:44:02.925647 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 04:44:02.925655 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 04:44:02.925663 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 04:44:02.925671 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 04:44:02.925680 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 04:44:02.925688 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 04:44:02.925696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 04:44:02.925724 systemd-journald[205]: Collecting audit messages is disabled. Jun 21 04:44:02.925754 systemd-journald[205]: Journal started Jun 21 04:44:02.925776 systemd-journald[205]: Runtime Journal (/run/log/journal/26ce4b8e26274473a96703be2b05f05a) is 8M, max 159M, 151M free. Jun 21 04:44:02.929370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:02.936650 systemd-modules-load[206]: Inserted module 'overlay' Jun 21 04:44:02.945231 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 04:44:02.945924 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 04:44:02.950917 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:44:02.955834 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 04:44:02.961457 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 04:44:02.969519 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 04:44:02.973110 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 04:44:02.975402 kernel: Bridge firewalling registered Jun 21 04:44:02.974389 systemd-modules-load[206]: Inserted module 'br_netfilter' Jun 21 04:44:02.977449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 04:44:02.981950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:02.984033 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 04:44:02.990530 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 04:44:02.992363 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:44:02.998142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 04:44:03.008690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 21 04:44:03.011807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 04:44:03.016731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:44:03.017969 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 04:44:03.030496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 04:44:03.037733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 04:44:03.044413 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 04:44:03.056632 systemd-resolved[233]: Positive Trust Anchors: Jun 21 04:44:03.056645 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 04:44:03.056674 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 04:44:03.058955 systemd-resolved[233]: Defaulting to hostname 'linux'. Jun 21 04:44:03.079706 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 04:44:03.059672 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 04:44:03.062912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:44:03.134349 kernel: SCSI subsystem initialized Jun 21 04:44:03.141346 kernel: Loading iSCSI transport class v2.0-870. Jun 21 04:44:03.149352 kernel: iscsi: registered transport (tcp) Jun 21 04:44:03.163602 kernel: iscsi: registered transport (qla4xxx) Jun 21 04:44:03.163643 kernel: QLogic iSCSI HBA Driver Jun 21 04:44:03.174708 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 04:44:03.193130 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:44:03.197720 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 04:44:03.225086 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 04:44:03.230150 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 21 04:44:03.271347 kernel: raid6: avx512x4 gen() 47813 MB/s Jun 21 04:44:03.288341 kernel: raid6: avx512x2 gen() 46763 MB/s Jun 21 04:44:03.306342 kernel: raid6: avx512x1 gen() 30055 MB/s Jun 21 04:44:03.324341 kernel: raid6: avx2x4 gen() 41808 MB/s Jun 21 04:44:03.341341 kernel: raid6: avx2x2 gen() 43883 MB/s Jun 21 04:44:03.358784 kernel: raid6: avx2x1 gen() 31781 MB/s Jun 21 04:44:03.358803 kernel: raid6: using algorithm avx512x4 gen() 47813 MB/s Jun 21 04:44:03.377561 kernel: raid6: .... xor() 7831 MB/s, rmw enabled Jun 21 04:44:03.377585 kernel: raid6: using avx512x2 recovery algorithm Jun 21 04:44:03.393348 kernel: xor: automatically using best checksumming function avx Jun 21 04:44:03.494349 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 04:44:03.497674 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 04:44:03.501442 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 04:44:03.522205 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jun 21 04:44:03.525749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:44:03.532458 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 04:44:03.548253 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jun 21 04:44:03.563324 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 04:44:03.568049 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 04:44:03.594313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:44:03.600983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 04:44:03.635351 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 04:44:03.643353 kernel: AES CTR mode by8 optimization enabled Jun 21 04:44:03.675172 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:03.680412 kernel: hv_vmbus: Vmbus version:5.3 Jun 21 04:44:03.678869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:03.686414 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:03.691547 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 21 04:44:03.689042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:44:03.697346 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 21 04:44:03.697382 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 21 04:44:03.700351 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 21 04:44:03.701352 kernel: hv_vmbus: registering driver hv_pci Jun 21 04:44:03.712617 kernel: PTP clock support registered Jun 21 04:44:03.715176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:44:03.718005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:03.722942 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jun 21 04:44:03.727812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 21 04:44:03.733588 kernel: hv_vmbus: registering driver hv_storvsc Jun 21 04:44:03.738043 kernel: scsi host0: storvsc_host_t Jun 21 04:44:03.738308 kernel: hv_vmbus: registering driver hv_netvsc Jun 21 04:44:03.738320 kernel: hv_utils: Registering HyperV Utility Driver Jun 21 04:44:03.739602 kernel: hv_vmbus: registering driver hv_utils Jun 21 04:44:03.744364 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 21 04:44:03.797691 kernel: hv_utils: Shutdown IC version 3.2 Jun 21 04:44:03.797727 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 21 04:44:03.797739 kernel: hv_utils: TimeSync IC version 4.0 Jun 21 04:44:03.654950 kernel: hv_utils: Heartbeat IC version 3.0 Jun 21 04:44:03.662921 systemd-journald[205]: Time jumped backwards, rotating. Jun 21 04:44:03.662972 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e5277d2e5 (unnamed net_device) (uninitialized): VF slot 1 added Jun 21 04:44:03.657199 systemd-resolved[233]: Clock change detected. Flushing caches. Jun 21 04:44:03.669354 kernel: hv_vmbus: registering driver hid_hyperv Jun 21 04:44:03.679002 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 21 04:44:03.684425 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jun 21 04:44:03.684572 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 21 04:44:03.686482 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jun 21 04:44:03.684946 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:44:03.694338 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jun 21 04:44:03.699046 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jun 21 04:44:03.699090 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 21 04:44:03.699233 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 21 04:44:03.699244 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jun 21 04:44:03.703356 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 21 04:44:03.713497 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jun 21 04:44:03.717728 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jun 21 04:44:03.717883 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jun 21 04:44:03.726399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#115 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:44:03.735455 kernel: nvme nvme0: pci function c05b:00:00.0 Jun 21 04:44:03.735631 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jun 21 04:44:03.748375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#69 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:44:03.996367 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 21 04:44:04.001359 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:04.268441 kernel: nvme nvme0: using unchecked data buffer Jun 21 04:44:04.454240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 21 04:44:04.463579 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jun 21 04:44:04.499396 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. 
Jun 21 04:44:04.501155 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jun 21 04:44:04.506451 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 04:44:04.526534 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jun 21 04:44:04.533969 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:04.531753 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 04:44:04.537305 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 04:44:04.540582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:44:04.544471 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 04:44:04.548068 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 04:44:04.565580 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 04:44:04.685147 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jun 21 04:44:04.685291 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jun 21 04:44:04.687942 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jun 21 04:44:04.689397 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jun 21 04:44:04.693412 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jun 21 04:44:04.696450 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jun 21 04:44:04.701362 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jun 21 04:44:04.701386 kernel: pci 7870:00:00.0: enabling Extended Tags Jun 21 04:44:04.716363 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jun 21 04:44:04.716494 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jun 21 04:44:04.719483 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jun 21 04:44:04.723023 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jun 21 04:44:05.543399 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 21 04:44:05.543454 disk-uuid[671]: The operation has completed successfully. Jun 21 04:45:05.612940 systemd-udevd[454]: 7870:00:00.0: Worker [518] processing SEQNUM=1160 is taking a long time Jun 21 04:45:06.767386 kernel: mana 7870:00:00.0: Failed to establish HWC: -110 Jun 21 04:45:06.776359 kernel: mana 7870:00:00.0: gdma probe failed: err = -110 Jun 21 04:45:06.776580 kernel: mana 7870:00:00.0: probe with driver mana failed with error -110 Jun 21 04:45:06.781143 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 04:45:06.781230 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 04:45:06.783879 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 04:45:06.796428 sh[716]: Success Jun 21 04:45:06.827674 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jun 21 04:45:06.827721 kernel: device-mapper: uevent: version 1.0.3 Jun 21 04:45:06.828823 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 04:45:06.837365 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 21 04:45:07.048026 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 04:45:07.063429 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 21 04:45:07.065550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 04:45:07.085176 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 04:45:07.085273 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (729) Jun 21 04:45:07.088721 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 04:45:07.088755 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:45:07.089632 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 04:45:07.403043 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 04:45:07.405877 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 04:45:07.408172 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 04:45:07.408759 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 04:45:07.412671 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 04:45:07.451395 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (762) Jun 21 04:45:07.454423 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:45:07.454454 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:45:07.455832 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:45:07.492097 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 04:45:07.495450 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 04:45:07.500591 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:45:07.504449 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 04:45:07.507638 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 04:45:07.523013 systemd-networkd[892]: lo: Link UP Jun 21 04:45:07.523020 systemd-networkd[892]: lo: Gained carrier Jun 21 04:45:07.523687 systemd-networkd[892]: Enumeration completed Jun 21 04:45:07.523745 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 04:45:07.523989 systemd-networkd[892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:45:07.523992 systemd-networkd[892]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:45:07.524890 systemd-networkd[892]: eth0: Link UP Jun 21 04:45:07.525017 systemd-networkd[892]: eth0: Gained carrier Jun 21 04:45:07.525025 systemd-networkd[892]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 21 04:45:07.527604 systemd[1]: Reached target network.target - Network. Jun 21 04:45:07.535998 systemd-networkd[892]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 21 04:45:08.268308 ignition[899]: Ignition 2.21.0 Jun 21 04:45:08.268319 ignition[899]: Stage: fetch-offline Jun 21 04:45:08.269884 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 04:45:08.268415 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:45:08.274574 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 21 04:45:08.268421 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:45:08.268499 ignition[899]: parsed url from cmdline: "" Jun 21 04:45:08.268501 ignition[899]: no config URL provided Jun 21 04:45:08.268505 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 04:45:08.268509 ignition[899]: no config at "/usr/lib/ignition/user.ign" Jun 21 04:45:08.268514 ignition[899]: failed to fetch config: resource requires networking Jun 21 04:45:08.268641 ignition[899]: Ignition finished successfully Jun 21 04:45:08.290387 ignition[909]: Ignition 2.21.0 Jun 21 04:45:08.290392 ignition[909]: Stage: fetch Jun 21 04:45:08.290568 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:45:08.290575 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:45:08.290637 ignition[909]: parsed url from cmdline: "" Jun 21 04:45:08.290640 ignition[909]: no config URL provided Jun 21 04:45:08.290643 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 04:45:08.290648 ignition[909]: no config at "/usr/lib/ignition/user.ign" Jun 21 04:45:08.290682 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 21 04:45:08.414682 ignition[909]: GET result: OK Jun 21 04:45:08.414807 ignition[909]: config has been read from IMDS userdata Jun 21 04:45:08.414844 ignition[909]: parsing config with SHA512: e414273d8ac09a05dbe8f43b133b9492b307c2382a11fc0fe8ad9a0347237fea61040389509f269734f1ce2c11fb71a3a80e42e9317c45261c9988f792ae91a4 Jun 21 04:45:08.421633 unknown[909]: fetched base config from "system" Jun 21 04:45:08.421733 unknown[909]: fetched base config from "system" Jun 21 04:45:08.422042 ignition[909]: fetch: fetch complete Jun 21 04:45:08.421737 unknown[909]: fetched user config from "azure" Jun 21 04:45:08.422046 ignition[909]: fetch: fetch passed Jun 21 04:45:08.424162 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 21 04:45:08.422078 ignition[909]: Ignition finished successfully Jun 21 04:45:08.427941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 04:45:08.443703 ignition[916]: Ignition 2.21.0 Jun 21 04:45:08.443712 ignition[916]: Stage: kargs Jun 21 04:45:08.443861 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:45:08.445906 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 04:45:08.443868 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:45:08.450289 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 21 04:45:08.445018 ignition[916]: kargs: kargs passed Jun 21 04:45:08.445055 ignition[916]: Ignition finished successfully Jun 21 04:45:08.466857 ignition[922]: Ignition 2.21.0 Jun 21 04:45:08.466866 ignition[922]: Stage: disks Jun 21 04:45:08.467060 ignition[922]: no configs at "/usr/lib/ignition/base.d" Jun 21 04:45:08.468675 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 04:45:08.467066 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:45:08.469928 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 04:45:08.467898 ignition[922]: disks: disks passed Jun 21 04:45:08.473438 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 04:45:08.467931 ignition[922]: Ignition finished successfully Jun 21 04:45:08.477394 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 04:45:08.480384 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 04:45:08.483377 systemd[1]: Reached target basic.target - Basic System. Jun 21 04:45:08.486968 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 04:45:08.552272 systemd-fsck[930]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 21 04:45:08.555722 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 04:45:08.560185 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 04:45:08.787560 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 04:45:08.788097 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 04:45:08.789837 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 04:45:08.806978 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 04:45:08.822420 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 04:45:08.825680 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 21 04:45:08.827749 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 04:45:08.838263 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (939) Jun 21 04:45:08.838280 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:45:08.838287 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:45:08.838294 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:45:08.827777 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 04:45:08.844168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 21 04:45:08.844474 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 04:45:08.851701 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 21 04:45:09.132696 systemd-networkd[892]: eth0: Gained IPv6LL Jun 21 04:45:09.286940 coreos-metadata[941]: Jun 21 04:45:09.286 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 21 04:45:09.290369 coreos-metadata[941]: Jun 21 04:45:09.290 INFO Fetch successful Jun 21 04:45:09.292439 coreos-metadata[941]: Jun 21 04:45:09.290 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 21 04:45:09.299517 coreos-metadata[941]: Jun 21 04:45:09.299 INFO Fetch successful Jun 21 04:45:09.312978 coreos-metadata[941]: Jun 21 04:45:09.312 INFO wrote hostname ci-4372.0.0-a-59b94489dc to /sysroot/etc/hostname Jun 21 04:45:09.314856 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 04:45:09.401319 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 04:45:09.419695 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Jun 21 04:45:09.423142 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 04:45:09.426399 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 04:45:10.176755 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 04:45:10.181277 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 04:45:10.184810 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 04:45:10.200758 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 04:45:10.203358 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:45:10.222893 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 04:45:10.227674 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 04:45:10.231448 ignition[1057]: INFO : Ignition 2.21.0 Jun 21 04:45:10.231448 ignition[1057]: INFO : Stage: mount Jun 21 04:45:10.231448 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:45:10.231448 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:45:10.231448 ignition[1057]: INFO : mount: mount passed Jun 21 04:45:10.231448 ignition[1057]: INFO : Ignition finished successfully Jun 21 04:45:10.229939 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 04:45:10.241541 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 04:45:10.263356 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 (259:5) scanned by mount (1069) Jun 21 04:45:10.265796 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 04:45:10.265820 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 21 04:45:10.265831 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jun 21 04:45:10.271317 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 04:45:10.298777 ignition[1086]: INFO : Ignition 2.21.0 Jun 21 04:45:10.298777 ignition[1086]: INFO : Stage: files Jun 21 04:45:10.301382 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:45:10.301382 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:45:10.301382 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Jun 21 04:45:10.314256 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 04:45:10.314256 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 04:45:10.354750 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 04:45:10.357463 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 04:45:10.357463 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 04:45:10.356579 unknown[1086]: wrote ssh authorized keys file for user: core Jun 21 04:45:10.371208 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 21 04:45:10.375420 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jun 21 04:45:10.658773 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 04:45:10.768843 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 21 04:45:10.768843 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 04:45:10.768843 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 21 04:45:11.284061 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 21 04:45:11.404535 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 04:45:11.408419 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 21 04:45:11.408419 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 04:45:11.408419 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 04:45:11.408419 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 04:45:11.408419 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 04:45:11.408419 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 04:45:11.408419 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 04:45:11.408419 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 
04:45:11.424077 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 04:45:11.424077 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 04:45:11.424077 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 21 04:45:11.424077 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 21 04:45:11.424077 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 21 04:45:11.424077 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jun 21 04:45:12.202616 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 21 04:45:12.477951 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 21 04:45:12.477951 ignition[1086]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 21 04:45:12.504827 ignition[1086]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 04:45:12.513083 ignition[1086]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 04:45:12.513083 ignition[1086]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 21 04:45:12.519560 ignition[1086]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 21 04:45:12.519560 ignition[1086]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 04:45:12.519560 ignition[1086]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 04:45:12.519560 ignition[1086]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 04:45:12.519560 ignition[1086]: INFO : files: files passed Jun 21 04:45:12.519560 ignition[1086]: INFO : Ignition finished successfully Jun 21 04:45:12.517278 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 04:45:12.522634 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 04:45:12.535452 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 04:45:12.539750 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 04:45:12.539818 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
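[Editor's note, illustrative sketch only.] The files stage above writes remote artifacts (helm, cilium), SSH keys for the "core" user, and the prepare-helm.service unit, then enables it. The Python sketch below assembles an Ignition-style v3 config of roughly the shape that would produce such operations; the field layout follows the Ignition v3 spec as understood by the editor, and the SSH key and unit body are placeholders, not values recovered from this boot.

```python
# Sketch of an Ignition-style config resembling the ops logged above.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [{"name": "core",
                   "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}],
    },
    "storage": {
        "files": [{
            # Path and source URL appear in the log; other fields are illustrative.
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": {"source":
                         "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
        }],
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service",
                   "enabled": True,
                   "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                               "# ...placeholder unit body...\n"}],
    },
}

print(json.dumps(config, indent=2))
```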
Jun 21 04:45:12.580366 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:45:12.580366 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:45:12.585427 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 04:45:12.585215 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 04:45:12.588215 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 04:45:12.592154 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 04:45:12.635547 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 04:45:12.635626 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 04:45:12.636009 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 04:45:12.639687 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 04:45:12.645438 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 04:45:12.646816 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 04:45:12.659506 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 04:45:12.661792 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 04:45:12.679017 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:45:12.679161 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:45:12.679331 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 04:45:12.686522 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 04:45:12.686649 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 04:45:12.693511 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 04:45:12.696493 systemd[1]: Stopped target basic.target - Basic System. Jun 21 04:45:12.699506 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 04:45:12.702480 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 04:45:12.705477 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 04:45:12.709481 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 04:45:12.712473 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 04:45:12.714323 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 04:45:12.716488 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 04:45:12.720501 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 04:45:12.723474 systemd[1]: Stopped target swap.target - Swaps. Jun 21 04:45:12.726440 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 04:45:12.726565 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 04:45:12.728222 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:45:12.731462 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:45:12.731596 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jun 21 04:45:12.732209 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 04:45:12.736447 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 04:45:12.736556 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 04:45:12.739675 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 04:45:12.739778 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 04:45:12.745501 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 04:45:12.745605 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 04:45:12.747454 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 21 04:45:12.747568 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 21 04:45:12.750914 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 04:45:12.754380 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 04:45:12.754497 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:45:12.757958 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 04:45:12.770417 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 04:45:12.770868 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:45:12.775604 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 04:45:12.775695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 04:45:12.785037 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 04:45:12.785109 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 04:45:12.791434 ignition[1140]: INFO : Ignition 2.21.0 Jun 21 04:45:12.791434 ignition[1140]: INFO : Stage: umount Jun 21 04:45:12.794283 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 04:45:12.794283 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 21 04:45:12.797770 ignition[1140]: INFO : umount: umount passed Jun 21 04:45:12.797770 ignition[1140]: INFO : Ignition finished successfully Jun 21 04:45:12.797036 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 04:45:12.797130 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 04:45:12.801394 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 04:45:12.801433 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 04:45:12.804408 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 04:45:12.804445 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 04:45:12.807420 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 21 04:45:12.807473 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 21 04:45:12.810400 systemd[1]: Stopped target network.target - Network. Jun 21 04:45:12.812094 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 04:45:12.812134 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 04:45:12.812182 systemd[1]: Stopped target paths.target - Path Units. Jun 21 04:45:12.812286 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jun 21 04:45:12.815604 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:45:12.817801 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 04:45:12.821155 systemd[1]: Stopped target sockets.target - Socket Units. Jun 21 04:45:12.823728 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 04:45:12.823764 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 04:45:12.827110 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 04:45:12.827137 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 04:45:12.829571 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 04:45:12.829612 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 04:45:12.832779 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 04:45:12.832811 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 04:45:12.835778 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 04:45:12.838830 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 04:45:12.843581 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 04:45:12.843651 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 04:45:12.848340 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 04:45:12.848564 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 04:45:12.848640 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 04:45:12.852534 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 04:45:12.853044 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 04:45:12.854848 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 04:45:12.854877 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 04:45:12.859201 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 04:45:12.871377 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 04:45:12.871426 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 04:45:12.873302 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 04:45:12.873336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:45:12.877002 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 04:45:12.877039 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 04:45:12.881400 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 04:45:12.881449 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:45:12.888279 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 04:45:12.896075 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 04:45:12.896114 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 04:45:12.901756 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 21 04:45:12.901846 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:45:12.903544 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jun 21 04:45:12.903575 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 04:45:12.903745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 04:45:12.903765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:45:12.903789 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 04:45:12.903814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 04:45:12.904011 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 04:45:12.904034 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 04:45:12.904225 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 04:45:12.904248 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 04:45:12.910280 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 04:45:12.936651 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 04:45:12.936701 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:45:12.944863 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 04:45:12.944912 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 04:45:12.953455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 04:45:12.953499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:45:12.958412 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 04:45:12.958473 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 21 04:45:12.958503 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 21 04:45:12.958530 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 04:45:12.958927 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 04:45:12.958982 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 04:45:12.962290 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 04:45:12.962367 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 04:45:13.180253 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 04:45:13.181621 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 04:45:13.184532 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 04:45:13.186290 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 04:45:13.186419 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 04:45:13.189838 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 04:45:13.204628 systemd[1]: Switching root. Jun 21 04:45:13.243170 systemd-journald[205]: Journal stopped Jun 21 04:45:17.043193 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). 
Jun 21 04:45:17.043221 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 04:45:17.043232 kernel: SELinux: policy capability open_perms=1 Jun 21 04:45:17.043239 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 04:45:17.043247 kernel: SELinux: policy capability always_check_network=0 Jun 21 04:45:17.043254 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 04:45:17.043264 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 04:45:17.043272 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 04:45:17.043279 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 04:45:17.043286 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 04:45:17.043293 kernel: audit: type=1403 audit(1750481114.481:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 04:45:17.043302 systemd[1]: Successfully loaded SELinux policy in 104.945ms. Jun 21 04:45:17.043315 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.893ms. Jun 21 04:45:17.043326 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 04:45:17.043335 systemd[1]: Detected virtualization microsoft. Jun 21 04:45:17.043437 systemd[1]: Detected architecture x86-64. Jun 21 04:45:17.043446 systemd[1]: Detected first boot. Jun 21 04:45:17.043455 systemd[1]: Hostname set to . Jun 21 04:45:17.043464 systemd[1]: Initializing machine ID from random generator. Jun 21 04:45:17.043473 zram_generator::config[1184]: No configuration found. Jun 21 04:45:17.043482 kernel: Guest personality initialized and is inactive Jun 21 04:45:17.043490 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jun 21 04:45:17.043498 kernel: Initialized host personality Jun 21 04:45:17.043505 kernel: NET: Registered PF_VSOCK protocol family Jun 21 04:45:17.043514 systemd[1]: Populated /etc with preset unit settings. Jun 21 04:45:17.043524 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 04:45:17.043533 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 04:45:17.043541 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 04:45:17.043549 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 04:45:17.043557 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 04:45:17.043566 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 04:45:17.043574 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 04:45:17.043584 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 04:45:17.043593 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 04:45:17.043601 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 04:45:17.043609 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 04:45:17.043617 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 04:45:17.043625 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 21 04:45:17.043634 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 04:45:17.043642 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 04:45:17.043652 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 04:45:17.043662 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 04:45:17.043671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 04:45:17.043680 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 21 04:45:17.043688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 04:45:17.043696 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 04:45:17.043705 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 04:45:17.043713 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 04:45:17.043723 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 04:45:17.043731 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 04:45:17.043740 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 04:45:17.043749 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 04:45:17.043757 systemd[1]: Reached target slices.target - Slice Units. Jun 21 04:45:17.043766 systemd[1]: Reached target swap.target - Swaps. Jun 21 04:45:17.043775 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 04:45:17.043783 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 04:45:17.043793 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 04:45:17.043801 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 04:45:17.043810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 04:45:17.043819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 04:45:17.043828 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 04:45:17.043838 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 04:45:17.043846 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 04:45:17.043855 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 04:45:17.043863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:45:17.043872 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 04:45:17.043880 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 04:45:17.043888 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 04:45:17.043897 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 04:45:17.043907 systemd[1]: Reached target machines.target - Containers. Jun 21 04:45:17.043917 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jun 21 04:45:17.043926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:45:17.043935 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 04:45:17.043943 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 04:45:17.043952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:45:17.043960 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 04:45:17.043969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:45:17.043979 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 04:45:17.043988 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:45:17.043997 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 04:45:17.044005 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 04:45:17.044014 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 04:45:17.044022 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 04:45:17.044031 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 04:45:17.044040 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:45:17.044050 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 04:45:17.044058 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 04:45:17.044067 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 04:45:17.044076 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 04:45:17.044085 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 04:45:17.044093 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 04:45:17.044103 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 04:45:17.044112 systemd[1]: Stopped verity-setup.service. Jun 21 04:45:17.044120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:45:17.044130 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 04:45:17.044138 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 04:45:17.044147 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 04:45:17.044156 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 04:45:17.044164 kernel: loop: module loaded Jun 21 04:45:17.044173 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 04:45:17.044181 kernel: fuse: init (API version 7.41) Jun 21 04:45:17.044189 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 04:45:17.044215 systemd-journald[1270]: Collecting audit messages is disabled. Jun 21 04:45:17.044236 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jun 21 04:45:17.044245 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 04:45:17.044254 systemd-journald[1270]: Journal started Jun 21 04:45:17.044275 systemd-journald[1270]: Runtime Journal (/run/log/journal/5816ff2e1cd944f0b590b905014686f9) is 8M, max 159M, 151M free. Jun 21 04:45:16.678469 systemd[1]: Queued start job for default target multi-user.target. Jun 21 04:45:16.686791 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 21 04:45:16.687087 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 04:45:17.050360 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 04:45:17.053890 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 04:45:17.054022 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 21 04:45:17.056035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:45:17.056160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:45:17.059606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:45:17.059741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:45:17.061911 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 04:45:17.062036 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 04:45:17.065625 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:45:17.065808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:45:17.067581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 04:45:17.069354 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 04:45:17.071833 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 04:45:17.073778 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 04:45:17.081788 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 04:45:17.088419 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 04:45:17.101420 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 04:45:17.104445 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 04:45:17.104536 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 04:45:17.106735 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 04:45:17.111413 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 04:45:17.113109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:45:17.114478 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 04:45:17.119448 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 04:45:17.122805 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 04:45:17.125081 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jun 21 04:45:17.127464 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 04:45:17.129512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:45:17.137117 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 04:45:17.143366 kernel: ACPI: bus type drm_connector registered Jun 21 04:45:17.143566 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 04:45:17.146898 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 04:45:17.147530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 04:45:17.150870 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 04:45:17.155320 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 04:45:17.158546 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 04:45:17.161442 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 04:45:17.163206 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 04:45:17.166622 systemd-journald[1270]: Time spent on flushing to /var/log/journal/5816ff2e1cd944f0b590b905014686f9 is 27.390ms for 983 entries. Jun 21 04:45:17.166622 systemd-journald[1270]: System Journal (/var/log/journal/5816ff2e1cd944f0b590b905014686f9) is 11.8M, max 2.6G, 2.6G free. Jun 21 04:45:17.223307 systemd-journald[1270]: Received client request to flush runtime journal. Jun 21 04:45:17.223338 systemd-journald[1270]: /var/log/journal/5816ff2e1cd944f0b590b905014686f9/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jun 21 04:45:17.223371 systemd-journald[1270]: Rotating system journal. Jun 21 04:45:17.223387 kernel: loop0: detected capacity change from 0 to 28496 Jun 21 04:45:17.167586 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 04:45:17.202898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:45:17.224223 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 04:45:17.235884 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 04:45:17.390882 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 04:45:17.394131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 04:45:17.463362 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 04:45:17.509089 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Jun 21 04:45:17.509104 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Jun 21 04:45:17.530563 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 04:45:17.533372 kernel: loop1: detected capacity change from 0 to 113872 Jun 21 04:45:17.688146 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 04:45:17.833367 kernel: loop2: detected capacity change from 0 to 229808 Jun 21 04:45:17.873429 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 04:45:17.877296 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 21 04:45:17.881434 kernel: loop3: detected capacity change from 0 to 146240 Jun 21 04:45:17.902676 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Jun 21 04:45:18.073903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 04:45:18.078291 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 04:45:18.145448 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 04:45:18.157736 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 04:45:18.186365 kernel: loop4: detected capacity change from 0 to 28496 Jun 21 04:45:18.201360 kernel: loop5: detected capacity change from 0 to 113872 Jun 21 04:45:18.211735 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 04:45:18.220369 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 04:45:18.223372 kernel: loop6: detected capacity change from 0 to 229808 Jun 21 04:45:18.235367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#97 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 21 04:45:18.242366 kernel: loop7: detected capacity change from 0 to 146240 Jun 21 04:45:18.259374 kernel: hv_vmbus: registering driver hyperv_fb Jun 21 04:45:18.261694 (sd-merge)[1389]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 21 04:45:18.262035 (sd-merge)[1389]: Merged extensions into '/usr'. Jun 21 04:45:18.266360 kernel: hv_vmbus: registering driver hv_balloon Jun 21 04:45:18.271786 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 21 04:45:18.271829 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 21 04:45:18.272450 systemd[1]: Reload requested from client PID 1324 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 04:45:18.272460 systemd[1]: Reloading... Jun 21 04:45:18.279379 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 21 04:45:18.279425 kernel: Console: switching to colour dummy device 80x25 Jun 21 04:45:18.287374 kernel: Console: switching to colour frame buffer device 128x48 Jun 21 04:45:18.372031 zram_generator::config[1445]: No configuration found. Jun 21 04:45:18.412333 systemd-networkd[1357]: lo: Link UP Jun 21 04:45:18.413564 systemd-networkd[1357]: lo: Gained carrier Jun 21 04:45:18.414473 systemd-networkd[1357]: Enumeration completed Jun 21 04:45:18.414697 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:45:18.414699 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:45:18.416288 systemd-networkd[1357]: eth0: Link UP Jun 21 04:45:18.416679 systemd-networkd[1357]: eth0: Gained carrier Jun 21 04:45:18.417399 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:45:18.432438 systemd-networkd[1357]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 21 04:45:18.562713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 21 04:45:18.635361 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 21 04:45:18.663975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jun 21 04:45:18.666548 systemd[1]: Reloading finished in 393 ms. Jun 21 04:45:18.682759 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 04:45:18.685585 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 04:45:18.718084 systemd[1]: Starting ensure-sysext.service... Jun 21 04:45:18.721446 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 04:45:18.725667 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 04:45:18.729185 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 04:45:18.734048 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 04:45:18.739468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 04:45:18.748457 systemd[1]: Reload requested from client PID 1515 ('systemctl') (unit ensure-sysext.service)... Jun 21 04:45:18.748467 systemd[1]: Reloading... Jun 21 04:45:18.763861 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 04:45:18.763960 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 04:45:18.764155 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 04:45:18.764412 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 04:45:18.764808 systemd-tmpfiles[1519]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 04:45:18.764935 systemd-tmpfiles[1519]: ACLs are not supported, ignoring. Jun 21 04:45:18.764961 systemd-tmpfiles[1519]: ACLs are not supported, ignoring. Jun 21 04:45:18.767656 systemd-tmpfiles[1519]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 04:45:18.767736 systemd-tmpfiles[1519]: Skipping /boot Jun 21 04:45:18.773591 systemd-tmpfiles[1519]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 04:45:18.773601 systemd-tmpfiles[1519]: Skipping /boot Jun 21 04:45:18.812363 zram_generator::config[1551]: No configuration found. Jun 21 04:45:18.887432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:45:18.967152 systemd[1]: Reloading finished in 218 ms. Jun 21 04:45:18.990433 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 04:45:18.990760 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 04:45:18.991037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 04:45:18.998302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:45:18.999228 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jun 21 04:45:19.002187 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 04:45:19.003446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:45:19.004555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:45:19.011819 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:45:19.014912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:45:19.016641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:45:19.016745 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:45:19.022806 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 04:45:19.025614 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 04:45:19.029472 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 04:45:19.031104 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:45:19.032538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:45:19.032678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:45:19.035429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:45:19.035585 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:45:19.038653 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 04:45:19.038779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:45:19.044678 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:45:19.044817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:45:19.046904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 04:45:19.051680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 04:45:19.054586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 04:45:19.056159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:45:19.056269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:45:19.056366 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:45:19.068403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 04:45:19.071758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 04:45:19.074043 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 21 04:45:19.074182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 04:45:19.076297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 04:45:19.076445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 04:45:19.081087 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 04:45:19.086807 systemd[1]: Finished ensure-sysext.service. Jun 21 04:45:19.091275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:45:19.092148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 04:45:19.094295 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 04:45:19.095872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 04:45:19.095903 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 04:45:19.095935 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 04:45:19.095966 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 04:45:19.096147 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 04:45:19.098104 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 04:45:19.101526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 04:45:19.104669 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 04:45:19.104794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 04:45:19.120681 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 04:45:19.138904 augenrules[1662]: No rules Jun 21 04:45:19.139562 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:45:19.139714 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:45:19.142838 systemd-resolved[1625]: Positive Trust Anchors: Jun 21 04:45:19.142847 systemd-resolved[1625]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 04:45:19.142877 systemd-resolved[1625]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 04:45:19.146251 systemd-resolved[1625]: Using system hostname 'ci-4372.0.0-a-59b94489dc'. Jun 21 04:45:19.147197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 04:45:19.149443 systemd[1]: Reached target network.target - Network. 
Jun 21 04:45:19.150276 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 04:45:19.450446 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 04:45:19.454523 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 04:45:19.885611 systemd-networkd[1357]: eth0: Gained IPv6LL Jun 21 04:45:19.887323 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 04:45:19.890586 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 04:45:21.119901 ldconfig[1319]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 04:45:21.130981 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 04:45:21.134482 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 04:45:21.155715 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 04:45:21.159552 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 04:45:21.162461 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 04:45:21.165419 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 04:45:21.166540 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 04:45:21.167922 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 04:45:21.168978 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 04:45:21.171381 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 04:45:21.172452 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 04:45:21.172478 systemd[1]: Reached target paths.target - Path Units. Jun 21 04:45:21.174380 systemd[1]: Reached target timers.target - Timer Units. Jun 21 04:45:21.175715 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 04:45:21.179232 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 04:45:21.183117 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 04:45:21.186515 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 04:45:21.189387 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 04:45:21.192619 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 04:45:21.195692 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 04:45:21.198860 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 04:45:21.201978 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 04:45:21.204383 systemd[1]: Reached target basic.target - Basic System. Jun 21 04:45:21.206410 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 04:45:21.206430 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jun 21 04:45:21.208170 systemd[1]: Starting chronyd.service - NTP client/server... Jun 21 04:45:21.211160 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 04:45:21.216443 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 04:45:21.221443 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 04:45:21.225634 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 04:45:21.229324 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 04:45:21.233189 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 04:45:21.233253 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 04:45:21.235013 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 04:45:21.236490 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jun 21 04:45:21.240399 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 21 04:45:21.244430 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 21 04:45:21.247283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:21.250904 jq[1683]: false Jun 21 04:45:21.251858 KVP[1686]: KVP starting; pid is:1686 Jun 21 04:45:21.252224 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 04:45:21.256228 KVP[1686]: KVP LIC Version: 3.1 Jun 21 04:45:21.256360 kernel: hv_utils: KVP IC version 4.0 Jun 21 04:45:21.256518 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 04:45:21.262693 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 04:45:21.266557 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Refreshing passwd entry cache Jun 21 04:45:21.266267 oslogin_cache_refresh[1685]: Refreshing passwd entry cache Jun 21 04:45:21.272195 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 04:45:21.279492 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 04:45:21.285770 extend-filesystems[1684]: Found /dev/nvme0n1p6 Jun 21 04:45:21.286513 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 04:45:21.290016 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 04:45:21.291726 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 04:45:21.292907 extend-filesystems[1684]: Found /dev/nvme0n1p9 Jun 21 04:45:21.293481 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 04:45:21.299433 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jun 21 04:45:21.301806 extend-filesystems[1684]: Checking size of /dev/nvme0n1p9 Jun 21 04:45:21.308672 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Failure getting users, quitting Jun 21 04:45:21.308672 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 04:45:21.308672 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Refreshing group entry cache Jun 21 04:45:21.307215 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 04:45:21.303070 oslogin_cache_refresh[1685]: Failure getting users, quitting Jun 21 04:45:21.303083 oslogin_cache_refresh[1685]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 04:45:21.303112 oslogin_cache_refresh[1685]: Refreshing group entry cache Jun 21 04:45:21.312653 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 04:45:21.312804 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 04:45:21.313560 (chronyd)[1675]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 21 04:45:21.319639 chronyd[1717]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 21 04:45:21.323796 systemd[1]: Started chronyd.service - NTP client/server. Jun 21 04:45:21.325416 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Failure getting groups, quitting Jun 21 04:45:21.325416 google_oslogin_nss_cache[1685]: oslogin_cache_refresh[1685]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 04:45:21.320553 oslogin_cache_refresh[1685]: Failure getting groups, quitting Jun 21 04:45:21.325313 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 04:45:21.320561 oslogin_cache_refresh[1685]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 04:45:21.325832 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 04:45:21.322316 chronyd[1717]: Timezone right/UTC failed leap second check, ignoring Jun 21 04:45:21.327496 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 04:45:21.322460 chronyd[1717]: Loaded seccomp filter (level 2) Jun 21 04:45:21.327729 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 04:45:21.334679 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 04:45:21.334844 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 04:45:21.336469 jq[1703]: true Jun 21 04:45:21.347514 extend-filesystems[1684]: Old size kept for /dev/nvme0n1p9 Jun 21 04:45:21.350497 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 04:45:21.352175 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 04:45:21.352340 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jun 21 04:45:21.358977 (ntainerd)[1720]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 04:45:21.372811 update_engine[1701]: I20250621 04:45:21.372648 1701 main.cc:92] Flatcar Update Engine starting Jun 21 04:45:21.378366 jq[1723]: true Jun 21 04:45:21.413370 tar[1714]: linux-amd64/LICENSE Jun 21 04:45:21.413370 tar[1714]: linux-amd64/helm Jun 21 04:45:21.433924 dbus-daemon[1678]: [system] SELinux support is enabled Jun 21 04:45:21.434175 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 04:45:21.440874 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 04:45:21.441287 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 04:45:21.444806 update_engine[1701]: I20250621 04:45:21.443507 1701 update_check_scheduler.cc:74] Next update check in 9m55s Jun 21 04:45:21.443033 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 04:45:21.443051 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 04:45:21.453376 systemd[1]: Started update-engine.service - Update Engine. Jun 21 04:45:21.466550 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 04:45:21.472300 bash[1762]: Updated "/home/core/.ssh/authorized_keys" Jun 21 04:45:21.472530 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 04:45:21.475611 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 21 04:45:21.478242 systemd-logind[1698]: New seat seat0. Jun 21 04:45:21.484067 systemd-logind[1698]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 04:45:21.484187 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 04:45:21.524046 coreos-metadata[1677]: Jun 21 04:45:21.523 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 21 04:45:21.532790 coreos-metadata[1677]: Jun 21 04:45:21.532 INFO Fetch successful Jun 21 04:45:21.532870 coreos-metadata[1677]: Jun 21 04:45:21.532 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 21 04:45:21.536951 coreos-metadata[1677]: Jun 21 04:45:21.536 INFO Fetch successful Jun 21 04:45:21.538438 coreos-metadata[1677]: Jun 21 04:45:21.537 INFO Fetching http://168.63.129.16/machine/a6687b74-a34e-49d6-ac86-04a0961b5373/0c4365f2%2Deb60%2D4100%2D9023%2Dde27ecfd1647.%5Fci%2D4372.0.0%2Da%2D59b94489dc?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 21 04:45:21.542460 coreos-metadata[1677]: Jun 21 04:45:21.542 INFO Fetch successful Jun 21 04:45:21.542460 coreos-metadata[1677]: Jun 21 04:45:21.542 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 21 04:45:21.552565 coreos-metadata[1677]: Jun 21 04:45:21.552 INFO Fetch successful Jun 21 04:45:21.600377 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 21 04:45:21.601994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
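The metadata agent above talks to two Azure endpoints: the wireserver at 168.63.129.16 (versions, goal state, shared config) and the link-local instance metadata service at 169.254.169.254 (vmSize). A minimal sketch of equivalent fetches, assuming the wireserver accepts the 2012-11-30 protocol version that waagent reports negotiating later in this log, and that IMDS requires its usual Metadata header:

```python
import urllib.request

WIRESERVER = "168.63.129.16"   # goal-state / shared-config endpoint fetched above
IMDS = "169.254.169.254"       # instance metadata service fetched above

# Goal state: the wireserver expects an x-ms-version header (2012-11-30 is the
# wire protocol version waagent reports further down in this log).
req = urllib.request.Request(
    f"http://{WIRESERVER}/machine/?comp=goalstate",
    headers={"x-ms-version": "2012-11-30"},
)
print(urllib.request.urlopen(req, timeout=10).read()[:200])

# IMDS: requires the "Metadata: true" header on every request.
req = urllib.request.Request(
    f"http://{IMDS}/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text",
    headers={"Metadata": "true"},
)
print(urllib.request.urlopen(req, timeout=10).read().decode())
```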
Jun 21 04:45:21.760645 locksmithd[1769]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 04:45:22.070175 sshd_keygen[1738]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 04:45:22.107987 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 04:45:22.115294 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 04:45:22.119527 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 21 04:45:22.140142 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 04:45:22.141435 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 04:45:22.145549 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 21 04:45:22.164951 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 21 04:45:22.167950 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 04:45:22.174580 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 04:45:22.178612 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 21 04:45:22.180329 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 04:45:22.203242 tar[1714]: linux-amd64/README.md Jun 21 04:45:22.212868 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 04:45:22.220323 containerd[1720]: time="2025-06-21T04:45:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 04:45:22.221008 containerd[1720]: time="2025-06-21T04:45:22.220985797Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 04:45:22.229244 containerd[1720]: time="2025-06-21T04:45:22.229222810Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.656µs" Jun 21 04:45:22.229311 containerd[1720]: time="2025-06-21T04:45:22.229302547Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 04:45:22.229358 containerd[1720]: time="2025-06-21T04:45:22.229335790Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229479003Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229500313Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229519414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229559680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229568110Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229749393Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the 
btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229758096Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229767659Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229775284Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229825497Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230250 containerd[1720]: time="2025-06-21T04:45:22.229960484Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230500 containerd[1720]: time="2025-06-21T04:45:22.229978765Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 04:45:22.230500 containerd[1720]: time="2025-06-21T04:45:22.229987118Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 04:45:22.230500 containerd[1720]: time="2025-06-21T04:45:22.230020194Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 04:45:22.230500 containerd[1720]: time="2025-06-21T04:45:22.230275447Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 04:45:22.230500 containerd[1720]: time="2025-06-21T04:45:22.230324128Z" level=info msg="metadata content store policy set" policy=shared Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245052909Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245102611Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245118845Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245130411Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245142015Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245153286Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245164662Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245175111Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 04:45:22.245651 containerd[1720]: 
time="2025-06-21T04:45:22.245184468Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245197526Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245206375Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245221583Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245308991Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 04:45:22.245651 containerd[1720]: time="2025-06-21T04:45:22.245322375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245334546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245381591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245390869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245399829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245409627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245417973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245427371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245453415Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245463064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245521770Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245534103Z" level=info msg="Start snapshots syncer" Jun 21 04:45:22.245920 containerd[1720]: time="2025-06-21T04:45:22.245561031Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 04:45:22.246140 containerd[1720]: time="2025-06-21T04:45:22.245791411Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 04:45:22.246140 containerd[1720]: time="2025-06-21T04:45:22.245852563Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.245921261Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246004864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246019768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246028559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246039009Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246059627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246069585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246079111Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246098584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 04:45:22.246258 containerd[1720]: 
time="2025-06-21T04:45:22.246108261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246125911Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246151038Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246163833Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 04:45:22.246258 containerd[1720]: time="2025-06-21T04:45:22.246171514Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246181009Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246186684Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246202410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246215951Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246231038Z" level=info msg="runtime interface created" Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246235587Z" level=info msg="created NRI interface" Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246242229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246251265Z" level=info msg="Connect containerd service" Jun 21 04:45:22.246500 containerd[1720]: time="2025-06-21T04:45:22.246278419Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 04:45:22.247460 containerd[1720]: time="2025-06-21T04:45:22.246870359Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 04:45:22.645498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:22.654581 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.900963333Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901011437Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901032351Z" level=info msg="Start subscribing containerd event" Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901055615Z" level=info msg="Start recovering state" Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901134858Z" level=info msg="Start event monitor" Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901144309Z" level=info msg="Start cni network conf syncer for default" Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901152804Z" level=info msg="Start streaming server" Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901163543Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901169884Z" level=info msg="runtime interface starting up..." Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901175200Z" level=info msg="starting plugins..." Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901184686Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 04:45:22.901520 containerd[1720]: time="2025-06-21T04:45:22.901259676Z" level=info msg="containerd successfully booted in 0.681204s" Jun 21 04:45:22.901410 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 04:45:22.903129 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 04:45:22.906627 systemd[1]: Startup finished in 2.794s (kernel) + 1min 11.833s (initrd) + 8.528s (userspace) = 1min 23.157s. Jun 21 04:45:23.131150 login[1820]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 21 04:45:23.134542 login[1821]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 21 04:45:23.140531 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 04:45:23.141371 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 04:45:23.150280 systemd-logind[1698]: New session 1 of user core. Jun 21 04:45:23.158302 systemd-logind[1698]: New session 2 of user core. Jun 21 04:45:23.164215 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 21 04:45:23.167981 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 04:45:23.179243 (systemd)[1857]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 04:45:23.182793 systemd-logind[1698]: New session c1 of user core. Jun 21 04:45:23.210648 kubelet[1840]: E0621 04:45:23.210612 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:45:23.212639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:45:23.212826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:45:23.213202 systemd[1]: kubelet.service: Consumed 845ms CPU time, 265.9M memory peak. 
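The startup total above (1min 23.157s) is 2 ms larger than the sum of the three displayed phases, presumably because each phase is rounded for display while the total is derived from the raw timestamps. A quick check of the displayed components:

```python
# Phases from the "Startup finished" line above, in seconds.
kernel, initrd, userspace = 2.794, 71.833, 8.528
print(round(kernel + initrd + userspace, 3))   # 83.155 -> "1min 23.155s"
```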
Jun 21 04:45:23.282912 waagent[1818]: 2025-06-21T04:45:23.282858Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 21 04:45:23.284275 waagent[1818]: 2025-06-21T04:45:23.284093Z INFO Daemon Daemon OS: flatcar 4372.0.0 Jun 21 04:45:23.285560 waagent[1818]: 2025-06-21T04:45:23.285090Z INFO Daemon Daemon Python: 3.11.12 Jun 21 04:45:23.286655 waagent[1818]: 2025-06-21T04:45:23.286626Z INFO Daemon Daemon Run daemon Jun 21 04:45:23.287779 waagent[1818]: 2025-06-21T04:45:23.287742Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.0' Jun 21 04:45:23.289298 waagent[1818]: 2025-06-21T04:45:23.288239Z INFO Daemon Daemon Using waagent for provisioning Jun 21 04:45:23.291204 waagent[1818]: 2025-06-21T04:45:23.291178Z INFO Daemon Daemon Activate resource disk Jun 21 04:45:23.294351 waagent[1818]: 2025-06-21T04:45:23.292415Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 21 04:45:23.295899 waagent[1818]: 2025-06-21T04:45:23.295870Z INFO Daemon Daemon Found device: None Jun 21 04:45:23.297067 waagent[1818]: 2025-06-21T04:45:23.296994Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 21 04:45:23.299204 waagent[1818]: 2025-06-21T04:45:23.299128Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 21 04:45:23.302214 waagent[1818]: 2025-06-21T04:45:23.302182Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 21 04:45:23.303636 waagent[1818]: 2025-06-21T04:45:23.303609Z INFO Daemon Daemon Running default provisioning handler Jun 21 04:45:23.310216 waagent[1818]: 2025-06-21T04:45:23.309955Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 21 04:45:23.311211 waagent[1818]: 2025-06-21T04:45:23.311180Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 21 04:45:23.311372 waagent[1818]: 2025-06-21T04:45:23.311337Z INFO Daemon Daemon cloud-init is enabled: False Jun 21 04:45:23.311544 waagent[1818]: 2025-06-21T04:45:23.311531Z INFO Daemon Daemon Copying ovf-env.xml Jun 21 04:45:23.342792 systemd[1857]: Queued start job for default target default.target. Jun 21 04:45:23.352975 systemd[1857]: Created slice app.slice - User Application Slice. Jun 21 04:45:23.353002 systemd[1857]: Reached target paths.target - Paths. Jun 21 04:45:23.353027 systemd[1857]: Reached target timers.target - Timers. Jun 21 04:45:23.353785 systemd[1857]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 04:45:23.360761 systemd[1857]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 04:45:23.361376 systemd[1857]: Reached target sockets.target - Sockets. Jun 21 04:45:23.361405 systemd[1857]: Reached target basic.target - Basic System. Jun 21 04:45:23.361429 systemd[1857]: Reached target default.target - Main User Target. Jun 21 04:45:23.361448 systemd[1857]: Startup finished in 173ms. Jun 21 04:45:23.361532 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 04:45:23.364915 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 04:45:23.365850 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 21 04:45:23.372860 waagent[1818]: 2025-06-21T04:45:23.370799Z INFO Daemon Daemon Successfully mounted dvd Jun 21 04:45:23.381898 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 21 04:45:23.387860 waagent[1818]: 2025-06-21T04:45:23.382881Z INFO Daemon Daemon Detect protocol endpoint Jun 21 04:45:23.387860 waagent[1818]: 2025-06-21T04:45:23.383021Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 21 04:45:23.387860 waagent[1818]: 2025-06-21T04:45:23.383132Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jun 21 04:45:23.387860 waagent[1818]: 2025-06-21T04:45:23.383173Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 21 04:45:23.387860 waagent[1818]: 2025-06-21T04:45:23.383478Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 21 04:45:23.387860 waagent[1818]: 2025-06-21T04:45:23.383597Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 21 04:45:23.391647 waagent[1818]: 2025-06-21T04:45:23.391617Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 21 04:45:23.392098 waagent[1818]: 2025-06-21T04:45:23.391848Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 21 04:45:23.392098 waagent[1818]: 2025-06-21T04:45:23.391916Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 21 04:45:23.477627 waagent[1818]: 2025-06-21T04:45:23.477555Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 21 04:45:23.478676 waagent[1818]: 2025-06-21T04:45:23.478085Z INFO Daemon Daemon Forcing an update of the goal state. Jun 21 04:45:23.486083 waagent[1818]: 2025-06-21T04:45:23.486046Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 21 04:45:23.517997 waagent[1818]: 2025-06-21T04:45:23.517967Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 21 04:45:23.519103 waagent[1818]: 2025-06-21T04:45:23.519075Z INFO Daemon Jun 21 04:45:23.519619 waagent[1818]: 2025-06-21T04:45:23.519556Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0605e232-9ec6-4725-8b75-c6ec2200759f eTag: 10489386606525195637 source: Fabric] Jun 21 04:45:23.521465 waagent[1818]: 2025-06-21T04:45:23.521439Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 21 04:45:23.521866 waagent[1818]: 2025-06-21T04:45:23.521847Z INFO Daemon Jun 21 04:45:23.522977 waagent[1818]: 2025-06-21T04:45:23.522954Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 21 04:45:23.528085 waagent[1818]: 2025-06-21T04:45:23.528056Z INFO Daemon Daemon Downloading artifacts profile blob Jun 21 04:45:23.715768 waagent[1818]: 2025-06-21T04:45:23.715727Z INFO Daemon Downloaded certificate {'thumbprint': 'B70C9DE074B0AB08B0E1EB9A2848F0C65D52F716', 'hasPrivateKey': True} Jun 21 04:45:23.717518 waagent[1818]: 2025-06-21T04:45:23.717487Z INFO Daemon Fetch goal state completed Jun 21 04:45:23.779374 waagent[1818]: 2025-06-21T04:45:23.779269Z INFO Daemon Daemon Starting provisioning Jun 21 04:45:23.779894 waagent[1818]: 2025-06-21T04:45:23.779722Z INFO Daemon Daemon Handle ovf-env.xml. 
Jun 21 04:45:23.780151 waagent[1818]: 2025-06-21T04:45:23.780130Z INFO Daemon Daemon Set hostname [ci-4372.0.0-a-59b94489dc] Jun 21 04:45:23.797587 waagent[1818]: 2025-06-21T04:45:23.797547Z INFO Daemon Daemon Publish hostname [ci-4372.0.0-a-59b94489dc] Jun 21 04:45:23.798515 waagent[1818]: 2025-06-21T04:45:23.798486Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 21 04:45:23.799603 waagent[1818]: 2025-06-21T04:45:23.799578Z INFO Daemon Daemon Primary interface is [eth0] Jun 21 04:45:23.805127 systemd-networkd[1357]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 04:45:23.805134 systemd-networkd[1357]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 04:45:23.805216 systemd-networkd[1357]: eth0: DHCP lease lost Jun 21 04:45:23.805882 waagent[1818]: 2025-06-21T04:45:23.805842Z INFO Daemon Daemon Create user account if not exists Jun 21 04:45:23.806820 waagent[1818]: 2025-06-21T04:45:23.806790Z INFO Daemon Daemon User core already exists, skip useradd Jun 21 04:45:23.807098 waagent[1818]: 2025-06-21T04:45:23.807036Z INFO Daemon Daemon Configure sudoer Jun 21 04:45:23.812151 waagent[1818]: 2025-06-21T04:45:23.812112Z INFO Daemon Daemon Configure sshd Jun 21 04:45:23.820422 waagent[1818]: 2025-06-21T04:45:23.820095Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 21 04:45:23.822394 waagent[1818]: 2025-06-21T04:45:23.822293Z INFO Daemon Daemon Deploy ssh public key. Jun 21 04:45:23.827385 systemd-networkd[1357]: eth0: DHCPv4 address 10.200.8.44/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 21 04:45:24.916602 waagent[1818]: 2025-06-21T04:45:24.916404Z INFO Daemon Daemon Provisioning complete Jun 21 04:45:24.927732 waagent[1818]: 2025-06-21T04:45:24.927692Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 21 04:45:24.928757 waagent[1818]: 2025-06-21T04:45:24.928728Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jun 21 04:45:24.929319 waagent[1818]: 2025-06-21T04:45:24.929150Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 21 04:45:25.018400 waagent[1907]: 2025-06-21T04:45:25.018329Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 21 04:45:25.018596 waagent[1907]: 2025-06-21T04:45:25.018424Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.0 Jun 21 04:45:25.018596 waagent[1907]: 2025-06-21T04:45:25.018460Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 21 04:45:25.018596 waagent[1907]: 2025-06-21T04:45:25.018495Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jun 21 04:45:25.037568 waagent[1907]: 2025-06-21T04:45:25.037527Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 21 04:45:25.037689 waagent[1907]: 2025-06-21T04:45:25.037667Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 21 04:45:25.037746 waagent[1907]: 2025-06-21T04:45:25.037712Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 21 04:45:25.042765 waagent[1907]: 2025-06-21T04:45:25.042722Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 21 04:45:25.049652 waagent[1907]: 2025-06-21T04:45:25.049622Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 21 04:45:25.049949 waagent[1907]: 2025-06-21T04:45:25.049924Z INFO ExtHandler Jun 21 04:45:25.049983 waagent[1907]: 2025-06-21T04:45:25.049971Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: bc7270dd-3d79-408c-9d90-af065b5f3623 eTag: 10489386606525195637 source: Fabric] Jun 21 04:45:25.050154 waagent[1907]: 2025-06-21T04:45:25.050136Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 21 04:45:25.050480 waagent[1907]: 2025-06-21T04:45:25.050458Z INFO ExtHandler Jun 21 04:45:25.050520 waagent[1907]: 2025-06-21T04:45:25.050495Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 21 04:45:25.054619 waagent[1907]: 2025-06-21T04:45:25.054595Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 21 04:45:25.147997 waagent[1907]: 2025-06-21T04:45:25.147954Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B70C9DE074B0AB08B0E1EB9A2848F0C65D52F716', 'hasPrivateKey': True} Jun 21 04:45:25.148276 waagent[1907]: 2025-06-21T04:45:25.148250Z INFO ExtHandler Fetch goal state completed Jun 21 04:45:25.160929 waagent[1907]: 2025-06-21T04:45:25.160890Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 21 04:45:25.164511 waagent[1907]: 2025-06-21T04:45:25.164472Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1907 Jun 21 04:45:25.164611 waagent[1907]: 2025-06-21T04:45:25.164576Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 21 04:45:25.164811 waagent[1907]: 2025-06-21T04:45:25.164792Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 21 04:45:25.165675 waagent[1907]: 2025-06-21T04:45:25.165649Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 21 04:45:25.165911 waagent[1907]: 2025-06-21T04:45:25.165890Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 21 04:45:25.166006 waagent[1907]: 2025-06-21T04:45:25.165988Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 21 04:45:25.166335 waagent[1907]: 2025-06-21T04:45:25.166312Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 21 04:45:25.188401 waagent[1907]: 2025-06-21T04:45:25.188324Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 21 04:45:25.188489 waagent[1907]: 2025-06-21T04:45:25.188469Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 21 04:45:25.193162 waagent[1907]: 2025-06-21T04:45:25.193027Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 21 04:45:25.199770 systemd[1]: Reload requested from client PID 1922 ('systemctl') (unit waagent.service)... Jun 21 04:45:25.199781 systemd[1]: Reloading... Jun 21 04:45:25.248364 zram_generator::config[1956]: No configuration found. Jun 21 04:45:25.332760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:45:25.412748 systemd[1]: Reloading finished in 212 ms. 
Jun 21 04:45:25.434754 waagent[1907]: 2025-06-21T04:45:25.434707Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 21 04:45:25.434814 waagent[1907]: 2025-06-21T04:45:25.434793Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 21 04:45:26.081321 waagent[1907]: 2025-06-21T04:45:26.081262Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 21 04:45:26.081611 waagent[1907]: 2025-06-21T04:45:26.081574Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 21 04:45:26.082267 waagent[1907]: 2025-06-21T04:45:26.082236Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 21 04:45:26.082372 waagent[1907]: 2025-06-21T04:45:26.082269Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 21 04:45:26.082397 waagent[1907]: 2025-06-21T04:45:26.082362Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 21 04:45:26.082534 waagent[1907]: 2025-06-21T04:45:26.082515Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 21 04:45:26.082914 waagent[1907]: 2025-06-21T04:45:26.082890Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 21 04:45:26.082914 waagent[1907]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 21 04:45:26.082914 waagent[1907]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jun 21 04:45:26.082914 waagent[1907]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 21 04:45:26.082914 waagent[1907]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 21 04:45:26.082914 waagent[1907]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 21 04:45:26.082914 waagent[1907]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 21 04:45:26.083051 waagent[1907]: 2025-06-21T04:45:26.082952Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 21 04:45:26.083051 waagent[1907]: 2025-06-21T04:45:26.082997Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 21 04:45:26.083122 waagent[1907]: 2025-06-21T04:45:26.083085Z INFO EnvHandler ExtHandler Configure routes Jun 21 04:45:26.083444 waagent[1907]: 2025-06-21T04:45:26.083422Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 21 04:45:26.083516 waagent[1907]: 2025-06-21T04:45:26.083463Z INFO EnvHandler ExtHandler Gateway:None Jun 21 04:45:26.083849 waagent[1907]: 2025-06-21T04:45:26.083820Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 21 04:45:26.083908 waagent[1907]: 2025-06-21T04:45:26.083785Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 21 04:45:26.084226 waagent[1907]: 2025-06-21T04:45:26.084199Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 21 04:45:26.084272 waagent[1907]: 2025-06-21T04:45:26.084248Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
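In the MonitorHandler routing-table dump above, the Destination, Gateway, and Mask columns are raw /proc/net/route fields: IPv4 addresses as hex in host byte order (little-endian on this x86_64 guest). Decoding them recovers the addresses seen elsewhere in this log, e.g. the DHCP gateway 10.200.8.1, the wireserver 168.63.129.16, and the link-local metadata address 169.254.169.254. A small decoding sketch:

```python
import socket
import struct

def route_addr(hex_field: str) -> str:
    """Convert a /proc/net/route hex field (host byte order) to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_field, 16)))

print(route_addr("0108C80A"))  # 10.200.8.1      (default gateway)
print(route_addr("10813FA8"))  # 168.63.129.16   (Azure wireserver)
print(route_addr("FEA9FEA9"))  # 169.254.169.254 (instance metadata service)
```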
Jun 21 04:45:26.084663 waagent[1907]: 2025-06-21T04:45:26.084407Z INFO EnvHandler ExtHandler Routes:None Jun 21 04:45:26.084742 waagent[1907]: 2025-06-21T04:45:26.084725Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 21 04:45:26.093662 waagent[1907]: 2025-06-21T04:45:26.093633Z INFO ExtHandler ExtHandler Jun 21 04:45:26.093716 waagent[1907]: 2025-06-21T04:45:26.093686Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d1851301-6fdf-4883-bff3-571c950a14ca correlation c177fa71-be7c-4901-850d-e595cc7b2f06 created: 2025-06-21T04:43:33.424873Z] Jun 21 04:45:26.093928 waagent[1907]: 2025-06-21T04:45:26.093906Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 21 04:45:26.094258 waagent[1907]: 2025-06-21T04:45:26.094238Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jun 21 04:45:26.118405 waagent[1907]: 2025-06-21T04:45:26.118337Z INFO MonitorHandler ExtHandler Network interfaces: Jun 21 04:45:26.118405 waagent[1907]: Executing ['ip', '-a', '-o', 'link']: Jun 21 04:45:26.118405 waagent[1907]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 21 04:45:26.118405 waagent[1907]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:77:d2:e5 brd ff:ff:ff:ff:ff:ff\ alias Network Device Jun 21 04:45:26.118405 waagent[1907]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 21 04:45:26.118405 waagent[1907]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 21 04:45:26.118405 waagent[1907]: 2: eth0 inet 10.200.8.44/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 21 04:45:26.118405 waagent[1907]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 21 04:45:26.118405 waagent[1907]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 21 04:45:26.118405 waagent[1907]: 2: eth0 inet6 fe80::7e1e:52ff:fe77:d2e5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 21 04:45:26.135711 waagent[1907]: 2025-06-21T04:45:26.135671Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jun 21 04:45:26.135711 waagent[1907]: Try `iptables -h' or 'iptables --help' for more information.) 
Jun 21 04:45:26.136733 waagent[1907]: 2025-06-21T04:45:26.136673Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A7604D20-D6A8-466F-A2B5-CCFF7DE17976;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jun 21 04:45:26.163770 waagent[1907]: 2025-06-21T04:45:26.163730Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jun 21 04:45:26.163770 waagent[1907]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 21 04:45:26.163770 waagent[1907]: pkts bytes target prot opt in out source destination Jun 21 04:45:26.163770 waagent[1907]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 21 04:45:26.163770 waagent[1907]: pkts bytes target prot opt in out source destination Jun 21 04:45:26.163770 waagent[1907]: Chain OUTPUT (policy ACCEPT 2 packets, 236 bytes) Jun 21 04:45:26.163770 waagent[1907]: pkts bytes target prot opt in out source destination Jun 21 04:45:26.163770 waagent[1907]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 21 04:45:26.163770 waagent[1907]: 5 647 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 21 04:45:26.163770 waagent[1907]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 21 04:45:26.166255 waagent[1907]: 2025-06-21T04:45:26.166213Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 21 04:45:26.166255 waagent[1907]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 21 04:45:26.166255 waagent[1907]: pkts bytes target prot opt in out source destination Jun 21 04:45:26.166255 waagent[1907]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 21 04:45:26.166255 waagent[1907]: pkts bytes target prot opt in out source destination Jun 21 04:45:26.166255 waagent[1907]: Chain OUTPUT (policy ACCEPT 5 packets, 585 bytes) Jun 21 04:45:26.166255 waagent[1907]: pkts bytes target prot opt in out source destination Jun 21 04:45:26.166255 waagent[1907]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 21 04:45:26.166255 waagent[1907]: 6 699 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 21 04:45:26.166255 waagent[1907]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 21 04:45:33.363816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 04:45:33.365593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:33.848585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:33.853549 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:45:33.890447 kubelet[2058]: E0621 04:45:33.890398 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:45:33.892910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:45:33.893029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:45:33.893297 systemd[1]: kubelet.service: Consumed 120ms CPU time, 109M memory peak. Jun 21 04:45:44.113871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
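The three OUTPUT rules waagent dumps above (in the security table, judging by the `iptables -w -t security ...` probe it runs) allow DNS and root-owned traffic to the wireserver and drop other new connections to it. A rough, illustrative equivalent of that policy, not waagent's own implementation, could look like:

```python
import subprocess

WIRESERVER = "168.63.129.16"
# Illustrative equivalents of the three OUTPUT rules dumped above; waagent
# manages these itself, this only sketches the same policy.
rules = [
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule], check=True)
```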
Jun 21 04:45:44.115668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:44.606184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:44.614537 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:45:44.645551 kubelet[2073]: E0621 04:45:44.645508 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:45:44.646945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:45:44.647055 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:45:44.647330 systemd[1]: kubelet.service: Consumed 116ms CPU time, 108.7M memory peak. Jun 21 04:45:45.105375 chronyd[1717]: Selected source PHC0 Jun 21 04:45:50.304499 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 04:45:50.305489 systemd[1]: Started sshd@0-10.200.8.44:22-10.200.16.10:44946.service - OpenSSH per-connection server daemon (10.200.16.10:44946). Jun 21 04:45:51.010114 sshd[2081]: Accepted publickey for core from 10.200.16.10 port 44946 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:51.011494 sshd-session[2081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:51.015725 systemd-logind[1698]: New session 3 of user core. Jun 21 04:45:51.021495 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 04:45:51.556319 systemd[1]: Started sshd@1-10.200.8.44:22-10.200.16.10:44956.service - OpenSSH per-connection server daemon (10.200.16.10:44956). Jun 21 04:45:52.184704 sshd[2086]: Accepted publickey for core from 10.200.16.10 port 44956 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:52.186034 sshd-session[2086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:52.190246 systemd-logind[1698]: New session 4 of user core. Jun 21 04:45:52.196485 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 04:45:52.626860 sshd[2088]: Connection closed by 10.200.16.10 port 44956 Jun 21 04:45:52.627378 sshd-session[2086]: pam_unix(sshd:session): session closed for user core Jun 21 04:45:52.630108 systemd[1]: sshd@1-10.200.8.44:22-10.200.16.10:44956.service: Deactivated successfully. Jun 21 04:45:52.631520 systemd[1]: session-4.scope: Deactivated successfully. Jun 21 04:45:52.632918 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit. Jun 21 04:45:52.633656 systemd-logind[1698]: Removed session 4. Jun 21 04:45:52.736891 systemd[1]: Started sshd@2-10.200.8.44:22-10.200.16.10:44970.service - OpenSSH per-connection server daemon (10.200.16.10:44970). Jun 21 04:45:53.371320 sshd[2094]: Accepted publickey for core from 10.200.16.10 port 44970 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:53.372593 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:53.376816 systemd-logind[1698]: New session 5 of user core. Jun 21 04:45:53.385502 systemd[1]: Started session-5.scope - Session 5 of User core. 
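The `SHA256:4oKQ...` values sshd logs for each accepted key above are OpenSSH-style fingerprints: the unpadded base64 of the SHA-256 digest of the raw key blob. A small sketch for reproducing one from an authorized_keys line (the path in the comment is only an example):

```python
import base64
import hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    """OpenSSH-style SHA256 fingerprint for one authorized_keys line."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# e.g. ssh_fingerprint(open("/home/core/.ssh/authorized_keys").readline())
# should match the fingerprint sshd logs when that key is used.
```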
Jun 21 04:45:53.809818 sshd[2096]: Connection closed by 10.200.16.10 port 44970 Jun 21 04:45:53.810271 sshd-session[2094]: pam_unix(sshd:session): session closed for user core Jun 21 04:45:53.813832 systemd[1]: sshd@2-10.200.8.44:22-10.200.16.10:44970.service: Deactivated successfully. Jun 21 04:45:53.815210 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 04:45:53.815792 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit. Jun 21 04:45:53.816726 systemd-logind[1698]: Removed session 5. Jun 21 04:45:53.924234 systemd[1]: Started sshd@3-10.200.8.44:22-10.200.16.10:44974.service - OpenSSH per-connection server daemon (10.200.16.10:44974). Jun 21 04:45:54.553904 sshd[2102]: Accepted publickey for core from 10.200.16.10 port 44974 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:54.555123 sshd-session[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:54.559191 systemd-logind[1698]: New session 6 of user core. Jun 21 04:45:54.565462 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 21 04:45:54.863812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 21 04:45:54.865513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:45:54.993785 sshd[2104]: Connection closed by 10.200.16.10 port 44974 Jun 21 04:45:54.994125 sshd-session[2102]: pam_unix(sshd:session): session closed for user core Jun 21 04:45:54.996406 systemd[1]: sshd@3-10.200.8.44:22-10.200.16.10:44974.service: Deactivated successfully. Jun 21 04:45:54.997580 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 04:45:54.998172 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit. Jun 21 04:45:54.999041 systemd-logind[1698]: Removed session 6. Jun 21 04:45:55.104277 systemd[1]: Started sshd@4-10.200.8.44:22-10.200.16.10:44990.service - OpenSSH per-connection server daemon (10.200.16.10:44990). Jun 21 04:45:55.379019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:45:55.384546 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:45:55.413662 kubelet[2120]: E0621 04:45:55.413631 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:45:55.414999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:45:55.415120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:45:55.415405 systemd[1]: kubelet.service: Consumed 110ms CPU time, 108.4M memory peak. Jun 21 04:45:55.733561 sshd[2113]: Accepted publickey for core from 10.200.16.10 port 44990 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:55.734875 sshd-session[2113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:55.738826 systemd-logind[1698]: New session 7 of user core. Jun 21 04:45:55.742451 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 21 04:45:56.167936 sudo[2128]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 04:45:56.168123 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:45:56.192176 sudo[2128]: pam_unix(sudo:session): session closed for user root Jun 21 04:45:56.292828 sshd[2127]: Connection closed by 10.200.16.10 port 44990 Jun 21 04:45:56.293331 sshd-session[2113]: pam_unix(sshd:session): session closed for user core Jun 21 04:45:56.297100 systemd[1]: sshd@4-10.200.8.44:22-10.200.16.10:44990.service: Deactivated successfully. Jun 21 04:45:56.298677 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 04:45:56.299290 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit. Jun 21 04:45:56.300240 systemd-logind[1698]: Removed session 7. Jun 21 04:45:56.410336 systemd[1]: Started sshd@5-10.200.8.44:22-10.200.16.10:45002.service - OpenSSH per-connection server daemon (10.200.16.10:45002). Jun 21 04:45:57.040372 sshd[2134]: Accepted publickey for core from 10.200.16.10 port 45002 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:57.041590 sshd-session[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:57.045825 systemd-logind[1698]: New session 8 of user core. Jun 21 04:45:57.051453 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 04:45:57.383336 sudo[2138]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 04:45:57.383549 sudo[2138]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:45:57.388967 sudo[2138]: pam_unix(sudo:session): session closed for user root Jun 21 04:45:57.392377 sudo[2137]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 04:45:57.392562 sudo[2137]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:45:57.399046 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 04:45:57.426130 augenrules[2160]: No rules Jun 21 04:45:57.426560 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 04:45:57.426721 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 04:45:57.427461 sudo[2137]: pam_unix(sudo:session): session closed for user root Jun 21 04:45:57.527067 sshd[2136]: Connection closed by 10.200.16.10 port 45002 Jun 21 04:45:57.527529 sshd-session[2134]: pam_unix(sshd:session): session closed for user core Jun 21 04:45:57.530305 systemd[1]: sshd@5-10.200.8.44:22-10.200.16.10:45002.service: Deactivated successfully. Jun 21 04:45:57.532005 systemd-logind[1698]: Session 8 logged out. Waiting for processes to exit. Jun 21 04:45:57.532206 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 04:45:57.533412 systemd-logind[1698]: Removed session 8. Jun 21 04:45:57.637150 systemd[1]: Started sshd@6-10.200.8.44:22-10.200.16.10:45012.service - OpenSSH per-connection server daemon (10.200.16.10:45012). Jun 21 04:45:58.267625 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 45012 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:45:58.268949 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:45:58.273180 systemd-logind[1698]: New session 9 of user core. Jun 21 04:45:58.279481 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 21 04:45:58.610994 sudo[2172]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 04:45:58.611190 sudo[2172]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 04:46:00.051551 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 04:46:00.060669 (dockerd)[2192]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 04:46:00.671662 dockerd[2192]: time="2025-06-21T04:46:00.671613212Z" level=info msg="Starting up" Jun 21 04:46:00.672603 dockerd[2192]: time="2025-06-21T04:46:00.672571467Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 04:46:00.760591 dockerd[2192]: time="2025-06-21T04:46:00.760560433Z" level=info msg="Loading containers: start." Jun 21 04:46:00.785365 kernel: Initializing XFRM netlink socket Jun 21 04:46:01.014549 systemd-networkd[1357]: docker0: Link UP Jun 21 04:46:01.027606 dockerd[2192]: time="2025-06-21T04:46:01.027583848Z" level=info msg="Loading containers: done." Jun 21 04:46:01.036968 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2009881507-merged.mount: Deactivated successfully. Jun 21 04:46:01.047628 dockerd[2192]: time="2025-06-21T04:46:01.047596881Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 04:46:01.047713 dockerd[2192]: time="2025-06-21T04:46:01.047654034Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 04:46:01.047741 dockerd[2192]: time="2025-06-21T04:46:01.047731245Z" level=info msg="Initializing buildkit" Jun 21 04:46:01.087169 dockerd[2192]: time="2025-06-21T04:46:01.087146776Z" level=info msg="Completed buildkit initialization" Jun 21 04:46:01.092347 dockerd[2192]: time="2025-06-21T04:46:01.092308642Z" level=info msg="Daemon has completed initialization" Jun 21 04:46:01.092475 dockerd[2192]: time="2025-06-21T04:46:01.092387615Z" level=info msg="API listen on /run/docker.sock" Jun 21 04:46:01.092489 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 04:46:01.874944 containerd[1720]: time="2025-06-21T04:46:01.874910149Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 21 04:46:02.748250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979375108.mount: Deactivated successfully. 
Jun 21 04:46:03.768863 containerd[1720]: time="2025-06-21T04:46:03.768822829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:03.770933 containerd[1720]: time="2025-06-21T04:46:03.770903667Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107" Jun 21 04:46:03.773742 containerd[1720]: time="2025-06-21T04:46:03.773709522Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:03.776919 containerd[1720]: time="2025-06-21T04:46:03.776876364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:03.777520 containerd[1720]: time="2025-06-21T04:46:03.777371009Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.902422251s" Jun 21 04:46:03.777520 containerd[1720]: time="2025-06-21T04:46:03.777397570Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jun 21 04:46:03.777979 containerd[1720]: time="2025-06-21T04:46:03.777962356Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 21 04:46:05.058426 containerd[1720]: time="2025-06-21T04:46:05.058390155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:05.063901 containerd[1720]: time="2025-06-21T04:46:05.063870445Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954" Jun 21 04:46:05.066425 containerd[1720]: time="2025-06-21T04:46:05.066390135Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:05.069899 containerd[1720]: time="2025-06-21T04:46:05.069859149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:05.070514 containerd[1720]: time="2025-06-21T04:46:05.070388254Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.292360712s" Jun 21 04:46:05.070514 containerd[1720]: time="2025-06-21T04:46:05.070414496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jun 21 04:46:05.070965 
containerd[1720]: time="2025-06-21T04:46:05.070944765Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 21 04:46:05.613562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 21 04:46:05.615625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:46:06.151391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:46:06.158814 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 04:46:06.199359 kubelet[2460]: E0621 04:46:06.199312 2460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 04:46:06.201313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 04:46:06.201456 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 04:46:06.201785 systemd[1]: kubelet.service: Consumed 126ms CPU time, 109.5M memory peak. Jun 21 04:46:06.392356 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 21 04:46:06.453692 containerd[1720]: time="2025-06-21T04:46:06.453625631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:06.455895 containerd[1720]: time="2025-06-21T04:46:06.455867009Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063" Jun 21 04:46:06.458665 containerd[1720]: time="2025-06-21T04:46:06.458628452Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:06.463790 containerd[1720]: time="2025-06-21T04:46:06.463746382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:06.464358 containerd[1720]: time="2025-06-21T04:46:06.464101558Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.393128948s" Jun 21 04:46:06.464358 containerd[1720]: time="2025-06-21T04:46:06.464132483Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jun 21 04:46:06.464771 containerd[1720]: time="2025-06-21T04:46:06.464748780Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 21 04:46:06.751156 update_engine[1701]: I20250621 04:46:06.751065 1701 update_attempter.cc:509] Updating boot flags... Jun 21 04:46:07.333933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1855702140.mount: Deactivated successfully. 
Jun 21 04:46:07.657969 containerd[1720]: time="2025-06-21T04:46:07.657936761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:07.661850 containerd[1720]: time="2025-06-21T04:46:07.661819064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754" Jun 21 04:46:07.668351 containerd[1720]: time="2025-06-21T04:46:07.668313801Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:07.672548 containerd[1720]: time="2025-06-21T04:46:07.672498209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:07.672767 containerd[1720]: time="2025-06-21T04:46:07.672749120Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.207974081s" Jun 21 04:46:07.672804 containerd[1720]: time="2025-06-21T04:46:07.672776368Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jun 21 04:46:07.673385 containerd[1720]: time="2025-06-21T04:46:07.673366411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 21 04:46:08.259727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455679495.mount: Deactivated successfully. 
Jun 21 04:46:09.114914 containerd[1720]: time="2025-06-21T04:46:09.114876227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:09.120026 containerd[1720]: time="2025-06-21T04:46:09.120003016Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jun 21 04:46:09.122548 containerd[1720]: time="2025-06-21T04:46:09.122512080Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:09.126018 containerd[1720]: time="2025-06-21T04:46:09.125980210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:09.126797 containerd[1720]: time="2025-06-21T04:46:09.126611578Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.453221326s" Jun 21 04:46:09.126797 containerd[1720]: time="2025-06-21T04:46:09.126639168Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jun 21 04:46:09.127283 containerd[1720]: time="2025-06-21T04:46:09.127260678Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 04:46:09.629554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757197751.mount: Deactivated successfully. 
Jun 21 04:46:09.645239 containerd[1720]: time="2025-06-21T04:46:09.645214850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:46:09.647773 containerd[1720]: time="2025-06-21T04:46:09.647746595Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 21 04:46:09.650796 containerd[1720]: time="2025-06-21T04:46:09.650751731Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:46:09.654493 containerd[1720]: time="2025-06-21T04:46:09.654458095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 04:46:09.654888 containerd[1720]: time="2025-06-21T04:46:09.654792815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 527.508737ms" Jun 21 04:46:09.654888 containerd[1720]: time="2025-06-21T04:46:09.654814930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 04:46:09.655339 containerd[1720]: time="2025-06-21T04:46:09.655313146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 21 04:46:10.235146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265068619.mount: Deactivated successfully. 
Jun 21 04:46:11.704697 containerd[1720]: time="2025-06-21T04:46:11.704657986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:11.706972 containerd[1720]: time="2025-06-21T04:46:11.706942130Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183" Jun 21 04:46:11.709544 containerd[1720]: time="2025-06-21T04:46:11.709510173Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:11.713321 containerd[1720]: time="2025-06-21T04:46:11.713279864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:11.713910 containerd[1720]: time="2025-06-21T04:46:11.713888884Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.058552631s" Jun 21 04:46:11.713952 containerd[1720]: time="2025-06-21T04:46:11.713918522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jun 21 04:46:14.299419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:46:14.299558 systemd[1]: kubelet.service: Consumed 126ms CPU time, 109.5M memory peak. Jun 21 04:46:14.301709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:46:14.322419 systemd[1]: Reload requested from client PID 2637 ('systemctl') (unit session-9.scope)... Jun 21 04:46:14.322429 systemd[1]: Reloading... Jun 21 04:46:14.393925 zram_generator::config[2683]: No configuration found. Jun 21 04:46:14.467915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:46:14.552427 systemd[1]: Reloading finished in 229 ms. Jun 21 04:46:14.579759 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 04:46:14.579834 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 04:46:14.580046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:46:14.581260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:46:15.198299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:46:15.205607 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 04:46:15.235951 kubelet[2750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:46:15.236143 kubelet[2750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jun 21 04:46:15.236143 kubelet[2750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:46:15.236143 kubelet[2750]: I0621 04:46:15.236016 2750 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 04:46:15.469906 kubelet[2750]: I0621 04:46:15.469841 2750 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 21 04:46:15.469906 kubelet[2750]: I0621 04:46:15.469859 2750 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 04:46:15.470186 kubelet[2750]: I0621 04:46:15.470050 2750 server.go:956] "Client rotation is on, will bootstrap in background" Jun 21 04:46:15.495483 kubelet[2750]: E0621 04:46:15.495457 2750 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 21 04:46:15.498660 kubelet[2750]: I0621 04:46:15.498638 2750 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 04:46:15.508937 kubelet[2750]: I0621 04:46:15.508899 2750 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 04:46:15.512584 kubelet[2750]: I0621 04:46:15.512568 2750 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 04:46:15.512759 kubelet[2750]: I0621 04:46:15.512744 2750 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 04:46:15.512897 kubelet[2750]: I0621 04:46:15.512760 2750 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-a-59b94489dc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 04:46:15.513004 kubelet[2750]: I0621 04:46:15.512900 2750 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 04:46:15.513004 kubelet[2750]: I0621 04:46:15.512909 2750 container_manager_linux.go:303] "Creating device plugin manager" Jun 21 04:46:15.513725 kubelet[2750]: I0621 04:46:15.513710 2750 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:46:15.516497 kubelet[2750]: I0621 04:46:15.516315 2750 kubelet.go:480] "Attempting to sync node with API server" Jun 21 04:46:15.516497 kubelet[2750]: I0621 04:46:15.516332 2750 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 04:46:15.516497 kubelet[2750]: I0621 04:46:15.516366 2750 kubelet.go:386] "Adding apiserver pod source" Jun 21 04:46:15.516497 kubelet[2750]: I0621 04:46:15.516378 2750 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 04:46:15.521223 kubelet[2750]: E0621 04:46:15.521198 2750 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.0-a-59b94489dc&limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 21 04:46:15.522801 kubelet[2750]: E0621 04:46:15.522360 2750 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jun 21 04:46:15.522801 kubelet[2750]: I0621 04:46:15.522430 2750 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 04:46:15.522940 kubelet[2750]: I0621 04:46:15.522907 2750 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 21 04:46:15.524569 kubelet[2750]: W0621 04:46:15.523891 2750 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 04:46:15.526196 kubelet[2750]: I0621 04:46:15.526181 2750 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 04:46:15.526251 kubelet[2750]: I0621 04:46:15.526228 2750 server.go:1289] "Started kubelet" Jun 21 04:46:15.527393 kubelet[2750]: I0621 04:46:15.527370 2750 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 04:46:15.528076 kubelet[2750]: I0621 04:46:15.528053 2750 server.go:317] "Adding debug handlers to kubelet server" Jun 21 04:46:15.531792 kubelet[2750]: I0621 04:46:15.531401 2750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 04:46:15.531792 kubelet[2750]: I0621 04:46:15.531515 2750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 04:46:15.531792 kubelet[2750]: I0621 04:46:15.531571 2750 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 04:46:15.532841 kubelet[2750]: E0621 04:46:15.531693 2750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.0-a-59b94489dc.184af55acd3f52f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.0-a-59b94489dc,UID:ci-4372.0.0-a-59b94489dc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.0-a-59b94489dc,},FirstTimestamp:2025-06-21 04:46:15.526200052 +0000 UTC m=+0.317417131,LastTimestamp:2025-06-21 04:46:15.526200052 +0000 UTC m=+0.317417131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.0-a-59b94489dc,}" Jun 21 04:46:15.533818 kubelet[2750]: I0621 04:46:15.533804 2750 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 04:46:15.536865 kubelet[2750]: E0621 04:46:15.535822 2750 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-59b94489dc\" not found" Jun 21 04:46:15.536865 kubelet[2750]: I0621 04:46:15.535845 2750 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 04:46:15.536865 kubelet[2750]: I0621 04:46:15.536006 2750 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 04:46:15.536865 kubelet[2750]: I0621 04:46:15.536043 2750 reconciler.go:26] "Reconciler: start to sync state" Jun 21 04:46:15.536865 kubelet[2750]: E0621 04:46:15.536309 2750 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 21 04:46:15.536865 kubelet[2750]: E0621 04:46:15.536509 2750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-59b94489dc?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="200ms" Jun 21 04:46:15.537214 kubelet[2750]: E0621 04:46:15.537202 2750 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 04:46:15.538423 kubelet[2750]: I0621 04:46:15.538412 2750 factory.go:223] Registration of the containerd container factory successfully Jun 21 04:46:15.538505 kubelet[2750]: I0621 04:46:15.538499 2750 factory.go:223] Registration of the systemd container factory successfully Jun 21 04:46:15.538589 kubelet[2750]: I0621 04:46:15.538580 2750 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 04:46:15.559434 kubelet[2750]: I0621 04:46:15.559419 2750 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 04:46:15.559434 kubelet[2750]: I0621 04:46:15.559429 2750 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 04:46:15.559517 kubelet[2750]: I0621 04:46:15.559441 2750 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:46:15.564704 kubelet[2750]: I0621 04:46:15.564688 2750 policy_none.go:49] "None policy: Start" Jun 21 04:46:15.564704 kubelet[2750]: I0621 04:46:15.564708 2750 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 04:46:15.564776 kubelet[2750]: I0621 04:46:15.564716 2750 state_mem.go:35] "Initializing new in-memory state store" Jun 21 04:46:15.565290 kubelet[2750]: I0621 04:46:15.565274 2750 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 21 04:46:15.567151 kubelet[2750]: I0621 04:46:15.567136 2750 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 21 04:46:15.567984 kubelet[2750]: I0621 04:46:15.567193 2750 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 21 04:46:15.567984 kubelet[2750]: I0621 04:46:15.567207 2750 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 04:46:15.567984 kubelet[2750]: I0621 04:46:15.567213 2750 kubelet.go:2436] "Starting kubelet main sync loop" Jun 21 04:46:15.567984 kubelet[2750]: E0621 04:46:15.567242 2750 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 04:46:15.568718 kubelet[2750]: E0621 04:46:15.568698 2750 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 21 04:46:15.572511 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 04:46:15.583927 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 21 04:46:15.586107 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 21 04:46:15.595805 kubelet[2750]: E0621 04:46:15.595790 2750 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 21 04:46:15.596129 kubelet[2750]: I0621 04:46:15.595912 2750 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 04:46:15.596129 kubelet[2750]: I0621 04:46:15.595919 2750 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 04:46:15.596129 kubelet[2750]: I0621 04:46:15.596035 2750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 04:46:15.597051 kubelet[2750]: E0621 04:46:15.597036 2750 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 04:46:15.597192 kubelet[2750]: E0621 04:46:15.597146 2750 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.0-a-59b94489dc\" not found" Jun 21 04:46:15.677407 systemd[1]: Created slice kubepods-burstable-podd6284904e985d38368c6c5f0239e0ddc.slice - libcontainer container kubepods-burstable-podd6284904e985d38368c6c5f0239e0ddc.slice. Jun 21 04:46:15.685906 kubelet[2750]: E0621 04:46:15.685875 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.689464 systemd[1]: Created slice kubepods-burstable-pod8eed4f9ad4bdc255aba88b7c8d078d58.slice - libcontainer container kubepods-burstable-pod8eed4f9ad4bdc255aba88b7c8d078d58.slice. Jun 21 04:46:15.697865 kubelet[2750]: E0621 04:46:15.697334 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.697865 kubelet[2750]: I0621 04:46:15.697569 2750 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.697865 kubelet[2750]: E0621 04:46:15.697821 2750 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.698854 systemd[1]: Created slice kubepods-burstable-pod1caf2aa3e0a0c22c6f5270cb74513cf0.slice - libcontainer container kubepods-burstable-pod1caf2aa3e0a0c22c6f5270cb74513cf0.slice. 
Jun 21 04:46:15.700164 kubelet[2750]: E0621 04:46:15.700147 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737508 kubelet[2750]: I0621 04:46:15.737455 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737794 kubelet[2750]: I0621 04:46:15.737616 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737794 kubelet[2750]: I0621 04:46:15.737636 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737794 kubelet[2750]: I0621 04:46:15.737654 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737794 kubelet[2750]: I0621 04:46:15.737670 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6284904e985d38368c6c5f0239e0ddc-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-a-59b94489dc\" (UID: \"d6284904e985d38368c6c5f0239e0ddc\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737794 kubelet[2750]: I0621 04:46:15.737684 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737915 kubelet[2750]: I0621 04:46:15.737699 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1caf2aa3e0a0c22c6f5270cb74513cf0-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-a-59b94489dc\" (UID: \"1caf2aa3e0a0c22c6f5270cb74513cf0\") " pod="kube-system/kube-scheduler-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737915 kubelet[2750]: I0621 04:46:15.737714 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6284904e985d38368c6c5f0239e0ddc-k8s-certs\") pod 
\"kube-apiserver-ci-4372.0.0-a-59b94489dc\" (UID: \"d6284904e985d38368c6c5f0239e0ddc\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737915 kubelet[2750]: I0621 04:46:15.737728 2750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6284904e985d38368c6c5f0239e0ddc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-a-59b94489dc\" (UID: \"d6284904e985d38368c6c5f0239e0ddc\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.737915 kubelet[2750]: E0621 04:46:15.737558 2750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-59b94489dc?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="400ms" Jun 21 04:46:15.899393 kubelet[2750]: I0621 04:46:15.899327 2750 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.899688 kubelet[2750]: E0621 04:46:15.899663 2750 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:15.986946 containerd[1720]: time="2025-06-21T04:46:15.986907522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-a-59b94489dc,Uid:d6284904e985d38368c6c5f0239e0ddc,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:15.998494 containerd[1720]: time="2025-06-21T04:46:15.998282295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-a-59b94489dc,Uid:8eed4f9ad4bdc255aba88b7c8d078d58,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:16.002377 containerd[1720]: time="2025-06-21T04:46:16.002308050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-a-59b94489dc,Uid:1caf2aa3e0a0c22c6f5270cb74513cf0,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:16.078357 containerd[1720]: time="2025-06-21T04:46:16.077986085Z" level=info msg="connecting to shim 268b010fdc2de898a713c287af065a8555f114a024aaf634a9e62c873687f3e5" address="unix:///run/containerd/s/a3c571b8560e4ff33ff9d3d7d76aa493f4bc47d5baeec74b9f0ad3098e9c6345" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:16.078357 containerd[1720]: time="2025-06-21T04:46:16.078009359Z" level=info msg="connecting to shim 7daafa57f8a18008702429a6afc022a65e48b57af72c9d21086bf5c534b8f26e" address="unix:///run/containerd/s/17a39cf8396f962b1df688ed50c0ba7ece572e64e152bcb85b1013579fb730a4" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:16.097540 containerd[1720]: time="2025-06-21T04:46:16.097509615Z" level=info msg="connecting to shim 12e9e65e31335d4d4a05d647f68cba3ff4e2a3e2eddab17a6464116d7094134e" address="unix:///run/containerd/s/6600a6fcf0204ee17a7ce5f117ab92f41dbca6e6d10d3bc0254857573266edd7" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:16.101561 systemd[1]: Started cri-containerd-268b010fdc2de898a713c287af065a8555f114a024aaf634a9e62c873687f3e5.scope - libcontainer container 268b010fdc2de898a713c287af065a8555f114a024aaf634a9e62c873687f3e5. Jun 21 04:46:16.119541 systemd[1]: Started cri-containerd-7daafa57f8a18008702429a6afc022a65e48b57af72c9d21086bf5c534b8f26e.scope - libcontainer container 7daafa57f8a18008702429a6afc022a65e48b57af72c9d21086bf5c534b8f26e. 
Jun 21 04:46:16.131485 systemd[1]: Started cri-containerd-12e9e65e31335d4d4a05d647f68cba3ff4e2a3e2eddab17a6464116d7094134e.scope - libcontainer container 12e9e65e31335d4d4a05d647f68cba3ff4e2a3e2eddab17a6464116d7094134e. Jun 21 04:46:16.139014 kubelet[2750]: E0621 04:46:16.138987 2750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.0-a-59b94489dc?timeout=10s\": dial tcp 10.200.8.44:6443: connect: connection refused" interval="800ms" Jun 21 04:46:16.164324 containerd[1720]: time="2025-06-21T04:46:16.164299362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.0-a-59b94489dc,Uid:d6284904e985d38368c6c5f0239e0ddc,Namespace:kube-system,Attempt:0,} returns sandbox id \"268b010fdc2de898a713c287af065a8555f114a024aaf634a9e62c873687f3e5\"" Jun 21 04:46:16.171758 containerd[1720]: time="2025-06-21T04:46:16.171736379Z" level=info msg="CreateContainer within sandbox \"268b010fdc2de898a713c287af065a8555f114a024aaf634a9e62c873687f3e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 04:46:16.182914 containerd[1720]: time="2025-06-21T04:46:16.182888794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.0-a-59b94489dc,Uid:8eed4f9ad4bdc255aba88b7c8d078d58,Namespace:kube-system,Attempt:0,} returns sandbox id \"7daafa57f8a18008702429a6afc022a65e48b57af72c9d21086bf5c534b8f26e\"" Jun 21 04:46:16.190369 containerd[1720]: time="2025-06-21T04:46:16.189476347Z" level=info msg="Container 038b42c087c830784ce3d53f38cef38e00e48aea5a0cae3b510df078633ff1d1: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:16.191251 containerd[1720]: time="2025-06-21T04:46:16.191228046Z" level=info msg="CreateContainer within sandbox \"7daafa57f8a18008702429a6afc022a65e48b57af72c9d21086bf5c534b8f26e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 04:46:16.209950 containerd[1720]: time="2025-06-21T04:46:16.209930914Z" level=info msg="CreateContainer within sandbox \"268b010fdc2de898a713c287af065a8555f114a024aaf634a9e62c873687f3e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"038b42c087c830784ce3d53f38cef38e00e48aea5a0cae3b510df078633ff1d1\"" Jun 21 04:46:16.210383 containerd[1720]: time="2025-06-21T04:46:16.210365488Z" level=info msg="StartContainer for \"038b42c087c830784ce3d53f38cef38e00e48aea5a0cae3b510df078633ff1d1\"" Jun 21 04:46:16.211122 containerd[1720]: time="2025-06-21T04:46:16.211101100Z" level=info msg="connecting to shim 038b42c087c830784ce3d53f38cef38e00e48aea5a0cae3b510df078633ff1d1" address="unix:///run/containerd/s/a3c571b8560e4ff33ff9d3d7d76aa493f4bc47d5baeec74b9f0ad3098e9c6345" protocol=ttrpc version=3 Jun 21 04:46:16.216077 containerd[1720]: time="2025-06-21T04:46:16.216018206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.0-a-59b94489dc,Uid:1caf2aa3e0a0c22c6f5270cb74513cf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"12e9e65e31335d4d4a05d647f68cba3ff4e2a3e2eddab17a6464116d7094134e\"" Jun 21 04:46:16.222798 containerd[1720]: time="2025-06-21T04:46:16.222779251Z" level=info msg="Container 6a92f3c8539f755923c1fd2d8380c9914be5507aaa73b5936b855e35c64e40f7: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:16.224143 containerd[1720]: time="2025-06-21T04:46:16.224125108Z" level=info msg="CreateContainer within sandbox 
\"12e9e65e31335d4d4a05d647f68cba3ff4e2a3e2eddab17a6464116d7094134e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 04:46:16.225467 systemd[1]: Started cri-containerd-038b42c087c830784ce3d53f38cef38e00e48aea5a0cae3b510df078633ff1d1.scope - libcontainer container 038b42c087c830784ce3d53f38cef38e00e48aea5a0cae3b510df078633ff1d1. Jun 21 04:46:16.234503 containerd[1720]: time="2025-06-21T04:46:16.234460637Z" level=info msg="CreateContainer within sandbox \"7daafa57f8a18008702429a6afc022a65e48b57af72c9d21086bf5c534b8f26e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a92f3c8539f755923c1fd2d8380c9914be5507aaa73b5936b855e35c64e40f7\"" Jun 21 04:46:16.234948 containerd[1720]: time="2025-06-21T04:46:16.234864049Z" level=info msg="StartContainer for \"6a92f3c8539f755923c1fd2d8380c9914be5507aaa73b5936b855e35c64e40f7\"" Jun 21 04:46:16.236063 containerd[1720]: time="2025-06-21T04:46:16.236008730Z" level=info msg="connecting to shim 6a92f3c8539f755923c1fd2d8380c9914be5507aaa73b5936b855e35c64e40f7" address="unix:///run/containerd/s/17a39cf8396f962b1df688ed50c0ba7ece572e64e152bcb85b1013579fb730a4" protocol=ttrpc version=3 Jun 21 04:46:16.243833 containerd[1720]: time="2025-06-21T04:46:16.243813604Z" level=info msg="Container 1b04e5226cef6959f4905eb48b5440d21a685f2f7c5c5cd0240318e498a1d62d: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:16.253452 systemd[1]: Started cri-containerd-6a92f3c8539f755923c1fd2d8380c9914be5507aaa73b5936b855e35c64e40f7.scope - libcontainer container 6a92f3c8539f755923c1fd2d8380c9914be5507aaa73b5936b855e35c64e40f7. Jun 21 04:46:16.262716 containerd[1720]: time="2025-06-21T04:46:16.262682285Z" level=info msg="CreateContainer within sandbox \"12e9e65e31335d4d4a05d647f68cba3ff4e2a3e2eddab17a6464116d7094134e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b04e5226cef6959f4905eb48b5440d21a685f2f7c5c5cd0240318e498a1d62d\"" Jun 21 04:46:16.263521 containerd[1720]: time="2025-06-21T04:46:16.263483104Z" level=info msg="StartContainer for \"1b04e5226cef6959f4905eb48b5440d21a685f2f7c5c5cd0240318e498a1d62d\"" Jun 21 04:46:16.265625 containerd[1720]: time="2025-06-21T04:46:16.265604584Z" level=info msg="connecting to shim 1b04e5226cef6959f4905eb48b5440d21a685f2f7c5c5cd0240318e498a1d62d" address="unix:///run/containerd/s/6600a6fcf0204ee17a7ce5f117ab92f41dbca6e6d10d3bc0254857573266edd7" protocol=ttrpc version=3 Jun 21 04:46:16.283866 systemd[1]: Started cri-containerd-1b04e5226cef6959f4905eb48b5440d21a685f2f7c5c5cd0240318e498a1d62d.scope - libcontainer container 1b04e5226cef6959f4905eb48b5440d21a685f2f7c5c5cd0240318e498a1d62d. 
Jun 21 04:46:16.290661 containerd[1720]: time="2025-06-21T04:46:16.290621946Z" level=info msg="StartContainer for \"038b42c087c830784ce3d53f38cef38e00e48aea5a0cae3b510df078633ff1d1\" returns successfully" Jun 21 04:46:16.305868 kubelet[2750]: I0621 04:46:16.305560 2750 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:16.305868 kubelet[2750]: E0621 04:46:16.305812 2750 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.44:6443/api/v1/nodes\": dial tcp 10.200.8.44:6443: connect: connection refused" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:16.329249 containerd[1720]: time="2025-06-21T04:46:16.329227525Z" level=info msg="StartContainer for \"6a92f3c8539f755923c1fd2d8380c9914be5507aaa73b5936b855e35c64e40f7\" returns successfully" Jun 21 04:46:16.366830 containerd[1720]: time="2025-06-21T04:46:16.366771936Z" level=info msg="StartContainer for \"1b04e5226cef6959f4905eb48b5440d21a685f2f7c5c5cd0240318e498a1d62d\" returns successfully" Jun 21 04:46:16.574323 kubelet[2750]: E0621 04:46:16.574110 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:16.585127 kubelet[2750]: E0621 04:46:16.585113 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:16.589888 kubelet[2750]: E0621 04:46:16.589708 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:17.109016 kubelet[2750]: I0621 04:46:17.108582 2750 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:17.593172 kubelet[2750]: E0621 04:46:17.593013 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:17.594649 kubelet[2750]: E0621 04:46:17.594314 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:17.594649 kubelet[2750]: E0621 04:46:17.594567 2750 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:18.639284 kubelet[2750]: E0621 04:46:18.639244 2750 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.0-a-59b94489dc\" not found" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:18.744443 kubelet[2750]: I0621 04:46:18.744391 2750 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:18.744443 kubelet[2750]: E0621 04:46:18.744415 2750 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.0-a-59b94489dc\": node \"ci-4372.0.0-a-59b94489dc\" not found" Jun 21 04:46:18.836948 kubelet[2750]: I0621 04:46:18.836923 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:18.863046 kubelet[2750]: E0621 04:46:18.863024 2750 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.0-a-59b94489dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:18.863046 kubelet[2750]: I0621 04:46:18.863046 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:18.869541 kubelet[2750]: E0621 04:46:18.869507 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-a-59b94489dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:18.869541 kubelet[2750]: I0621 04:46:18.869540 2750 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:18.872555 kubelet[2750]: E0621 04:46:18.872527 2750 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:19.523405 kubelet[2750]: I0621 04:46:19.523385 2750 apiserver.go:52] "Watching apiserver" Jun 21 04:46:19.536429 kubelet[2750]: I0621 04:46:19.536409 2750 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 04:46:20.645655 systemd[1]: Reload requested from client PID 3029 ('systemctl') (unit session-9.scope)... Jun 21 04:46:20.645669 systemd[1]: Reloading... Jun 21 04:46:20.718431 zram_generator::config[3078]: No configuration found. Jun 21 04:46:20.786835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 04:46:20.882310 systemd[1]: Reloading finished in 236 ms. Jun 21 04:46:20.906913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:46:20.907467 kubelet[2750]: I0621 04:46:20.907107 2750 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 04:46:20.922076 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 04:46:20.922280 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:46:20.922323 systemd[1]: kubelet.service: Consumed 565ms CPU time, 128.6M memory peak. Jun 21 04:46:20.923742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 04:46:21.424129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 04:46:21.430074 (kubelet)[3142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 04:46:21.465017 kubelet[3142]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:46:21.465017 kubelet[3142]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 04:46:21.465017 kubelet[3142]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 04:46:21.465250 kubelet[3142]: I0621 04:46:21.465054 3142 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 04:46:21.471552 kubelet[3142]: I0621 04:46:21.471530 3142 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 21 04:46:21.471552 kubelet[3142]: I0621 04:46:21.471546 3142 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 04:46:21.471730 kubelet[3142]: I0621 04:46:21.471720 3142 server.go:956] "Client rotation is on, will bootstrap in background" Jun 21 04:46:21.472434 kubelet[3142]: I0621 04:46:21.472420 3142 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 21 04:46:21.474387 kubelet[3142]: I0621 04:46:21.473890 3142 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 04:46:21.480371 kubelet[3142]: I0621 04:46:21.480332 3142 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 04:46:21.482814 kubelet[3142]: I0621 04:46:21.482797 3142 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 04:46:21.482959 kubelet[3142]: I0621 04:46:21.482932 3142 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 04:46:21.483087 kubelet[3142]: I0621 04:46:21.482955 3142 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.0-a-59b94489dc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 04:46:21.483185 kubelet[3142]: I0621 04:46:21.483090 3142 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 04:46:21.483185 kubelet[3142]: I0621 04:46:21.483098 3142 container_manager_linux.go:303] "Creating device plugin manager" Jun 21 04:46:21.483185 kubelet[3142]: I0621 04:46:21.483132 
3142 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:46:21.483259 kubelet[3142]: I0621 04:46:21.483234 3142 kubelet.go:480] "Attempting to sync node with API server" Jun 21 04:46:21.483259 kubelet[3142]: I0621 04:46:21.483243 3142 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 04:46:21.483379 kubelet[3142]: I0621 04:46:21.483260 3142 kubelet.go:386] "Adding apiserver pod source" Jun 21 04:46:21.483379 kubelet[3142]: I0621 04:46:21.483270 3142 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 04:46:21.488658 kubelet[3142]: I0621 04:46:21.488643 3142 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 04:46:21.489900 kubelet[3142]: I0621 04:46:21.489156 3142 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 21 04:46:21.492364 kubelet[3142]: I0621 04:46:21.491963 3142 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 04:46:21.492364 kubelet[3142]: I0621 04:46:21.492002 3142 server.go:1289] "Started kubelet" Jun 21 04:46:21.497428 kubelet[3142]: I0621 04:46:21.495646 3142 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 04:46:21.497428 kubelet[3142]: I0621 04:46:21.496501 3142 server.go:317] "Adding debug handlers to kubelet server" Jun 21 04:46:21.498238 kubelet[3142]: I0621 04:46:21.498226 3142 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 04:46:21.500152 kubelet[3142]: I0621 04:46:21.500117 3142 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 04:46:21.500269 kubelet[3142]: I0621 04:46:21.500260 3142 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 04:46:21.504764 kubelet[3142]: I0621 04:46:21.504746 3142 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 04:46:21.505096 kubelet[3142]: I0621 04:46:21.505086 3142 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 04:46:21.505898 kubelet[3142]: I0621 04:46:21.505883 3142 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 04:46:21.506184 kubelet[3142]: I0621 04:46:21.505985 3142 reconciler.go:26] "Reconciler: start to sync state" Jun 21 04:46:21.509709 kubelet[3142]: E0621 04:46:21.509665 3142 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.0-a-59b94489dc\" not found" Jun 21 04:46:21.514567 kubelet[3142]: I0621 04:46:21.514553 3142 factory.go:223] Registration of the containerd container factory successfully Jun 21 04:46:21.514664 kubelet[3142]: I0621 04:46:21.514657 3142 factory.go:223] Registration of the systemd container factory successfully Jun 21 04:46:21.514774 kubelet[3142]: I0621 04:46:21.514762 3142 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 04:46:21.517657 kubelet[3142]: E0621 04:46:21.517642 3142 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 04:46:21.518181 kubelet[3142]: I0621 04:46:21.518163 3142 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 21 04:46:21.520944 kubelet[3142]: I0621 04:46:21.520903 3142 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 21 04:46:21.520944 kubelet[3142]: I0621 04:46:21.520919 3142 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 21 04:46:21.521035 kubelet[3142]: I0621 04:46:21.520933 3142 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 04:46:21.521065 kubelet[3142]: I0621 04:46:21.521060 3142 kubelet.go:2436] "Starting kubelet main sync loop" Jun 21 04:46:21.521189 kubelet[3142]: E0621 04:46:21.521123 3142 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 04:46:21.547171 kubelet[3142]: I0621 04:46:21.547155 3142 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 04:46:21.547272 kubelet[3142]: I0621 04:46:21.547204 3142 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 04:46:21.547272 kubelet[3142]: I0621 04:46:21.547218 3142 state_mem.go:36] "Initialized new in-memory state store" Jun 21 04:46:21.547318 kubelet[3142]: I0621 04:46:21.547303 3142 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 04:46:21.547318 kubelet[3142]: I0621 04:46:21.547311 3142 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 04:46:21.547390 kubelet[3142]: I0621 04:46:21.547323 3142 policy_none.go:49] "None policy: Start" Jun 21 04:46:21.547390 kubelet[3142]: I0621 04:46:21.547332 3142 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 04:46:21.547390 kubelet[3142]: I0621 04:46:21.547339 3142 state_mem.go:35] "Initializing new in-memory state store" Jun 21 04:46:21.547461 kubelet[3142]: I0621 04:46:21.547426 3142 state_mem.go:75] "Updated machine memory state" Jun 21 04:46:21.549883 kubelet[3142]: E0621 04:46:21.549866 3142 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 21 04:46:21.549976 kubelet[3142]: I0621 04:46:21.549966 3142 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 04:46:21.550004 kubelet[3142]: I0621 04:46:21.549976 3142 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 04:46:21.550273 kubelet[3142]: I0621 04:46:21.550261 3142 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 04:46:21.551368 kubelet[3142]: E0621 04:46:21.551331 3142 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 21 04:46:21.624980 kubelet[3142]: I0621 04:46:21.624841 3142 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.624980 kubelet[3142]: I0621 04:46:21.624942 3142 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.625070 kubelet[3142]: I0621 04:46:21.624847 3142 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.631920 kubelet[3142]: I0621 04:46:21.631731 3142 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 21 04:46:21.631920 kubelet[3142]: I0621 04:46:21.631866 3142 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 21 04:46:21.634714 kubelet[3142]: I0621 04:46:21.634696 3142 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 21 04:46:21.652251 kubelet[3142]: I0621 04:46:21.652219 3142 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.660186 kubelet[3142]: I0621 04:46:21.660157 3142 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.660246 kubelet[3142]: I0621 04:46:21.660203 3142 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.709440 sudo[3182]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 21 04:46:21.709639 sudo[3182]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 21 04:46:21.807739 kubelet[3142]: I0621 04:46:21.807719 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6284904e985d38368c6c5f0239e0ddc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.0-a-59b94489dc\" (UID: \"d6284904e985d38368c6c5f0239e0ddc\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.807805 kubelet[3142]: I0621 04:46:21.807746 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-ca-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.807805 kubelet[3142]: I0621 04:46:21.807762 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.807805 kubelet[3142]: I0621 04:46:21.807777 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.807805 kubelet[3142]: I0621 04:46:21.807791 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1caf2aa3e0a0c22c6f5270cb74513cf0-kubeconfig\") pod \"kube-scheduler-ci-4372.0.0-a-59b94489dc\" (UID: \"1caf2aa3e0a0c22c6f5270cb74513cf0\") " pod="kube-system/kube-scheduler-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.807805 kubelet[3142]: I0621 04:46:21.807803 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6284904e985d38368c6c5f0239e0ddc-ca-certs\") pod \"kube-apiserver-ci-4372.0.0-a-59b94489dc\" (UID: \"d6284904e985d38368c6c5f0239e0ddc\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.807910 kubelet[3142]: I0621 04:46:21.807816 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6284904e985d38368c6c5f0239e0ddc-k8s-certs\") pod \"kube-apiserver-ci-4372.0.0-a-59b94489dc\" (UID: \"d6284904e985d38368c6c5f0239e0ddc\") " pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.807910 kubelet[3142]: I0621 04:46:21.807829 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:21.807910 kubelet[3142]: I0621 04:46:21.807845 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8eed4f9ad4bdc255aba88b7c8d078d58-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.0-a-59b94489dc\" (UID: \"8eed4f9ad4bdc255aba88b7c8d078d58\") " pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:22.157490 sudo[3182]: pam_unix(sudo:session): session closed for user root Jun 21 04:46:22.484571 kubelet[3142]: I0621 04:46:22.484507 3142 apiserver.go:52] "Watching apiserver" Jun 21 04:46:22.506293 kubelet[3142]: I0621 04:46:22.506195 3142 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 04:46:22.536973 kubelet[3142]: I0621 04:46:22.536674 3142 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:22.558413 kubelet[3142]: I0621 04:46:22.558397 3142 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 21 04:46:22.558564 kubelet[3142]: E0621 04:46:22.558553 3142 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.0-a-59b94489dc\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" Jun 21 04:46:22.560560 kubelet[3142]: I0621 04:46:22.560526 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4372.0.0-a-59b94489dc" podStartSLOduration=1.560501428 podStartE2EDuration="1.560501428s" podCreationTimestamp="2025-06-21 04:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:46:22.558096206 +0000 UTC m=+1.124018738" watchObservedRunningTime="2025-06-21 04:46:22.560501428 +0000 UTC m=+1.126423956" Jun 21 04:46:22.595908 kubelet[3142]: I0621 04:46:22.595836 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.0-a-59b94489dc" podStartSLOduration=1.595824501 podStartE2EDuration="1.595824501s" podCreationTimestamp="2025-06-21 04:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:46:22.580177252 +0000 UTC m=+1.146099786" watchObservedRunningTime="2025-06-21 04:46:22.595824501 +0000 UTC m=+1.161747031" Jun 21 04:46:22.604223 kubelet[3142]: I0621 04:46:22.604123 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.0-a-59b94489dc" podStartSLOduration=1.604112676 podStartE2EDuration="1.604112676s" podCreationTimestamp="2025-06-21 04:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:46:22.596127318 +0000 UTC m=+1.162049847" watchObservedRunningTime="2025-06-21 04:46:22.604112676 +0000 UTC m=+1.170035207" Jun 21 04:46:23.310382 sudo[2172]: pam_unix(sudo:session): session closed for user root Jun 21 04:46:23.411743 sshd[2171]: Connection closed by 10.200.16.10 port 45012 Jun 21 04:46:23.412003 sshd-session[2169]: pam_unix(sshd:session): session closed for user core Jun 21 04:46:23.415472 systemd[1]: sshd@6-10.200.8.44:22-10.200.16.10:45012.service: Deactivated successfully. Jun 21 04:46:23.417131 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 04:46:23.417296 systemd[1]: session-9.scope: Consumed 3.697s CPU time, 271.7M memory peak. Jun 21 04:46:23.418268 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. Jun 21 04:46:23.419469 systemd-logind[1698]: Removed session 9. Jun 21 04:46:27.884411 kubelet[3142]: I0621 04:46:27.884382 3142 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 04:46:27.885361 containerd[1720]: time="2025-06-21T04:46:27.884778422Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 04:46:27.885616 kubelet[3142]: I0621 04:46:27.885262 3142 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 04:46:28.856739 systemd[1]: Created slice kubepods-besteffort-pod588278a9_4bce_42b3_ae89_a252362b5d1e.slice - libcontainer container kubepods-besteffort-pod588278a9_4bce_42b3_ae89_a252362b5d1e.slice. Jun 21 04:46:28.870552 systemd[1]: Created slice kubepods-burstable-pod1dcc0aaf_8c88_43f6_b829_5bc216780669.slice - libcontainer container kubepods-burstable-pod1dcc0aaf_8c88_43f6_b829_5bc216780669.slice. 
Jun 21 04:46:28.950366 kubelet[3142]: I0621 04:46:28.950339 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-cgroup\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950604 kubelet[3142]: I0621 04:46:28.950387 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cni-path\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950604 kubelet[3142]: I0621 04:46:28.950406 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-xtables-lock\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950604 kubelet[3142]: I0621 04:46:28.950428 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1dcc0aaf-8c88-43f6-b829-5bc216780669-clustermesh-secrets\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950604 kubelet[3142]: I0621 04:46:28.950444 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-host-proc-sys-kernel\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950604 kubelet[3142]: I0621 04:46:28.950459 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2j9\" (UniqueName: \"kubernetes.io/projected/1dcc0aaf-8c88-43f6-b829-5bc216780669-kube-api-access-9p2j9\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950716 kubelet[3142]: I0621 04:46:28.950478 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/588278a9-4bce-42b3-ae89-a252362b5d1e-lib-modules\") pod \"kube-proxy-fnv77\" (UID: \"588278a9-4bce-42b3-ae89-a252362b5d1e\") " pod="kube-system/kube-proxy-fnv77" Jun 21 04:46:28.950716 kubelet[3142]: I0621 04:46:28.950491 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-run\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950716 kubelet[3142]: I0621 04:46:28.950505 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-hostproc\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950716 kubelet[3142]: I0621 04:46:28.950520 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-config-path\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950716 kubelet[3142]: I0621 04:46:28.950537 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1dcc0aaf-8c88-43f6-b829-5bc216780669-hubble-tls\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950716 kubelet[3142]: I0621 04:46:28.950552 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-etc-cni-netd\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950809 kubelet[3142]: I0621 04:46:28.950565 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-lib-modules\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950809 kubelet[3142]: I0621 04:46:28.950578 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-host-proc-sys-net\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:28.950809 kubelet[3142]: I0621 04:46:28.950592 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/588278a9-4bce-42b3-ae89-a252362b5d1e-kube-proxy\") pod \"kube-proxy-fnv77\" (UID: \"588278a9-4bce-42b3-ae89-a252362b5d1e\") " pod="kube-system/kube-proxy-fnv77" Jun 21 04:46:28.950809 kubelet[3142]: I0621 04:46:28.950605 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/588278a9-4bce-42b3-ae89-a252362b5d1e-xtables-lock\") pod \"kube-proxy-fnv77\" (UID: \"588278a9-4bce-42b3-ae89-a252362b5d1e\") " pod="kube-system/kube-proxy-fnv77" Jun 21 04:46:28.950809 kubelet[3142]: I0621 04:46:28.950621 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm25n\" (UniqueName: \"kubernetes.io/projected/588278a9-4bce-42b3-ae89-a252362b5d1e-kube-api-access-nm25n\") pod \"kube-proxy-fnv77\" (UID: \"588278a9-4bce-42b3-ae89-a252362b5d1e\") " pod="kube-system/kube-proxy-fnv77" Jun 21 04:46:28.950809 kubelet[3142]: I0621 04:46:28.950635 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-bpf-maps\") pod \"cilium-gbs5b\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " pod="kube-system/cilium-gbs5b" Jun 21 04:46:29.044096 systemd[1]: Created slice kubepods-besteffort-podfc5d880f_272e_4717_b882_56c5afda1f25.slice - libcontainer container kubepods-besteffort-podfc5d880f_272e_4717_b882_56c5afda1f25.slice. 
Jun 21 04:46:29.051769 kubelet[3142]: I0621 04:46:29.051740 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5d880f-272e-4717-b882-56c5afda1f25-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mg4lm\" (UID: \"fc5d880f-272e-4717-b882-56c5afda1f25\") " pod="kube-system/cilium-operator-6c4d7847fc-mg4lm" Jun 21 04:46:29.051849 kubelet[3142]: I0621 04:46:29.051778 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8wcf\" (UniqueName: \"kubernetes.io/projected/fc5d880f-272e-4717-b882-56c5afda1f25-kube-api-access-z8wcf\") pod \"cilium-operator-6c4d7847fc-mg4lm\" (UID: \"fc5d880f-272e-4717-b882-56c5afda1f25\") " pod="kube-system/cilium-operator-6c4d7847fc-mg4lm" Jun 21 04:46:29.167852 containerd[1720]: time="2025-06-21T04:46:29.167812519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnv77,Uid:588278a9-4bce-42b3-ae89-a252362b5d1e,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:29.174311 containerd[1720]: time="2025-06-21T04:46:29.174286866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gbs5b,Uid:1dcc0aaf-8c88-43f6-b829-5bc216780669,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:29.220374 containerd[1720]: time="2025-06-21T04:46:29.220023496Z" level=info msg="connecting to shim a2cfe3b940fc5209aaf90c62ac9505f602035389c746af59e2b0f5c7e784f70d" address="unix:///run/containerd/s/1cc41e55ae6087791cd226553eceba81bca8b51fb6485dd58bf7fb33457cad19" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:29.231621 containerd[1720]: time="2025-06-21T04:46:29.231583465Z" level=info msg="connecting to shim 8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e" address="unix:///run/containerd/s/dd45d69d5194730d069356d6c35a99398f8678c6573929bec93d9fdc4df07870" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:29.241607 systemd[1]: Started cri-containerd-a2cfe3b940fc5209aaf90c62ac9505f602035389c746af59e2b0f5c7e784f70d.scope - libcontainer container a2cfe3b940fc5209aaf90c62ac9505f602035389c746af59e2b0f5c7e784f70d. Jun 21 04:46:29.248797 systemd[1]: Started cri-containerd-8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e.scope - libcontainer container 8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e. 
Jun 21 04:46:29.271722 containerd[1720]: time="2025-06-21T04:46:29.271692927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnv77,Uid:588278a9-4bce-42b3-ae89-a252362b5d1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2cfe3b940fc5209aaf90c62ac9505f602035389c746af59e2b0f5c7e784f70d\"" Jun 21 04:46:29.275482 containerd[1720]: time="2025-06-21T04:46:29.275320169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gbs5b,Uid:1dcc0aaf-8c88-43f6-b829-5bc216780669,Namespace:kube-system,Attempt:0,} returns sandbox id \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\"" Jun 21 04:46:29.277906 containerd[1720]: time="2025-06-21T04:46:29.276812610Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 21 04:46:29.279859 containerd[1720]: time="2025-06-21T04:46:29.279840443Z" level=info msg="CreateContainer within sandbox \"a2cfe3b940fc5209aaf90c62ac9505f602035389c746af59e2b0f5c7e784f70d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 04:46:29.301744 containerd[1720]: time="2025-06-21T04:46:29.301721790Z" level=info msg="Container 1656842b361b178645e4dcc29e48e23d4321b7d6b56891b323ee2ef7272abe22: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:29.316096 containerd[1720]: time="2025-06-21T04:46:29.316074582Z" level=info msg="CreateContainer within sandbox \"a2cfe3b940fc5209aaf90c62ac9505f602035389c746af59e2b0f5c7e784f70d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1656842b361b178645e4dcc29e48e23d4321b7d6b56891b323ee2ef7272abe22\"" Jun 21 04:46:29.317136 containerd[1720]: time="2025-06-21T04:46:29.316482375Z" level=info msg="StartContainer for \"1656842b361b178645e4dcc29e48e23d4321b7d6b56891b323ee2ef7272abe22\"" Jun 21 04:46:29.317620 containerd[1720]: time="2025-06-21T04:46:29.317593044Z" level=info msg="connecting to shim 1656842b361b178645e4dcc29e48e23d4321b7d6b56891b323ee2ef7272abe22" address="unix:///run/containerd/s/1cc41e55ae6087791cd226553eceba81bca8b51fb6485dd58bf7fb33457cad19" protocol=ttrpc version=3 Jun 21 04:46:29.334478 systemd[1]: Started cri-containerd-1656842b361b178645e4dcc29e48e23d4321b7d6b56891b323ee2ef7272abe22.scope - libcontainer container 1656842b361b178645e4dcc29e48e23d4321b7d6b56891b323ee2ef7272abe22. Jun 21 04:46:29.349840 containerd[1720]: time="2025-06-21T04:46:29.349721772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mg4lm,Uid:fc5d880f-272e-4717-b882-56c5afda1f25,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:29.363430 containerd[1720]: time="2025-06-21T04:46:29.363376291Z" level=info msg="StartContainer for \"1656842b361b178645e4dcc29e48e23d4321b7d6b56891b323ee2ef7272abe22\" returns successfully" Jun 21 04:46:29.387592 containerd[1720]: time="2025-06-21T04:46:29.387557297Z" level=info msg="connecting to shim 552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd" address="unix:///run/containerd/s/3fc32a0cdc4dc7c62bce6d6bfad217cdce1ae4197adc3f43834ad947a98b0609" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:29.406611 systemd[1]: Started cri-containerd-552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd.scope - libcontainer container 552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd. 
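The PullImage request above fetches the Cilium image by digest through containerd. A minimal sketch of the same pull using containerd's Go client in the "k8s.io" namespace (the namespace the shim entries above use) is below; the socket path is the containerd default and an assumption.

```go
// Sketch: pulling the same Cilium image by digest through containerd's Go client,
// in the "k8s.io" namespace referenced above. The socket path is the containerd
// default and an assumption, not taken from this log.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```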
Jun 21 04:46:29.456073 containerd[1720]: time="2025-06-21T04:46:29.455969761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mg4lm,Uid:fc5d880f-272e-4717-b882-56c5afda1f25,Namespace:kube-system,Attempt:0,} returns sandbox id \"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\"" Jun 21 04:46:31.800942 kubelet[3142]: I0621 04:46:31.800889 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fnv77" podStartSLOduration=3.8008736279999997 podStartE2EDuration="3.800873628s" podCreationTimestamp="2025-06-21 04:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:46:29.556438839 +0000 UTC m=+8.122361367" watchObservedRunningTime="2025-06-21 04:46:31.800873628 +0000 UTC m=+10.366796153" Jun 21 04:46:33.612637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2140391694.mount: Deactivated successfully. Jun 21 04:46:35.487261 containerd[1720]: time="2025-06-21T04:46:35.487224892Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:35.489651 containerd[1720]: time="2025-06-21T04:46:35.489615416Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 21 04:46:35.492823 containerd[1720]: time="2025-06-21T04:46:35.492768885Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:35.493814 containerd[1720]: time="2025-06-21T04:46:35.493736753Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.216898096s" Jun 21 04:46:35.493814 containerd[1720]: time="2025-06-21T04:46:35.493763915Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 21 04:46:35.494657 containerd[1720]: time="2025-06-21T04:46:35.494637065Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 21 04:46:35.500762 containerd[1720]: time="2025-06-21T04:46:35.500454808Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 04:46:35.518414 containerd[1720]: time="2025-06-21T04:46:35.518392360Z" level=info msg="Container f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:35.536248 containerd[1720]: time="2025-06-21T04:46:35.536224524Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\"" Jun 21 04:46:35.536601 containerd[1720]: time="2025-06-21T04:46:35.536574106Z" level=info msg="StartContainer for \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\"" Jun 21 04:46:35.537215 containerd[1720]: time="2025-06-21T04:46:35.537191522Z" level=info msg="connecting to shim f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6" address="unix:///run/containerd/s/dd45d69d5194730d069356d6c35a99398f8678c6573929bec93d9fdc4df07870" protocol=ttrpc version=3 Jun 21 04:46:35.558579 systemd[1]: Started cri-containerd-f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6.scope - libcontainer container f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6. Jun 21 04:46:35.585314 systemd[1]: cri-containerd-f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6.scope: Deactivated successfully. Jun 21 04:46:35.586413 containerd[1720]: time="2025-06-21T04:46:35.586379055Z" level=info msg="StartContainer for \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\" returns successfully" Jun 21 04:46:35.588898 containerd[1720]: time="2025-06-21T04:46:35.588877597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\" id:\"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\" pid:3565 exited_at:{seconds:1750481195 nanos:588590704}" Jun 21 04:46:35.589046 containerd[1720]: time="2025-06-21T04:46:35.588917629Z" level=info msg="received exit event container_id:\"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\" id:\"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\" pid:3565 exited_at:{seconds:1750481195 nanos:588590704}" Jun 21 04:46:35.601970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6-rootfs.mount: Deactivated successfully. Jun 21 04:46:39.586254 containerd[1720]: time="2025-06-21T04:46:39.586212952Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 04:46:39.621379 containerd[1720]: time="2025-06-21T04:46:39.620918372Z" level=info msg="Container 2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:39.623409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775662323.mount: Deactivated successfully. 
Jun 21 04:46:39.707660 containerd[1720]: time="2025-06-21T04:46:39.707624204Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\"" Jun 21 04:46:39.708054 containerd[1720]: time="2025-06-21T04:46:39.708011073Z" level=info msg="StartContainer for \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\"" Jun 21 04:46:39.711034 containerd[1720]: time="2025-06-21T04:46:39.709442762Z" level=info msg="connecting to shim 2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b" address="unix:///run/containerd/s/dd45d69d5194730d069356d6c35a99398f8678c6573929bec93d9fdc4df07870" protocol=ttrpc version=3 Jun 21 04:46:39.726507 systemd[1]: Started cri-containerd-2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b.scope - libcontainer container 2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b. Jun 21 04:46:39.736411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704904999.mount: Deactivated successfully. Jun 21 04:46:39.757695 containerd[1720]: time="2025-06-21T04:46:39.757675974Z" level=info msg="StartContainer for \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\" returns successfully" Jun 21 04:46:39.764551 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 04:46:39.764755 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:46:39.765161 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:46:39.767522 containerd[1720]: time="2025-06-21T04:46:39.767500738Z" level=info msg="received exit event container_id:\"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\" id:\"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\" pid:3612 exited_at:{seconds:1750481199 nanos:767064155}" Jun 21 04:46:39.767606 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 04:46:39.767799 systemd[1]: cri-containerd-2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b.scope: Deactivated successfully. Jun 21 04:46:39.768445 containerd[1720]: time="2025-06-21T04:46:39.768279843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\" id:\"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\" pid:3612 exited_at:{seconds:1750481199 nanos:767064155}" Jun 21 04:46:39.785870 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 04:46:40.612810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b-rootfs.mount: Deactivated successfully. 
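The apply-sysctl-overwrites init container that runs and exits above is the stage where Cilium adjusts kernel parameters (with systemd re-running systemd-sysctl around it). A minimal sketch of what such a step boils down to follows; the specific sysctl key and value are assumptions, as the log does not record which parameters were changed.

```go
// Sketch: what an "apply-sysctl-overwrites"-style step amounts to - writing values
// under /proc/sys. The rp_filter key below is an illustrative assumption; this log
// does not record which sysctls were actually changed.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func writeSysctl(key, value string) error {
	// net.ipv4.conf.all.rp_filter -> /proc/sys/net/ipv4/conf/all/rp_filter
	p := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(p, []byte(value), 0o644)
}

func main() {
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		log.Fatal(err)
	}
	log.Println("sysctl override applied")
}
```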
Jun 21 04:46:40.665829 containerd[1720]: time="2025-06-21T04:46:40.665793788Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 04:46:40.669277 containerd[1720]: time="2025-06-21T04:46:40.669187664Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:40.714074 containerd[1720]: time="2025-06-21T04:46:40.714047181Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 21 04:46:40.763791 containerd[1720]: time="2025-06-21T04:46:40.763754075Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 04:46:40.808228 containerd[1720]: time="2025-06-21T04:46:40.808148333Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.313487485s" Jun 21 04:46:40.808228 containerd[1720]: time="2025-06-21T04:46:40.808178755Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 21 04:46:40.917646 containerd[1720]: time="2025-06-21T04:46:40.917619650Z" level=info msg="CreateContainer within sandbox \"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 21 04:46:40.924040 containerd[1720]: time="2025-06-21T04:46:40.924021024Z" level=info msg="Container 768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:40.924263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648339770.mount: Deactivated successfully. 
Jun 21 04:46:41.219177 containerd[1720]: time="2025-06-21T04:46:41.219111736Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\"" Jun 21 04:46:41.219715 containerd[1720]: time="2025-06-21T04:46:41.219610623Z" level=info msg="StartContainer for \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\"" Jun 21 04:46:41.221092 containerd[1720]: time="2025-06-21T04:46:41.221035999Z" level=info msg="connecting to shim 768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0" address="unix:///run/containerd/s/dd45d69d5194730d069356d6c35a99398f8678c6573929bec93d9fdc4df07870" protocol=ttrpc version=3 Jun 21 04:46:41.224232 containerd[1720]: time="2025-06-21T04:46:41.224173099Z" level=info msg="Container 526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:41.241496 systemd[1]: Started cri-containerd-768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0.scope - libcontainer container 768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0. Jun 21 04:46:41.265455 systemd[1]: cri-containerd-768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0.scope: Deactivated successfully. Jun 21 04:46:41.312803 containerd[1720]: time="2025-06-21T04:46:41.266679324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\" id:\"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\" pid:3672 exited_at:{seconds:1750481201 nanos:266484702}" Jun 21 04:46:41.314190 containerd[1720]: time="2025-06-21T04:46:41.314168387Z" level=info msg="received exit event container_id:\"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\" id:\"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\" pid:3672 exited_at:{seconds:1750481201 nanos:266484702}" Jun 21 04:46:41.321335 containerd[1720]: time="2025-06-21T04:46:41.321316846Z" level=info msg="StartContainer for \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\" returns successfully" Jun 21 04:46:41.612431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0-rootfs.mount: Deactivated successfully. Jun 21 04:46:42.959223 containerd[1720]: time="2025-06-21T04:46:42.959177172Z" level=info msg="CreateContainer within sandbox \"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\"" Jun 21 04:46:42.959982 containerd[1720]: time="2025-06-21T04:46:42.959939856Z" level=info msg="StartContainer for \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\"" Jun 21 04:46:42.960888 containerd[1720]: time="2025-06-21T04:46:42.960860472Z" level=info msg="connecting to shim 526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592" address="unix:///run/containerd/s/3fc32a0cdc4dc7c62bce6d6bfad217cdce1ae4197adc3f43834ad947a98b0609" protocol=ttrpc version=3 Jun 21 04:46:42.980498 systemd[1]: Started cri-containerd-526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592.scope - libcontainer container 526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592. 
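The mount-bpf-fs init container that runs and exits above presumably ensures the BPF filesystem is mounted. A minimal Go sketch of that operation is below; the conventional /sys/fs/bpf mountpoint and the idempotence check are assumptions, since the log only shows the container starting and exiting.

```go
// Sketch: what a "mount-bpf-fs" step amounts to - mounting bpffs at the
// conventional /sys/fs/bpf mountpoint. The path and the already-mounted check are
// assumptions; the log only shows the init container running and exiting.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf"

	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}

	// Skip the mount if a BPF filesystem is already present at the target.
	var st unix.Statfs_t
	if err := unix.Statfs(target, &st); err == nil && st.Type == unix.BPF_FS_MAGIC {
		log.Println("bpffs already mounted at", target)
		return
	}

	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
	log.Println("bpffs mounted at", target)
}
```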
Jun 21 04:46:45.312215 kubelet[3142]: E0621 04:46:45.312182 3142 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.79s" Jun 21 04:46:45.313512 containerd[1720]: time="2025-06-21T04:46:45.313468009Z" level=info msg="StartContainer for \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" returns successfully" Jun 21 04:46:46.327028 containerd[1720]: time="2025-06-21T04:46:46.326991820Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 04:46:46.346170 kubelet[3142]: I0621 04:46:46.346119 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mg4lm" podStartSLOduration=5.995351073 podStartE2EDuration="17.346105548s" podCreationTimestamp="2025-06-21 04:46:29 +0000 UTC" firstStartedPulling="2025-06-21 04:46:29.457907837 +0000 UTC m=+8.023830364" lastFinishedPulling="2025-06-21 04:46:40.808662306 +0000 UTC m=+19.374584839" observedRunningTime="2025-06-21 04:46:46.330689206 +0000 UTC m=+24.896611758" watchObservedRunningTime="2025-06-21 04:46:46.346105548 +0000 UTC m=+24.912028076" Jun 21 04:46:46.466495 containerd[1720]: time="2025-06-21T04:46:46.466464377Z" level=info msg="Container 7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:46.561322 containerd[1720]: time="2025-06-21T04:46:46.561297361Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\"" Jun 21 04:46:46.562208 containerd[1720]: time="2025-06-21T04:46:46.562102113Z" level=info msg="StartContainer for \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\"" Jun 21 04:46:46.564273 containerd[1720]: time="2025-06-21T04:46:46.564190430Z" level=info msg="connecting to shim 7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968" address="unix:///run/containerd/s/dd45d69d5194730d069356d6c35a99398f8678c6573929bec93d9fdc4df07870" protocol=ttrpc version=3 Jun 21 04:46:46.588478 systemd[1]: Started cri-containerd-7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968.scope - libcontainer container 7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968. Jun 21 04:46:46.606034 systemd[1]: cri-containerd-7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968.scope: Deactivated successfully. 
Jun 21 04:46:46.607613 containerd[1720]: time="2025-06-21T04:46:46.607587968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\" id:\"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\" pid:3744 exited_at:{seconds:1750481206 nanos:606396282}" Jun 21 04:46:46.610500 containerd[1720]: time="2025-06-21T04:46:46.610404349Z" level=info msg="received exit event container_id:\"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\" id:\"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\" pid:3744 exited_at:{seconds:1750481206 nanos:606396282}" Jun 21 04:46:46.615460 containerd[1720]: time="2025-06-21T04:46:46.615435039Z" level=info msg="StartContainer for \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\" returns successfully" Jun 21 04:46:46.623814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968-rootfs.mount: Deactivated successfully. Jun 21 04:46:48.334035 containerd[1720]: time="2025-06-21T04:46:48.333553555Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 04:46:48.458439 containerd[1720]: time="2025-06-21T04:46:48.458410636Z" level=info msg="Container a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:48.565951 containerd[1720]: time="2025-06-21T04:46:48.565922582Z" level=info msg="CreateContainer within sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\"" Jun 21 04:46:48.566751 containerd[1720]: time="2025-06-21T04:46:48.566245484Z" level=info msg="StartContainer for \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\"" Jun 21 04:46:48.567076 containerd[1720]: time="2025-06-21T04:46:48.567040133Z" level=info msg="connecting to shim a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2" address="unix:///run/containerd/s/dd45d69d5194730d069356d6c35a99398f8678c6573929bec93d9fdc4df07870" protocol=ttrpc version=3 Jun 21 04:46:48.584490 systemd[1]: Started cri-containerd-a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2.scope - libcontainer container a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2. Jun 21 04:46:48.610801 containerd[1720]: time="2025-06-21T04:46:48.610730671Z" level=info msg="StartContainer for \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" returns successfully" Jun 21 04:46:48.660456 containerd[1720]: time="2025-06-21T04:46:48.660237658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" id:\"49ee980e2f957f64f9fcccd50bb46352e01c1e531f228f4a9944d4fe8f4c4621\" pid:3811 exited_at:{seconds:1750481208 nanos:659754819}" Jun 21 04:46:48.714160 kubelet[3142]: I0621 04:46:48.714140 3142 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 04:46:48.766079 systemd[1]: Created slice kubepods-burstable-pod18b084f5_a78e_46e8_99d8_60bd55345c6a.slice - libcontainer container kubepods-burstable-pod18b084f5_a78e_46e8_99d8_60bd55345c6a.slice. 
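"Fast updating node status as it just became ready" above is the point where the node's Ready condition flips to True once the CNI is up. A small client-go sketch for reading that condition on the node named in this log follows; it assumes a kubeconfig at the default home location, which is not something the log establishes.

```go
// Sketch: reading the Ready condition that flips when the kubelet logs
// "Fast updating node status as it just became ready" above. Assumes a kubeconfig
// at the default ~/.kube/config location; the node name is the one in this log.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ci-4372.0.0-a-59b94489dc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("NodeReady=%s reason=%s since=%s\n", c.Status, c.Reason, c.LastTransitionTime)
		}
	}
}
```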
Jun 21 04:46:48.775053 systemd[1]: Created slice kubepods-burstable-poda0ca9c5d_8dfa_4393_95f2_6626409e1f35.slice - libcontainer container kubepods-burstable-poda0ca9c5d_8dfa_4393_95f2_6626409e1f35.slice. Jun 21 04:46:48.777887 kubelet[3142]: I0621 04:46:48.777781 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18b084f5-a78e-46e8-99d8-60bd55345c6a-config-volume\") pod \"coredns-674b8bbfcf-2x4fk\" (UID: \"18b084f5-a78e-46e8-99d8-60bd55345c6a\") " pod="kube-system/coredns-674b8bbfcf-2x4fk" Jun 21 04:46:48.777887 kubelet[3142]: I0621 04:46:48.777812 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0ca9c5d-8dfa-4393-95f2-6626409e1f35-config-volume\") pod \"coredns-674b8bbfcf-6r5f8\" (UID: \"a0ca9c5d-8dfa-4393-95f2-6626409e1f35\") " pod="kube-system/coredns-674b8bbfcf-6r5f8" Jun 21 04:46:48.777887 kubelet[3142]: I0621 04:46:48.777830 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b25g\" (UniqueName: \"kubernetes.io/projected/18b084f5-a78e-46e8-99d8-60bd55345c6a-kube-api-access-6b25g\") pod \"coredns-674b8bbfcf-2x4fk\" (UID: \"18b084f5-a78e-46e8-99d8-60bd55345c6a\") " pod="kube-system/coredns-674b8bbfcf-2x4fk" Jun 21 04:46:48.777887 kubelet[3142]: I0621 04:46:48.777848 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzlz\" (UniqueName: \"kubernetes.io/projected/a0ca9c5d-8dfa-4393-95f2-6626409e1f35-kube-api-access-6fzlz\") pod \"coredns-674b8bbfcf-6r5f8\" (UID: \"a0ca9c5d-8dfa-4393-95f2-6626409e1f35\") " pod="kube-system/coredns-674b8bbfcf-6r5f8" Jun 21 04:46:49.071273 containerd[1720]: time="2025-06-21T04:46:49.071234286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2x4fk,Uid:18b084f5-a78e-46e8-99d8-60bd55345c6a,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:49.085992 containerd[1720]: time="2025-06-21T04:46:49.085817936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6r5f8,Uid:a0ca9c5d-8dfa-4393-95f2-6626409e1f35,Namespace:kube-system,Attempt:0,}" Jun 21 04:46:50.668317 systemd-networkd[1357]: cilium_host: Link UP Jun 21 04:46:50.668776 systemd-networkd[1357]: cilium_net: Link UP Jun 21 04:46:50.668872 systemd-networkd[1357]: cilium_net: Gained carrier Jun 21 04:46:50.668944 systemd-networkd[1357]: cilium_host: Gained carrier Jun 21 04:46:50.780332 systemd-networkd[1357]: cilium_vxlan: Link UP Jun 21 04:46:50.780338 systemd-networkd[1357]: cilium_vxlan: Gained carrier Jun 21 04:46:50.963418 kernel: NET: Registered PF_ALG protocol family Jun 21 04:46:51.149499 systemd-networkd[1357]: cilium_net: Gained IPv6LL Jun 21 04:46:51.396535 systemd-networkd[1357]: lxc_health: Link UP Jun 21 04:46:51.406467 systemd-networkd[1357]: lxc_health: Gained carrier Jun 21 04:46:51.468463 systemd-networkd[1357]: cilium_host: Gained IPv6LL Jun 21 04:46:51.601741 systemd-networkd[1357]: lxc244642d27b44: Link UP Jun 21 04:46:51.603384 kernel: eth0: renamed from tmpc4103 Jun 21 04:46:51.606883 systemd-networkd[1357]: lxc244642d27b44: Gained carrier Jun 21 04:46:51.676910 systemd-networkd[1357]: lxc38b3a743e939: Link UP Jun 21 04:46:51.687356 kernel: eth0: renamed from tmp06e2b Jun 21 04:46:51.689907 systemd-networkd[1357]: lxc38b3a743e939: Gained carrier Jun 21 04:46:52.428458 
systemd-networkd[1357]: cilium_vxlan: Gained IPv6LL Jun 21 04:46:52.940867 systemd-networkd[1357]: lxc_health: Gained IPv6LL Jun 21 04:46:52.941112 systemd-networkd[1357]: lxc38b3a743e939: Gained IPv6LL Jun 21 04:46:53.068487 systemd-networkd[1357]: lxc244642d27b44: Gained IPv6LL Jun 21 04:46:53.200238 kubelet[3142]: I0621 04:46:53.199607 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gbs5b" podStartSLOduration=18.981458627 podStartE2EDuration="25.199591784s" podCreationTimestamp="2025-06-21 04:46:28 +0000 UTC" firstStartedPulling="2025-06-21 04:46:29.27637321 +0000 UTC m=+7.842295736" lastFinishedPulling="2025-06-21 04:46:35.494506361 +0000 UTC m=+14.060428893" observedRunningTime="2025-06-21 04:46:49.347941185 +0000 UTC m=+27.913863718" watchObservedRunningTime="2025-06-21 04:46:53.199591784 +0000 UTC m=+31.765514316" Jun 21 04:46:54.873895 containerd[1720]: time="2025-06-21T04:46:54.873784369Z" level=info msg="connecting to shim c4103b530f8165d43261f1e2255cb01c3a26887d1648b301c29e00160305f027" address="unix:///run/containerd/s/c114979699c261ddf23f5e4419785e110ea727a2788adacea95a5eaadf9709da" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:54.896482 systemd[1]: Started cri-containerd-c4103b530f8165d43261f1e2255cb01c3a26887d1648b301c29e00160305f027.scope - libcontainer container c4103b530f8165d43261f1e2255cb01c3a26887d1648b301c29e00160305f027. Jun 21 04:46:54.918603 containerd[1720]: time="2025-06-21T04:46:54.917621203Z" level=info msg="connecting to shim 06e2be66c495269dba3f698c681e0b432f787ddb4c7d1e8ead25cf2c6e4ed811" address="unix:///run/containerd/s/393b52948e017b3780bc4423b3f72a1ed71969c5040245dbc848285464a98c0c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:46:54.943584 systemd[1]: Started cri-containerd-06e2be66c495269dba3f698c681e0b432f787ddb4c7d1e8ead25cf2c6e4ed811.scope - libcontainer container 06e2be66c495269dba3f698c681e0b432f787ddb4c7d1e8ead25cf2c6e4ed811. 
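The pod_startup_latency_tracker entry above reports two figures for cilium-gbs5b: podStartSLOduration=18.981458627 and podStartE2EDuration=25.199591784s. Their difference is exactly the image-pull window bounded by firstStartedPulling and lastFinishedPulling (easiest to see from the monotonic m=+... offsets), so the SLO figure appears to be the end-to-end time with image pulling excluded. A small, purely illustrative check of that relationship:

    # Illustrative arithmetic on the figures quoted in the entry above; the
    # relationship (SLO duration = end-to-end duration minus image-pull window)
    # is inferred from the numbers themselves, not from kubelet source.
    first_started_pulling = 7.842295736    # m=+7.842295736
    last_finished_pulling = 14.060428893   # m=+14.060428893
    e2e_duration = 25.199591784            # podStartE2EDuration, seconds

    pull_window = last_finished_pulling - first_started_pulling
    print(round(e2e_duration - pull_window, 9))   # 18.981458627, matching podStartSLOduration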
Jun 21 04:46:54.958782 containerd[1720]: time="2025-06-21T04:46:54.958760819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2x4fk,Uid:18b084f5-a78e-46e8-99d8-60bd55345c6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4103b530f8165d43261f1e2255cb01c3a26887d1648b301c29e00160305f027\"" Jun 21 04:46:54.966875 containerd[1720]: time="2025-06-21T04:46:54.966847957Z" level=info msg="CreateContainer within sandbox \"c4103b530f8165d43261f1e2255cb01c3a26887d1648b301c29e00160305f027\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 04:46:55.054781 containerd[1720]: time="2025-06-21T04:46:55.054753544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6r5f8,Uid:a0ca9c5d-8dfa-4393-95f2-6626409e1f35,Namespace:kube-system,Attempt:0,} returns sandbox id \"06e2be66c495269dba3f698c681e0b432f787ddb4c7d1e8ead25cf2c6e4ed811\"" Jun 21 04:46:55.164007 containerd[1720]: time="2025-06-21T04:46:55.163943021Z" level=info msg="CreateContainer within sandbox \"06e2be66c495269dba3f698c681e0b432f787ddb4c7d1e8ead25cf2c6e4ed811\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 04:46:55.259994 containerd[1720]: time="2025-06-21T04:46:55.259968099Z" level=info msg="Container 462ab5a9f9e90823e5df61ff64d80515291ab189ed1215b444c8b987f49c5b59: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:55.566273 containerd[1720]: time="2025-06-21T04:46:55.566118938Z" level=info msg="CreateContainer within sandbox \"c4103b530f8165d43261f1e2255cb01c3a26887d1648b301c29e00160305f027\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"462ab5a9f9e90823e5df61ff64d80515291ab189ed1215b444c8b987f49c5b59\"" Jun 21 04:46:55.566651 containerd[1720]: time="2025-06-21T04:46:55.566626667Z" level=info msg="StartContainer for \"462ab5a9f9e90823e5df61ff64d80515291ab189ed1215b444c8b987f49c5b59\"" Jun 21 04:46:55.567590 containerd[1720]: time="2025-06-21T04:46:55.567546075Z" level=info msg="connecting to shim 462ab5a9f9e90823e5df61ff64d80515291ab189ed1215b444c8b987f49c5b59" address="unix:///run/containerd/s/c114979699c261ddf23f5e4419785e110ea727a2788adacea95a5eaadf9709da" protocol=ttrpc version=3 Jun 21 04:46:55.580476 systemd[1]: Started cri-containerd-462ab5a9f9e90823e5df61ff64d80515291ab189ed1215b444c8b987f49c5b59.scope - libcontainer container 462ab5a9f9e90823e5df61ff64d80515291ab189ed1215b444c8b987f49c5b59. 
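The coredns rollout above follows the same containerd sequence already seen for the cilium containers: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, containerd connects to the shim over its ttrpc unix socket, systemd starts the matching cri-containerd-<id>.scope unit, and StartContainer returns. When reading a dump like this, the easiest way to follow one container is to pull out every entry that mentions its id; a small hypothetical helper (not part of the log or of any tool on the host):

    # Hypothetical helper for reading a journal dump such as this one: print every
    # line that mentions a given container id so its lifecycle can be followed
    # (CreateContainer -> connecting to shim -> scope start -> StartContainer -> TaskExit).
    import sys

    def lifecycle(path: str, container_id: str) -> None:
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                if container_id in line:
                    print(line.rstrip())

    if __name__ == "__main__":
        # e.g. lifecycle("journal.txt", "462ab5a9f9e90823e5df61ff64d80515291ab189ed1215b444c8b987f49c5b59")
        lifecycle(sys.argv[1], sys.argv[2])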
Jun 21 04:46:55.609756 containerd[1720]: time="2025-06-21T04:46:55.609734532Z" level=info msg="Container 6c4a0ef0aff70e01ea1aea53b7ac1c9ae018e971cb8f659a49a5cea93adf1b02: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:46:55.613448 containerd[1720]: time="2025-06-21T04:46:55.613428090Z" level=info msg="StartContainer for \"462ab5a9f9e90823e5df61ff64d80515291ab189ed1215b444c8b987f49c5b59\" returns successfully" Jun 21 04:46:55.713085 containerd[1720]: time="2025-06-21T04:46:55.713061239Z" level=info msg="CreateContainer within sandbox \"06e2be66c495269dba3f698c681e0b432f787ddb4c7d1e8ead25cf2c6e4ed811\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c4a0ef0aff70e01ea1aea53b7ac1c9ae018e971cb8f659a49a5cea93adf1b02\"" Jun 21 04:46:55.713460 containerd[1720]: time="2025-06-21T04:46:55.713418336Z" level=info msg="StartContainer for \"6c4a0ef0aff70e01ea1aea53b7ac1c9ae018e971cb8f659a49a5cea93adf1b02\"" Jun 21 04:46:55.714086 containerd[1720]: time="2025-06-21T04:46:55.714052514Z" level=info msg="connecting to shim 6c4a0ef0aff70e01ea1aea53b7ac1c9ae018e971cb8f659a49a5cea93adf1b02" address="unix:///run/containerd/s/393b52948e017b3780bc4423b3f72a1ed71969c5040245dbc848285464a98c0c" protocol=ttrpc version=3 Jun 21 04:46:55.728483 systemd[1]: Started cri-containerd-6c4a0ef0aff70e01ea1aea53b7ac1c9ae018e971cb8f659a49a5cea93adf1b02.scope - libcontainer container 6c4a0ef0aff70e01ea1aea53b7ac1c9ae018e971cb8f659a49a5cea93adf1b02. Jun 21 04:46:55.771186 containerd[1720]: time="2025-06-21T04:46:55.771165665Z" level=info msg="StartContainer for \"6c4a0ef0aff70e01ea1aea53b7ac1c9ae018e971cb8f659a49a5cea93adf1b02\" returns successfully" Jun 21 04:46:56.363525 kubelet[3142]: I0621 04:46:56.363456 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6r5f8" podStartSLOduration=27.36343996 podStartE2EDuration="27.36343996s" podCreationTimestamp="2025-06-21 04:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:46:56.362391135 +0000 UTC m=+34.928313664" watchObservedRunningTime="2025-06-21 04:46:56.36343996 +0000 UTC m=+34.929362499" Jun 21 04:47:58.424272 systemd[1]: Started sshd@7-10.200.8.44:22-10.200.16.10:59480.service - OpenSSH per-connection server daemon (10.200.16.10:59480). Jun 21 04:47:59.049437 sshd[4457]: Accepted publickey for core from 10.200.16.10 port 59480 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:47:59.050483 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:47:59.054323 systemd-logind[1698]: New session 10 of user core. Jun 21 04:47:59.060480 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 04:47:59.544716 sshd[4459]: Connection closed by 10.200.16.10 port 59480 Jun 21 04:47:59.545200 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Jun 21 04:47:59.547547 systemd[1]: sshd@7-10.200.8.44:22-10.200.16.10:59480.service: Deactivated successfully. Jun 21 04:47:59.549420 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 04:47:59.550697 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit. Jun 21 04:47:59.551913 systemd-logind[1698]: Removed session 10. Jun 21 04:48:04.656001 systemd[1]: Started sshd@8-10.200.8.44:22-10.200.16.10:55178.service - OpenSSH per-connection server daemon (10.200.16.10:55178). 
Jun 21 04:48:05.288641 sshd[4474]: Accepted publickey for core from 10.200.16.10 port 55178 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:05.289679 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:05.293665 systemd-logind[1698]: New session 11 of user core. Jun 21 04:48:05.302514 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 04:48:05.773914 sshd[4476]: Connection closed by 10.200.16.10 port 55178 Jun 21 04:48:05.774326 sshd-session[4474]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:05.776931 systemd[1]: sshd@8-10.200.8.44:22-10.200.16.10:55178.service: Deactivated successfully. Jun 21 04:48:05.778512 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 04:48:05.779131 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit. Jun 21 04:48:05.780194 systemd-logind[1698]: Removed session 11. Jun 21 04:48:10.911937 systemd[1]: Started sshd@9-10.200.8.44:22-10.200.16.10:33966.service - OpenSSH per-connection server daemon (10.200.16.10:33966). Jun 21 04:48:11.560245 sshd[4489]: Accepted publickey for core from 10.200.16.10 port 33966 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:11.561670 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:11.566571 systemd-logind[1698]: New session 12 of user core. Jun 21 04:48:11.573487 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 04:48:12.045385 sshd[4491]: Connection closed by 10.200.16.10 port 33966 Jun 21 04:48:12.045767 sshd-session[4489]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:12.048274 systemd[1]: sshd@9-10.200.8.44:22-10.200.16.10:33966.service: Deactivated successfully. Jun 21 04:48:12.049888 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 04:48:12.050515 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit. Jun 21 04:48:12.051614 systemd-logind[1698]: Removed session 12. Jun 21 04:48:17.170692 systemd[1]: Started sshd@10-10.200.8.44:22-10.200.16.10:33982.service - OpenSSH per-connection server daemon (10.200.16.10:33982). Jun 21 04:48:17.799567 sshd[4504]: Accepted publickey for core from 10.200.16.10 port 33982 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:17.800566 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:17.804440 systemd-logind[1698]: New session 13 of user core. Jun 21 04:48:17.807475 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 04:48:18.286399 sshd[4506]: Connection closed by 10.200.16.10 port 33982 Jun 21 04:48:18.286935 sshd-session[4504]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:18.289622 systemd[1]: sshd@10-10.200.8.44:22-10.200.16.10:33982.service: Deactivated successfully. Jun 21 04:48:18.291378 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 04:48:18.292565 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit. Jun 21 04:48:18.293619 systemd-logind[1698]: Removed session 13. Jun 21 04:48:18.398054 systemd[1]: Started sshd@11-10.200.8.44:22-10.200.16.10:33990.service - OpenSSH per-connection server daemon (10.200.16.10:33990). 
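From this point until the Cilium teardown, the journal settles into a repeating SSH pattern: a per-connection sshd@N-<local>:22-<peer>:<port>.service unit starts, the public key for user core is accepted, pam_unix opens session N, and within a second or so the connection closes, the session is logged out, and the unit is deactivated. A small, hypothetical sketch for measuring how long such sessions stay open; it assumes one journal entry per line, the "Jun 21 HH:MM:SS.ffffff" prefix used throughout this log, and that all entries fall in the same year:

    # Hypothetical sketch: pair pam_unix "session opened"/"session closed" lines by
    # the sshd-session PID and report each session's length in seconds.
    import re
    from datetime import datetime

    ENTRY = re.compile(
        r"^(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) sshd-session\[(\d+)\]: "
        r"pam_unix\(sshd:session\): session (opened|closed)"
    )

    def session_lengths(path: str, year: int = 2025):
        opened = {}
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = ENTRY.match(line)
                if not m:
                    continue
                when = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
                pid, event = m.group(2), m.group(3)
                if event == "opened":
                    opened[pid] = when
                elif pid in opened:
                    yield pid, (when - opened.pop(pid)).total_seconds()

    # e.g. for pid, secs in session_lengths("journal.txt"): print(pid, secs)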
Jun 21 04:48:19.026656 sshd[4519]: Accepted publickey for core from 10.200.16.10 port 33990 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:19.027648 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:19.031729 systemd-logind[1698]: New session 14 of user core. Jun 21 04:48:19.039461 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 04:48:19.539117 sshd[4521]: Connection closed by 10.200.16.10 port 33990 Jun 21 04:48:19.539574 sshd-session[4519]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:19.541877 systemd[1]: sshd@11-10.200.8.44:22-10.200.16.10:33990.service: Deactivated successfully. Jun 21 04:48:19.543506 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 04:48:19.544745 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit. Jun 21 04:48:19.545830 systemd-logind[1698]: Removed session 14. Jun 21 04:48:19.648720 systemd[1]: Started sshd@12-10.200.8.44:22-10.200.16.10:37106.service - OpenSSH per-connection server daemon (10.200.16.10:37106). Jun 21 04:48:20.282577 sshd[4531]: Accepted publickey for core from 10.200.16.10 port 37106 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:20.283898 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:20.287595 systemd-logind[1698]: New session 15 of user core. Jun 21 04:48:20.294481 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 04:48:20.766941 sshd[4533]: Connection closed by 10.200.16.10 port 37106 Jun 21 04:48:20.767443 sshd-session[4531]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:20.770296 systemd[1]: sshd@12-10.200.8.44:22-10.200.16.10:37106.service: Deactivated successfully. Jun 21 04:48:20.771924 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 04:48:20.772741 systemd-logind[1698]: Session 15 logged out. Waiting for processes to exit. Jun 21 04:48:20.773797 systemd-logind[1698]: Removed session 15. Jun 21 04:48:25.884387 systemd[1]: Started sshd@13-10.200.8.44:22-10.200.16.10:37118.service - OpenSSH per-connection server daemon (10.200.16.10:37118). Jun 21 04:48:26.512077 sshd[4547]: Accepted publickey for core from 10.200.16.10 port 37118 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:26.513087 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:26.516601 systemd-logind[1698]: New session 16 of user core. Jun 21 04:48:26.521486 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 04:48:26.993826 sshd[4549]: Connection closed by 10.200.16.10 port 37118 Jun 21 04:48:26.994278 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:26.997201 systemd[1]: sshd@13-10.200.8.44:22-10.200.16.10:37118.service: Deactivated successfully. Jun 21 04:48:26.998816 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 04:48:26.999538 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit. Jun 21 04:48:27.000565 systemd-logind[1698]: Removed session 16. Jun 21 04:48:27.111629 systemd[1]: Started sshd@14-10.200.8.44:22-10.200.16.10:37134.service - OpenSSH per-connection server daemon (10.200.16.10:37134). 
Jun 21 04:48:27.743065 sshd[4561]: Accepted publickey for core from 10.200.16.10 port 37134 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:27.744493 sshd-session[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:27.748518 systemd-logind[1698]: New session 17 of user core. Jun 21 04:48:27.754486 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 04:48:28.294001 sshd[4563]: Connection closed by 10.200.16.10 port 37134 Jun 21 04:48:28.294470 sshd-session[4561]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:28.296903 systemd[1]: sshd@14-10.200.8.44:22-10.200.16.10:37134.service: Deactivated successfully. Jun 21 04:48:28.298517 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 04:48:28.299764 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit. Jun 21 04:48:28.300956 systemd-logind[1698]: Removed session 17. Jun 21 04:48:28.418001 systemd[1]: Started sshd@15-10.200.8.44:22-10.200.16.10:37138.service - OpenSSH per-connection server daemon (10.200.16.10:37138). Jun 21 04:48:29.050083 sshd[4573]: Accepted publickey for core from 10.200.16.10 port 37138 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:29.051035 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:29.054692 systemd-logind[1698]: New session 18 of user core. Jun 21 04:48:29.061479 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 04:48:30.282290 sshd[4575]: Connection closed by 10.200.16.10 port 37138 Jun 21 04:48:30.282843 sshd-session[4573]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:30.285369 systemd[1]: sshd@15-10.200.8.44:22-10.200.16.10:37138.service: Deactivated successfully. Jun 21 04:48:30.287044 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 04:48:30.288202 systemd-logind[1698]: Session 18 logged out. Waiting for processes to exit. Jun 21 04:48:30.289566 systemd-logind[1698]: Removed session 18. Jun 21 04:48:30.391829 systemd[1]: Started sshd@16-10.200.8.44:22-10.200.16.10:39054.service - OpenSSH per-connection server daemon (10.200.16.10:39054). Jun 21 04:48:31.017414 sshd[4594]: Accepted publickey for core from 10.200.16.10 port 39054 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:31.018635 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:31.022571 systemd-logind[1698]: New session 19 of user core. Jun 21 04:48:31.026489 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 04:48:31.575660 sshd[4596]: Connection closed by 10.200.16.10 port 39054 Jun 21 04:48:31.576056 sshd-session[4594]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:31.578721 systemd[1]: sshd@16-10.200.8.44:22-10.200.16.10:39054.service: Deactivated successfully. Jun 21 04:48:31.580160 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 04:48:31.580790 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit. Jun 21 04:48:31.581821 systemd-logind[1698]: Removed session 19. Jun 21 04:48:31.688561 systemd[1]: Started sshd@17-10.200.8.44:22-10.200.16.10:39068.service - OpenSSH per-connection server daemon (10.200.16.10:39068). 
Jun 21 04:48:32.319902 sshd[4606]: Accepted publickey for core from 10.200.16.10 port 39068 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:32.321175 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:32.325270 systemd-logind[1698]: New session 20 of user core. Jun 21 04:48:32.332516 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 04:48:32.803229 sshd[4608]: Connection closed by 10.200.16.10 port 39068 Jun 21 04:48:32.803724 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:32.806033 systemd[1]: sshd@17-10.200.8.44:22-10.200.16.10:39068.service: Deactivated successfully. Jun 21 04:48:32.807599 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 04:48:32.809218 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit. Jun 21 04:48:32.810038 systemd-logind[1698]: Removed session 20. Jun 21 04:48:37.922234 systemd[1]: Started sshd@18-10.200.8.44:22-10.200.16.10:39082.service - OpenSSH per-connection server daemon (10.200.16.10:39082). Jun 21 04:48:38.558303 sshd[4622]: Accepted publickey for core from 10.200.16.10 port 39082 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:38.559386 sshd-session[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:38.563301 systemd-logind[1698]: New session 21 of user core. Jun 21 04:48:38.567530 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 04:48:39.048121 sshd[4624]: Connection closed by 10.200.16.10 port 39082 Jun 21 04:48:39.048573 sshd-session[4622]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:39.051195 systemd[1]: sshd@18-10.200.8.44:22-10.200.16.10:39082.service: Deactivated successfully. Jun 21 04:48:39.052740 systemd[1]: session-21.scope: Deactivated successfully. Jun 21 04:48:39.053732 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit. Jun 21 04:48:39.054673 systemd-logind[1698]: Removed session 21. Jun 21 04:48:44.161627 systemd[1]: Started sshd@19-10.200.8.44:22-10.200.16.10:56122.service - OpenSSH per-connection server daemon (10.200.16.10:56122). Jun 21 04:48:44.790153 sshd[4636]: Accepted publickey for core from 10.200.16.10 port 56122 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:44.791160 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:44.794932 systemd-logind[1698]: New session 22 of user core. Jun 21 04:48:44.801473 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 21 04:48:45.271816 sshd[4638]: Connection closed by 10.200.16.10 port 56122 Jun 21 04:48:45.272264 sshd-session[4636]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:45.274458 systemd[1]: sshd@19-10.200.8.44:22-10.200.16.10:56122.service: Deactivated successfully. Jun 21 04:48:45.276057 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 04:48:45.277658 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit. Jun 21 04:48:45.278430 systemd-logind[1698]: Removed session 22. Jun 21 04:48:45.385355 systemd[1]: Started sshd@20-10.200.8.44:22-10.200.16.10:56136.service - OpenSSH per-connection server daemon (10.200.16.10:56136). 
Jun 21 04:48:46.014288 sshd[4650]: Accepted publickey for core from 10.200.16.10 port 56136 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:46.015583 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:46.019585 systemd-logind[1698]: New session 23 of user core. Jun 21 04:48:46.024491 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 21 04:48:47.618856 kubelet[3142]: I0621 04:48:47.618783 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2x4fk" podStartSLOduration=138.61876554 podStartE2EDuration="2m18.61876554s" podCreationTimestamp="2025-06-21 04:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:46:56.391584404 +0000 UTC m=+34.957506936" watchObservedRunningTime="2025-06-21 04:48:47.61876554 +0000 UTC m=+146.184688072" Jun 21 04:48:47.632915 containerd[1720]: time="2025-06-21T04:48:47.632811733Z" level=info msg="StopContainer for \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" with timeout 30 (s)" Jun 21 04:48:47.634073 containerd[1720]: time="2025-06-21T04:48:47.634044670Z" level=info msg="Stop container \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" with signal terminated" Jun 21 04:48:47.645453 systemd[1]: cri-containerd-526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592.scope: Deactivated successfully. Jun 21 04:48:47.647081 containerd[1720]: time="2025-06-21T04:48:47.647055915Z" level=info msg="received exit event container_id:\"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" id:\"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" pid:3713 exited_at:{seconds:1750481327 nanos:645324642}" Jun 21 04:48:47.650198 containerd[1720]: time="2025-06-21T04:48:47.649538964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" id:\"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" pid:3713 exited_at:{seconds:1750481327 nanos:645324642}" Jun 21 04:48:47.651252 containerd[1720]: time="2025-06-21T04:48:47.651229501Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 04:48:47.655804 containerd[1720]: time="2025-06-21T04:48:47.655779207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" id:\"6b3707740fc7324b0c614844726035aa890a38e57bc23fa5c6ee2d135654e217\" pid:4680 exited_at:{seconds:1750481327 nanos:655544577}" Jun 21 04:48:47.657119 containerd[1720]: time="2025-06-21T04:48:47.657094327Z" level=info msg="StopContainer for \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" with timeout 2 (s)" Jun 21 04:48:47.657488 containerd[1720]: time="2025-06-21T04:48:47.657422042Z" level=info msg="Stop container \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" with signal terminated" Jun 21 04:48:47.666466 systemd-networkd[1357]: lxc_health: Link DOWN Jun 21 04:48:47.666549 systemd-networkd[1357]: lxc_health: Lost carrier Jun 21 04:48:47.671066 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592-rootfs.mount: Deactivated successfully. Jun 21 04:48:47.680605 systemd[1]: cri-containerd-a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2.scope: Deactivated successfully. Jun 21 04:48:47.681171 systemd[1]: cri-containerd-a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2.scope: Consumed 4.639s CPU time, 123.7M memory peak, 128K read from disk, 13.3M written to disk. Jun 21 04:48:47.681846 containerd[1720]: time="2025-06-21T04:48:47.681828944Z" level=info msg="received exit event container_id:\"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" id:\"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" pid:3782 exited_at:{seconds:1750481327 nanos:681613611}" Jun 21 04:48:47.682055 containerd[1720]: time="2025-06-21T04:48:47.681970210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" id:\"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" pid:3782 exited_at:{seconds:1750481327 nanos:681613611}" Jun 21 04:48:47.694067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2-rootfs.mount: Deactivated successfully. Jun 21 04:48:47.764484 containerd[1720]: time="2025-06-21T04:48:47.764463305Z" level=info msg="StopContainer for \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" returns successfully" Jun 21 04:48:47.764943 containerd[1720]: time="2025-06-21T04:48:47.764927417Z" level=info msg="StopPodSandbox for \"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\"" Jun 21 04:48:47.764998 containerd[1720]: time="2025-06-21T04:48:47.764972933Z" level=info msg="Container to stop \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:48:47.767672 containerd[1720]: time="2025-06-21T04:48:47.767585523Z" level=info msg="StopContainer for \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" returns successfully" Jun 21 04:48:47.768107 containerd[1720]: time="2025-06-21T04:48:47.768071096Z" level=info msg="StopPodSandbox for \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\"" Jun 21 04:48:47.768163 containerd[1720]: time="2025-06-21T04:48:47.768129312Z" level=info msg="Container to stop \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:48:47.768163 containerd[1720]: time="2025-06-21T04:48:47.768140933Z" level=info msg="Container to stop \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:48:47.768163 containerd[1720]: time="2025-06-21T04:48:47.768149193Z" level=info msg="Container to stop \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:48:47.768227 containerd[1720]: time="2025-06-21T04:48:47.768156857Z" level=info msg="Container to stop \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:48:47.768227 containerd[1720]: time="2025-06-21T04:48:47.768180174Z" level=info msg="Container to stop 
\"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 04:48:47.771060 systemd[1]: cri-containerd-552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd.scope: Deactivated successfully. Jun 21 04:48:47.772211 containerd[1720]: time="2025-06-21T04:48:47.772173579Z" level=info msg="TaskExit event in podsandbox handler container_id:\"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\" id:\"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\" pid:3383 exit_status:137 exited_at:{seconds:1750481327 nanos:771761640}" Jun 21 04:48:47.776482 systemd[1]: cri-containerd-8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e.scope: Deactivated successfully. Jun 21 04:48:47.798842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e-rootfs.mount: Deactivated successfully. Jun 21 04:48:47.802737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd-rootfs.mount: Deactivated successfully. Jun 21 04:48:47.819730 containerd[1720]: time="2025-06-21T04:48:47.819692511Z" level=info msg="shim disconnected" id=8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e namespace=k8s.io Jun 21 04:48:47.819998 containerd[1720]: time="2025-06-21T04:48:47.819882656Z" level=info msg="shim disconnected" id=552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd namespace=k8s.io Jun 21 04:48:47.819998 containerd[1720]: time="2025-06-21T04:48:47.819896986Z" level=warning msg="cleaning up after shim disconnected" id=552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd namespace=k8s.io Jun 21 04:48:47.819998 containerd[1720]: time="2025-06-21T04:48:47.819903445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 04:48:47.822382 containerd[1720]: time="2025-06-21T04:48:47.820979363Z" level=warning msg="cleaning up after shim disconnected" id=8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e namespace=k8s.io Jun 21 04:48:47.822806 containerd[1720]: time="2025-06-21T04:48:47.822776790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 04:48:47.836220 containerd[1720]: time="2025-06-21T04:48:47.836029779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" id:\"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" pid:3295 exit_status:137 exited_at:{seconds:1750481327 nanos:777598323}" Jun 21 04:48:47.836595 containerd[1720]: time="2025-06-21T04:48:47.836574010Z" level=info msg="received exit event sandbox_id:\"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" exit_status:137 exited_at:{seconds:1750481327 nanos:777598323}" Jun 21 04:48:47.836895 containerd[1720]: time="2025-06-21T04:48:47.836875132Z" level=info msg="TearDown network for sandbox \"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\" successfully" Jun 21 04:48:47.836950 containerd[1720]: time="2025-06-21T04:48:47.836941076Z" level=info msg="StopPodSandbox for \"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\" returns successfully" Jun 21 04:48:47.837584 containerd[1720]: time="2025-06-21T04:48:47.837438506Z" level=info msg="TearDown network for sandbox \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" successfully" Jun 21 04:48:47.837584 
containerd[1720]: time="2025-06-21T04:48:47.837456053Z" level=info msg="StopPodSandbox for \"8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e\" returns successfully" Jun 21 04:48:47.837959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd-shm.mount: Deactivated successfully. Jun 21 04:48:47.840168 containerd[1720]: time="2025-06-21T04:48:47.838479513Z" level=info msg="received exit event sandbox_id:\"552bdf29035923473300d749ab5c3bdbc897fd08e332cda70b54d934d36c5abd\" exit_status:137 exited_at:{seconds:1750481327 nanos:771761640}" Jun 21 04:48:47.900771 kubelet[3142]: I0621 04:48:47.900713 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-hostproc\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.900771 kubelet[3142]: I0621 04:48:47.900746 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-host-proc-sys-net\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.900771 kubelet[3142]: I0621 04:48:47.900766 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-xtables-lock\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901281 kubelet[3142]: I0621 04:48:47.900801 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.901281 kubelet[3142]: I0621 04:48:47.900831 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cni-path\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901281 kubelet[3142]: I0621 04:48:47.900845 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-run\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901281 kubelet[3142]: I0621 04:48:47.900869 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.901281 kubelet[3142]: I0621 04:48:47.900882 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cni-path" (OuterVolumeSpecName: "cni-path") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.901398 kubelet[3142]: I0621 04:48:47.900896 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-etc-cni-netd\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901398 kubelet[3142]: I0621 04:48:47.900915 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5d880f-272e-4717-b882-56c5afda1f25-cilium-config-path\") pod \"fc5d880f-272e-4717-b882-56c5afda1f25\" (UID: \"fc5d880f-272e-4717-b882-56c5afda1f25\") " Jun 21 04:48:47.901398 kubelet[3142]: I0621 04:48:47.900936 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8wcf\" (UniqueName: \"kubernetes.io/projected/fc5d880f-272e-4717-b882-56c5afda1f25-kube-api-access-z8wcf\") pod \"fc5d880f-272e-4717-b882-56c5afda1f25\" (UID: \"fc5d880f-272e-4717-b882-56c5afda1f25\") " Jun 21 04:48:47.901398 kubelet[3142]: I0621 04:48:47.900953 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-bpf-maps\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901398 kubelet[3142]: I0621 04:48:47.900970 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-config-path\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901398 kubelet[3142]: I0621 04:48:47.900987 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1dcc0aaf-8c88-43f6-b829-5bc216780669-clustermesh-secrets\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901516 kubelet[3142]: I0621 04:48:47.901002 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-cgroup\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901516 kubelet[3142]: I0621 04:48:47.901038 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-lib-modules\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901516 kubelet[3142]: I0621 04:48:47.901055 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-host-proc-sys-kernel\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901516 kubelet[3142]: I0621 04:48:47.901072 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p2j9\" (UniqueName: \"kubernetes.io/projected/1dcc0aaf-8c88-43f6-b829-5bc216780669-kube-api-access-9p2j9\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901516 kubelet[3142]: I0621 04:48:47.901087 3142 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1dcc0aaf-8c88-43f6-b829-5bc216780669-hubble-tls\") pod \"1dcc0aaf-8c88-43f6-b829-5bc216780669\" (UID: \"1dcc0aaf-8c88-43f6-b829-5bc216780669\") " Jun 21 04:48:47.901516 kubelet[3142]: I0621 04:48:47.901117 3142 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-host-proc-sys-net\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:47.902416 kubelet[3142]: I0621 04:48:47.902391 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.902481 kubelet[3142]: I0621 04:48:47.902426 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.902481 kubelet[3142]: I0621 04:48:47.902443 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.902481 kubelet[3142]: I0621 04:48:47.902452 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-hostproc" (OuterVolumeSpecName: "hostproc") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.904364 kubelet[3142]: I0621 04:48:47.904221 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dcc0aaf-8c88-43f6-b829-5bc216780669-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 04:48:47.904364 kubelet[3142]: I0621 04:48:47.904240 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5d880f-272e-4717-b882-56c5afda1f25-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc5d880f-272e-4717-b882-56c5afda1f25" (UID: "fc5d880f-272e-4717-b882-56c5afda1f25"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 04:48:47.904364 kubelet[3142]: I0621 04:48:47.904272 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.904364 kubelet[3142]: I0621 04:48:47.904284 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.904364 kubelet[3142]: I0621 04:48:47.904296 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 04:48:47.907783 kubelet[3142]: I0621 04:48:47.907749 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 04:48:47.908074 kubelet[3142]: I0621 04:48:47.907936 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5d880f-272e-4717-b882-56c5afda1f25-kube-api-access-z8wcf" (OuterVolumeSpecName: "kube-api-access-z8wcf") pod "fc5d880f-272e-4717-b882-56c5afda1f25" (UID: "fc5d880f-272e-4717-b882-56c5afda1f25"). InnerVolumeSpecName "kube-api-access-z8wcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 04:48:47.908227 kubelet[3142]: I0621 04:48:47.908214 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dcc0aaf-8c88-43f6-b829-5bc216780669-kube-api-access-9p2j9" (OuterVolumeSpecName: "kube-api-access-9p2j9") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). InnerVolumeSpecName "kube-api-access-9p2j9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 04:48:47.908458 kubelet[3142]: I0621 04:48:47.908439 3142 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dcc0aaf-8c88-43f6-b829-5bc216780669-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1dcc0aaf-8c88-43f6-b829-5bc216780669" (UID: "1dcc0aaf-8c88-43f6-b829-5bc216780669"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 21 04:48:48.002145 kubelet[3142]: I0621 04:48:48.002123 3142 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-bpf-maps\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002145 kubelet[3142]: I0621 04:48:48.002148 3142 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-config-path\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002240 kubelet[3142]: I0621 04:48:48.002157 3142 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1dcc0aaf-8c88-43f6-b829-5bc216780669-clustermesh-secrets\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002240 kubelet[3142]: I0621 04:48:48.002164 3142 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-cgroup\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002240 kubelet[3142]: I0621 04:48:48.002172 3142 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-lib-modules\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002240 kubelet[3142]: I0621 04:48:48.002178 3142 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-host-proc-sys-kernel\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002240 kubelet[3142]: I0621 04:48:48.002186 3142 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9p2j9\" (UniqueName: \"kubernetes.io/projected/1dcc0aaf-8c88-43f6-b829-5bc216780669-kube-api-access-9p2j9\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002240 kubelet[3142]: I0621 04:48:48.002193 3142 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1dcc0aaf-8c88-43f6-b829-5bc216780669-hubble-tls\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002240 kubelet[3142]: I0621 04:48:48.002200 3142 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-hostproc\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002240 kubelet[3142]: I0621 04:48:48.002208 3142 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-xtables-lock\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002432 kubelet[3142]: I0621 04:48:48.002215 3142 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cni-path\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002432 kubelet[3142]: I0621 04:48:48.002224 3142 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-cilium-run\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002432 kubelet[3142]: I0621 
04:48:48.002232 3142 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dcc0aaf-8c88-43f6-b829-5bc216780669-etc-cni-netd\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002432 kubelet[3142]: I0621 04:48:48.002240 3142 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5d880f-272e-4717-b882-56c5afda1f25-cilium-config-path\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.002432 kubelet[3142]: I0621 04:48:48.002249 3142 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z8wcf\" (UniqueName: \"kubernetes.io/projected/fc5d880f-272e-4717-b882-56c5afda1f25-kube-api-access-z8wcf\") on node \"ci-4372.0.0-a-59b94489dc\" DevicePath \"\"" Jun 21 04:48:48.546190 kubelet[3142]: I0621 04:48:48.546084 3142 scope.go:117] "RemoveContainer" containerID="526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592" Jun 21 04:48:48.548969 containerd[1720]: time="2025-06-21T04:48:48.548895274Z" level=info msg="RemoveContainer for \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\"" Jun 21 04:48:48.557116 systemd[1]: Removed slice kubepods-besteffort-podfc5d880f_272e_4717_b882_56c5afda1f25.slice - libcontainer container kubepods-besteffort-podfc5d880f_272e_4717_b882_56c5afda1f25.slice. Jun 21 04:48:48.562379 systemd[1]: Removed slice kubepods-burstable-pod1dcc0aaf_8c88_43f6_b829_5bc216780669.slice - libcontainer container kubepods-burstable-pod1dcc0aaf_8c88_43f6_b829_5bc216780669.slice. Jun 21 04:48:48.562479 systemd[1]: kubepods-burstable-pod1dcc0aaf_8c88_43f6_b829_5bc216780669.slice: Consumed 4.699s CPU time, 124.1M memory peak, 128K read from disk, 13.3M written to disk. 
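The slice removal above reports the pod's accumulated accounting (4.699s CPU, 124.1M memory peak), which lines up with the cilium-agent container scope deactivated earlier in this section (4.639s CPU, 123.7M peak); the small remainder is presumably what the pod's earlier, short-lived containers consumed. Illustrative arithmetic only, using the figures quoted in the journal:

    # Illustrative only: compare the accounting systemd reported for the whole pod
    # slice against the cilium-agent container scope from earlier in this section.
    pod_slice_cpu_s   = 4.699   # kubepods-burstable-pod1dcc0aaf_... slice
    agent_scope_cpu_s = 4.639   # cri-containerd-a0833df9....scope
    print(f"{pod_slice_cpu_s - agent_scope_cpu_s:.3f}s")   # ~0.060s left for the rest of the pod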
Jun 21 04:48:48.565992 containerd[1720]: time="2025-06-21T04:48:48.565968328Z" level=info msg="RemoveContainer for \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" returns successfully" Jun 21 04:48:48.566194 kubelet[3142]: I0621 04:48:48.566176 3142 scope.go:117] "RemoveContainer" containerID="526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592" Jun 21 04:48:48.566376 containerd[1720]: time="2025-06-21T04:48:48.566339155Z" level=error msg="ContainerStatus for \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\": not found" Jun 21 04:48:48.566482 kubelet[3142]: E0621 04:48:48.566452 3142 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\": not found" containerID="526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592" Jun 21 04:48:48.566516 kubelet[3142]: I0621 04:48:48.566486 3142 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592"} err="failed to get container status \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\": rpc error: code = NotFound desc = an error occurred when try to find container \"526295be84f0ff1dfe2fc434401ed1cff32cbaa9449541c994096f92cc1c3592\": not found" Jun 21 04:48:48.566539 kubelet[3142]: I0621 04:48:48.566515 3142 scope.go:117] "RemoveContainer" containerID="a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2" Jun 21 04:48:48.567631 containerd[1720]: time="2025-06-21T04:48:48.567611106Z" level=info msg="RemoveContainer for \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\"" Jun 21 04:48:48.580272 containerd[1720]: time="2025-06-21T04:48:48.580238610Z" level=info msg="RemoveContainer for \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" returns successfully" Jun 21 04:48:48.580411 kubelet[3142]: I0621 04:48:48.580397 3142 scope.go:117] "RemoveContainer" containerID="7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968" Jun 21 04:48:48.581583 containerd[1720]: time="2025-06-21T04:48:48.581552124Z" level=info msg="RemoveContainer for \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\"" Jun 21 04:48:48.590422 containerd[1720]: time="2025-06-21T04:48:48.589973605Z" level=info msg="RemoveContainer for \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\" returns successfully" Jun 21 04:48:48.590561 kubelet[3142]: I0621 04:48:48.590546 3142 scope.go:117] "RemoveContainer" containerID="768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0" Jun 21 04:48:48.592768 containerd[1720]: time="2025-06-21T04:48:48.592744493Z" level=info msg="RemoveContainer for \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\"" Jun 21 04:48:48.602160 containerd[1720]: time="2025-06-21T04:48:48.602126344Z" level=info msg="RemoveContainer for \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\" returns successfully" Jun 21 04:48:48.602304 kubelet[3142]: I0621 04:48:48.602286 3142 scope.go:117] "RemoveContainer" containerID="2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b" Jun 21 04:48:48.603357 containerd[1720]: 
time="2025-06-21T04:48:48.603310770Z" level=info msg="RemoveContainer for \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\"" Jun 21 04:48:48.611146 containerd[1720]: time="2025-06-21T04:48:48.611113050Z" level=info msg="RemoveContainer for \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\" returns successfully" Jun 21 04:48:48.611276 kubelet[3142]: I0621 04:48:48.611260 3142 scope.go:117] "RemoveContainer" containerID="f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6" Jun 21 04:48:48.612392 containerd[1720]: time="2025-06-21T04:48:48.612356507Z" level=info msg="RemoveContainer for \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\"" Jun 21 04:48:48.620662 containerd[1720]: time="2025-06-21T04:48:48.620629422Z" level=info msg="RemoveContainer for \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\" returns successfully" Jun 21 04:48:48.620801 kubelet[3142]: I0621 04:48:48.620778 3142 scope.go:117] "RemoveContainer" containerID="a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2" Jun 21 04:48:48.621102 kubelet[3142]: E0621 04:48:48.620992 3142 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\": not found" containerID="a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2" Jun 21 04:48:48.621102 kubelet[3142]: I0621 04:48:48.621012 3142 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2"} err="failed to get container status \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\": not found" Jun 21 04:48:48.621102 kubelet[3142]: I0621 04:48:48.621028 3142 scope.go:117] "RemoveContainer" containerID="7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968" Jun 21 04:48:48.621171 containerd[1720]: time="2025-06-21T04:48:48.620908936Z" level=error msg="ContainerStatus for \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0833df92c88434b88a06a4333cc34a4712ba3ff92b9133bbd456e8bb1156bc2\": not found" Jun 21 04:48:48.621171 containerd[1720]: time="2025-06-21T04:48:48.621148754Z" level=error msg="ContainerStatus for \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\": not found" Jun 21 04:48:48.621225 kubelet[3142]: E0621 04:48:48.621216 3142 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\": not found" containerID="7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968" Jun 21 04:48:48.621247 kubelet[3142]: I0621 04:48:48.621231 3142 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968"} err="failed to get container status 
\"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\": rpc error: code = NotFound desc = an error occurred when try to find container \"7eea21645a87145c4ef7bf39d3892cd87225e42f0e9d75aa729908fd05d58968\": not found" Jun 21 04:48:48.621247 kubelet[3142]: I0621 04:48:48.621243 3142 scope.go:117] "RemoveContainer" containerID="768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0" Jun 21 04:48:48.621389 containerd[1720]: time="2025-06-21T04:48:48.621365999Z" level=error msg="ContainerStatus for \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\": not found" Jun 21 04:48:48.621466 kubelet[3142]: E0621 04:48:48.621438 3142 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\": not found" containerID="768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0" Jun 21 04:48:48.621504 kubelet[3142]: I0621 04:48:48.621466 3142 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0"} err="failed to get container status \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\": rpc error: code = NotFound desc = an error occurred when try to find container \"768dd57fb037a4ca88d9d58ecc7be3737359e19555954bb2f489f1bbcaeb6ae0\": not found" Jun 21 04:48:48.621504 kubelet[3142]: I0621 04:48:48.621478 3142 scope.go:117] "RemoveContainer" containerID="2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b" Jun 21 04:48:48.621614 containerd[1720]: time="2025-06-21T04:48:48.621583991Z" level=error msg="ContainerStatus for \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\": not found" Jun 21 04:48:48.621672 kubelet[3142]: E0621 04:48:48.621663 3142 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\": not found" containerID="2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b" Jun 21 04:48:48.621700 kubelet[3142]: I0621 04:48:48.621677 3142 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b"} err="failed to get container status \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e892386ae44ad94b532debb8af3b4a7a30e2d2ad6258423b07f89e1fa14895b\": not found" Jun 21 04:48:48.621700 kubelet[3142]: I0621 04:48:48.621689 3142 scope.go:117] "RemoveContainer" containerID="f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6" Jun 21 04:48:48.621800 containerd[1720]: time="2025-06-21T04:48:48.621785337Z" level=error msg="ContainerStatus for \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\": not found" Jun 21 04:48:48.621888 kubelet[3142]: E0621 04:48:48.621861 3142 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\": not found" containerID="f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6" Jun 21 04:48:48.621945 kubelet[3142]: I0621 04:48:48.621886 3142 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6"} err="failed to get container status \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"f44accccc74fa8cecd6a10397040bb7746cbe09519d00346d632a7a43e13a6d6\": not found" Jun 21 04:48:48.670474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8923e0aa5ed065f91a6e8d4aa7e94c9a83f1ecfafc2b0708531bc8e55f5e5a5e-shm.mount: Deactivated successfully. Jun 21 04:48:48.670750 systemd[1]: var-lib-kubelet-pods-fc5d880f\x2d272e\x2d4717\x2db882\x2d56c5afda1f25-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz8wcf.mount: Deactivated successfully. Jun 21 04:48:48.670806 systemd[1]: var-lib-kubelet-pods-1dcc0aaf\x2d8c88\x2d43f6\x2db829\x2d5bc216780669-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9p2j9.mount: Deactivated successfully. Jun 21 04:48:48.670854 systemd[1]: var-lib-kubelet-pods-1dcc0aaf\x2d8c88\x2d43f6\x2db829\x2d5bc216780669-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 21 04:48:48.670903 systemd[1]: var-lib-kubelet-pods-1dcc0aaf\x2d8c88\x2d43f6\x2db829\x2d5bc216780669-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 21 04:48:49.523261 kubelet[3142]: I0621 04:48:49.523233 3142 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dcc0aaf-8c88-43f6-b829-5bc216780669" path="/var/lib/kubelet/pods/1dcc0aaf-8c88-43f6-b829-5bc216780669/volumes" Jun 21 04:48:49.523629 kubelet[3142]: I0621 04:48:49.523614 3142 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5d880f-272e-4717-b882-56c5afda1f25" path="/var/lib/kubelet/pods/fc5d880f-272e-4717-b882-56c5afda1f25/volumes" Jun 21 04:48:49.687459 sshd[4652]: Connection closed by 10.200.16.10 port 56136 Jun 21 04:48:49.688047 sshd-session[4650]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:49.690967 systemd[1]: sshd@20-10.200.8.44:22-10.200.16.10:56136.service: Deactivated successfully. Jun 21 04:48:49.692626 systemd[1]: session-23.scope: Deactivated successfully. Jun 21 04:48:49.693827 systemd-logind[1698]: Session 23 logged out. Waiting for processes to exit. Jun 21 04:48:49.695003 systemd-logind[1698]: Removed session 23. Jun 21 04:48:49.798081 systemd[1]: Started sshd@21-10.200.8.44:22-10.200.16.10:45544.service - OpenSSH per-connection server daemon (10.200.16.10:45544). Jun 21 04:48:50.428846 sshd[4808]: Accepted publickey for core from 10.200.16.10 port 45544 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:50.430136 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:50.435557 systemd-logind[1698]: New session 24 of user core. Jun 21 04:48:50.440676 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 21 04:48:51.249226 systemd[1]: Created slice kubepods-burstable-podd88eb044_7091_401a_aa18_26a988bd9ee0.slice - libcontainer container kubepods-burstable-podd88eb044_7091_401a_aa18_26a988bd9ee0.slice. Jun 21 04:48:51.318569 kubelet[3142]: I0621 04:48:51.318545 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-cilium-run\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.318569 kubelet[3142]: I0621 04:48:51.318572 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-cilium-cgroup\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.318569 kubelet[3142]: I0621 04:48:51.318590 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d88eb044-7091-401a-aa18-26a988bd9ee0-clustermesh-secrets\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.318979 kubelet[3142]: I0621 04:48:51.318627 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-hostproc\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.318979 kubelet[3142]: I0621 04:48:51.318643 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d88eb044-7091-401a-aa18-26a988bd9ee0-cilium-config-path\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.318979 kubelet[3142]: I0621 04:48:51.318660 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-xtables-lock\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.318979 kubelet[3142]: I0621 04:48:51.318675 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-host-proc-sys-net\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.318979 kubelet[3142]: I0621 04:48:51.318691 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-etc-cni-netd\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.318979 kubelet[3142]: I0621 04:48:51.318707 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d88eb044-7091-401a-aa18-26a988bd9ee0-cilium-ipsec-secrets\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " 
pod="kube-system/cilium-bc24b" Jun 21 04:48:51.319064 kubelet[3142]: I0621 04:48:51.318722 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-bpf-maps\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.319064 kubelet[3142]: I0621 04:48:51.318736 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-cni-path\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.319064 kubelet[3142]: I0621 04:48:51.318764 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-host-proc-sys-kernel\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.319064 kubelet[3142]: I0621 04:48:51.318788 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577wz\" (UniqueName: \"kubernetes.io/projected/d88eb044-7091-401a-aa18-26a988bd9ee0-kube-api-access-577wz\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.319064 kubelet[3142]: I0621 04:48:51.318812 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d88eb044-7091-401a-aa18-26a988bd9ee0-lib-modules\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.319064 kubelet[3142]: I0621 04:48:51.318834 3142 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d88eb044-7091-401a-aa18-26a988bd9ee0-hubble-tls\") pod \"cilium-bc24b\" (UID: \"d88eb044-7091-401a-aa18-26a988bd9ee0\") " pod="kube-system/cilium-bc24b" Jun 21 04:48:51.339317 sshd[4810]: Connection closed by 10.200.16.10 port 45544 Jun 21 04:48:51.339717 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:51.342556 systemd[1]: sshd@21-10.200.8.44:22-10.200.16.10:45544.service: Deactivated successfully. Jun 21 04:48:51.344023 systemd[1]: session-24.scope: Deactivated successfully. Jun 21 04:48:51.344656 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit. Jun 21 04:48:51.345886 systemd-logind[1698]: Removed session 24. Jun 21 04:48:51.450643 systemd[1]: Started sshd@22-10.200.8.44:22-10.200.16.10:45548.service - OpenSSH per-connection server daemon (10.200.16.10:45548). 
Jun 21 04:48:51.553750 containerd[1720]: time="2025-06-21T04:48:51.553507906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bc24b,Uid:d88eb044-7091-401a-aa18-26a988bd9ee0,Namespace:kube-system,Attempt:0,}" Jun 21 04:48:51.577624 kubelet[3142]: E0621 04:48:51.577589 3142 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 04:48:51.585930 containerd[1720]: time="2025-06-21T04:48:51.585666047Z" level=info msg="connecting to shim eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c" address="unix:///run/containerd/s/ea3ca61aa4ff2780305541aa56f5a3b74f682c37b94d104f04a7b87561c73a94" namespace=k8s.io protocol=ttrpc version=3 Jun 21 04:48:51.606453 systemd[1]: Started cri-containerd-eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c.scope - libcontainer container eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c. Jun 21 04:48:51.629730 containerd[1720]: time="2025-06-21T04:48:51.629707551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bc24b,Uid:d88eb044-7091-401a-aa18-26a988bd9ee0,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\"" Jun 21 04:48:51.637954 containerd[1720]: time="2025-06-21T04:48:51.637916679Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 04:48:51.653290 containerd[1720]: time="2025-06-21T04:48:51.653268883Z" level=info msg="Container a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:51.670021 containerd[1720]: time="2025-06-21T04:48:51.669979939Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1\"" Jun 21 04:48:51.670797 containerd[1720]: time="2025-06-21T04:48:51.670778551Z" level=info msg="StartContainer for \"a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1\"" Jun 21 04:48:51.672450 containerd[1720]: time="2025-06-21T04:48:51.672427399Z" level=info msg="connecting to shim a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1" address="unix:///run/containerd/s/ea3ca61aa4ff2780305541aa56f5a3b74f682c37b94d104f04a7b87561c73a94" protocol=ttrpc version=3 Jun 21 04:48:51.690471 systemd[1]: Started cri-containerd-a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1.scope - libcontainer container a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1. Jun 21 04:48:51.713361 containerd[1720]: time="2025-06-21T04:48:51.713271396Z" level=info msg="StartContainer for \"a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1\" returns successfully" Jun 21 04:48:51.717423 systemd[1]: cri-containerd-a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1.scope: Deactivated successfully. 
Jun 21 04:48:51.719367 containerd[1720]: time="2025-06-21T04:48:51.719285509Z" level=info msg="received exit event container_id:\"a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1\" id:\"a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1\" pid:4884 exited_at:{seconds:1750481331 nanos:719115745}" Jun 21 04:48:51.719367 containerd[1720]: time="2025-06-21T04:48:51.719325266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1\" id:\"a5dd76dcc33f387598647a66484069212016c854c13c820187195720c71054c1\" pid:4884 exited_at:{seconds:1750481331 nanos:719115745}" Jun 21 04:48:52.083139 sshd[4825]: Accepted publickey for core from 10.200.16.10 port 45548 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:52.084083 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:52.087537 systemd-logind[1698]: New session 25 of user core. Jun 21 04:48:52.093437 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 21 04:48:52.527071 sshd[4917]: Connection closed by 10.200.16.10 port 45548 Jun 21 04:48:52.527457 sshd-session[4825]: pam_unix(sshd:session): session closed for user core Jun 21 04:48:52.529402 systemd[1]: sshd@22-10.200.8.44:22-10.200.16.10:45548.service: Deactivated successfully. Jun 21 04:48:52.530765 systemd[1]: session-25.scope: Deactivated successfully. Jun 21 04:48:52.531946 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit. Jun 21 04:48:52.532885 systemd-logind[1698]: Removed session 25. Jun 21 04:48:52.574995 containerd[1720]: time="2025-06-21T04:48:52.574842764Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 04:48:52.591891 containerd[1720]: time="2025-06-21T04:48:52.591294997Z" level=info msg="Container 00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:52.602847 containerd[1720]: time="2025-06-21T04:48:52.602825890Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186\"" Jun 21 04:48:52.603211 containerd[1720]: time="2025-06-21T04:48:52.603121448Z" level=info msg="StartContainer for \"00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186\"" Jun 21 04:48:52.603961 containerd[1720]: time="2025-06-21T04:48:52.603918408Z" level=info msg="connecting to shim 00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186" address="unix:///run/containerd/s/ea3ca61aa4ff2780305541aa56f5a3b74f682c37b94d104f04a7b87561c73a94" protocol=ttrpc version=3 Jun 21 04:48:52.621467 systemd[1]: Started cri-containerd-00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186.scope - libcontainer container 00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186. Jun 21 04:48:52.635922 systemd[1]: Started sshd@23-10.200.8.44:22-10.200.16.10:45550.service - OpenSSH per-connection server daemon (10.200.16.10:45550). Jun 21 04:48:52.656851 systemd[1]: cri-containerd-00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186.scope: Deactivated successfully. 
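The exited_at field in the TaskExit events above is a protobuf timestamp split into seconds and nanos. Converting it with time.Unix lines it up with the journal timestamp at the head of the same entry; a minimal check using the mount-cgroup exit recorded above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the mount-cgroup TaskExit event above: seconds:1750481331 nanos:719115745
	exitedAt := time.Unix(1750481331, 719115745).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-06-21T04:48:51.719115745Z
}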
Jun 21 04:48:52.657983 containerd[1720]: time="2025-06-21T04:48:52.657953876Z" level=info msg="received exit event container_id:\"00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186\" id:\"00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186\" pid:4936 exited_at:{seconds:1750481332 nanos:657780643}" Jun 21 04:48:52.658893 containerd[1720]: time="2025-06-21T04:48:52.658869252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186\" id:\"00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186\" pid:4936 exited_at:{seconds:1750481332 nanos:657780643}" Jun 21 04:48:52.665102 containerd[1720]: time="2025-06-21T04:48:52.665083596Z" level=info msg="StartContainer for \"00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186\" returns successfully" Jun 21 04:48:52.673484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00f15a29d0e7d5d6b16f6e07d20a8e404952cb80375fbf5727f62d97d18e0186-rootfs.mount: Deactivated successfully. Jun 21 04:48:53.271287 sshd[4942]: Accepted publickey for core from 10.200.16.10 port 45550 ssh2: RSA SHA256:4oKQ9IZ/Yu3eC3caPZbT837fBtOzsHYOJO+UUGIDRpc Jun 21 04:48:53.272202 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 04:48:53.276216 systemd-logind[1698]: New session 26 of user core. Jun 21 04:48:53.280473 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 21 04:48:53.581738 containerd[1720]: time="2025-06-21T04:48:53.581671650Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 04:48:53.609090 containerd[1720]: time="2025-06-21T04:48:53.608020128Z" level=info msg="Container b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:53.629468 containerd[1720]: time="2025-06-21T04:48:53.629261494Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66\"" Jun 21 04:48:53.632223 containerd[1720]: time="2025-06-21T04:48:53.632202987Z" level=info msg="StartContainer for \"b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66\"" Jun 21 04:48:53.640820 containerd[1720]: time="2025-06-21T04:48:53.638640237Z" level=info msg="connecting to shim b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66" address="unix:///run/containerd/s/ea3ca61aa4ff2780305541aa56f5a3b74f682c37b94d104f04a7b87561c73a94" protocol=ttrpc version=3 Jun 21 04:48:53.663496 systemd[1]: Started cri-containerd-b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66.scope - libcontainer container b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66. Jun 21 04:48:53.703909 systemd[1]: cri-containerd-b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66.scope: Deactivated successfully. 
Jun 21 04:48:53.708441 containerd[1720]: time="2025-06-21T04:48:53.708386487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66\" id:\"b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66\" pid:4992 exited_at:{seconds:1750481333 nanos:707237751}" Jun 21 04:48:53.708441 containerd[1720]: time="2025-06-21T04:48:53.708422596Z" level=info msg="StartContainer for \"b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66\" returns successfully" Jun 21 04:48:53.708744 containerd[1720]: time="2025-06-21T04:48:53.708429573Z" level=info msg="received exit event container_id:\"b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66\" id:\"b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66\" pid:4992 exited_at:{seconds:1750481333 nanos:707237751}" Jun 21 04:48:53.722284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b112cd59ebd447deb68da5957f8547b87be6220e804661a239b5af7fb6254f66-rootfs.mount: Deactivated successfully. Jun 21 04:48:54.582284 containerd[1720]: time="2025-06-21T04:48:54.582254158Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 04:48:54.599872 containerd[1720]: time="2025-06-21T04:48:54.599465532Z" level=info msg="Container f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:54.611147 containerd[1720]: time="2025-06-21T04:48:54.611123860Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224\"" Jun 21 04:48:54.611532 containerd[1720]: time="2025-06-21T04:48:54.611492244Z" level=info msg="StartContainer for \"f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224\"" Jun 21 04:48:54.612145 containerd[1720]: time="2025-06-21T04:48:54.612124201Z" level=info msg="connecting to shim f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224" address="unix:///run/containerd/s/ea3ca61aa4ff2780305541aa56f5a3b74f682c37b94d104f04a7b87561c73a94" protocol=ttrpc version=3 Jun 21 04:48:54.638466 systemd[1]: Started cri-containerd-f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224.scope - libcontainer container f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224. Jun 21 04:48:54.657748 systemd[1]: cri-containerd-f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224.scope: Deactivated successfully. 
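Every "connecting to shim" entry for this pod, from mount-cgroup through the later containers, uses the same unix socket address, so one containerd shim serves the whole cilium-bc24b sandbox. A small stdlib probe of that socket is sketched below; the path is copied from the log, exists only on that node while the sandbox is running, and the dial checks connectivity only (the real protocol spoken on it is ttrpc).

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the "connecting to shim" lines above; illustrative probe only.
	const shim = "/run/containerd/s/ea3ca61aa4ff2780305541aa56f5a3b74f682c37b94d104f04a7b87561c73a94"
	conn, err := net.DialTimeout("unix", shim, time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("shim socket is accepting connections")
}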
Jun 21 04:48:54.658593 containerd[1720]: time="2025-06-21T04:48:54.658570378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224\" id:\"f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224\" pid:5031 exited_at:{seconds:1750481334 nanos:658185037}" Jun 21 04:48:54.661687 containerd[1720]: time="2025-06-21T04:48:54.661655341Z" level=info msg="received exit event container_id:\"f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224\" id:\"f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224\" pid:5031 exited_at:{seconds:1750481334 nanos:658185037}" Jun 21 04:48:54.666575 containerd[1720]: time="2025-06-21T04:48:54.666557617Z" level=info msg="StartContainer for \"f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224\" returns successfully" Jun 21 04:48:54.675072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7d729e40151b044d2bf1cbe5d02f939184d5d9e159f0b462c43097fa7f10224-rootfs.mount: Deactivated successfully. Jun 21 04:48:54.798834 kubelet[3142]: I0621 04:48:54.798800 3142 setters.go:618] "Node became not ready" node="ci-4372.0.0-a-59b94489dc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-21T04:48:54Z","lastTransitionTime":"2025-06-21T04:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 21 04:48:55.588367 containerd[1720]: time="2025-06-21T04:48:55.586511065Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 04:48:55.605907 containerd[1720]: time="2025-06-21T04:48:55.605401721Z" level=info msg="Container c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163: CDI devices from CRI Config.CDIDevices: []" Jun 21 04:48:55.618883 containerd[1720]: time="2025-06-21T04:48:55.618862150Z" level=info msg="CreateContainer within sandbox \"eb3e849df74934e0f82583dbd3c3a27be3e32dc8a3949a3149bc702e260c1d6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163\"" Jun 21 04:48:55.619252 containerd[1720]: time="2025-06-21T04:48:55.619163073Z" level=info msg="StartContainer for \"c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163\"" Jun 21 04:48:55.620391 containerd[1720]: time="2025-06-21T04:48:55.620331404Z" level=info msg="connecting to shim c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163" address="unix:///run/containerd/s/ea3ca61aa4ff2780305541aa56f5a3b74f682c37b94d104f04a7b87561c73a94" protocol=ttrpc version=3 Jun 21 04:48:55.638483 systemd[1]: Started cri-containerd-c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163.scope - libcontainer container c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163. 
Jun 21 04:48:55.663796 containerd[1720]: time="2025-06-21T04:48:55.663767754Z" level=info msg="StartContainer for \"c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163\" returns successfully" Jun 21 04:48:55.711596 containerd[1720]: time="2025-06-21T04:48:55.711567044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163\" id:\"6a7d5b6520f09afe36084791e5933cee8b284d7a0e21ad4735fd7bec45ebe695\" pid:5095 exited_at:{seconds:1750481335 nanos:710699326}" Jun 21 04:48:55.960369 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Jun 21 04:48:57.840885 containerd[1720]: time="2025-06-21T04:48:57.840812194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163\" id:\"908d5a16b48d11f2aa9a2174ae7eda508e9a5c71454e6917c83c81daed297beb\" pid:5402 exit_status:1 exited_at:{seconds:1750481337 nanos:840397837}" Jun 21 04:48:58.337901 systemd-networkd[1357]: lxc_health: Link UP Jun 21 04:48:58.341087 systemd-networkd[1357]: lxc_health: Gained carrier Jun 21 04:48:59.578445 kubelet[3142]: I0621 04:48:59.578389 3142 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bc24b" podStartSLOduration=8.578375487 podStartE2EDuration="8.578375487s" podCreationTimestamp="2025-06-21 04:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 04:48:56.604063932 +0000 UTC m=+155.169986461" watchObservedRunningTime="2025-06-21 04:48:59.578375487 +0000 UTC m=+158.144298018" Jun 21 04:48:59.596532 systemd-networkd[1357]: lxc_health: Gained IPv6LL Jun 21 04:49:00.000444 containerd[1720]: time="2025-06-21T04:49:00.000408902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163\" id:\"2a11d757c8c9a95687b83b85e2e4de5c9c9c3e5782ada16d37cf3d47b3d2344a\" pid:5623 exited_at:{seconds:1750481340 nanos:85851}" Jun 21 04:49:02.130863 containerd[1720]: time="2025-06-21T04:49:02.130819545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163\" id:\"0efe2269230b204f793bcceeab49650028ce8840354797afadcdb70e8e3dbf17\" pid:5658 exited_at:{seconds:1750481342 nanos:130602340}" Jun 21 04:49:04.205132 containerd[1720]: time="2025-06-21T04:49:04.205046576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4ba3e1cf299c3b7b76f0c3df349eff32a9dfaa56c7bfa2c353ae645c5793163\" id:\"d8f67a1f417c55435f81bb96ac9baf737802508d13a9cc6f72802b8529dd2bb4\" pid:5680 exited_at:{seconds:1750481344 nanos:204631243}" Jun 21 04:49:04.310046 sshd[4970]: Connection closed by 10.200.16.10 port 45550 Jun 21 04:49:04.310563 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Jun 21 04:49:04.313698 systemd[1]: sshd@23-10.200.8.44:22-10.200.16.10:45550.service: Deactivated successfully. Jun 21 04:49:04.315473 systemd[1]: session-26.scope: Deactivated successfully. Jun 21 04:49:04.316255 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit. Jun 21 04:49:04.317577 systemd-logind[1698]: Removed session 26.
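The pod_startup_latency_tracker entry above reports podStartSLOduration as observedRunningTime minus podCreationTimestamp (the creation stamp carries whole seconds only, and the pull timestamps are zero because no image pull was observed). The arithmetic can be reproduced directly from the two timestamps in that line:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are accepted when parsing
	created, err := time.Parse(layout, "2025-06-21 04:48:51 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-06-21 04:48:59.578375487 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 8.578375487s, the podStartSLOduration reported above
}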