Aug 13 00:14:53.478632 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025 Aug 13 00:14:53.478648 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 13 00:14:53.478655 kernel: BIOS-provided physical RAM map: Aug 13 00:14:53.478661 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Aug 13 00:14:53.478666 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Aug 13 00:14:53.478670 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Aug 13 00:14:53.478676 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Aug 13 00:14:53.478681 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Aug 13 00:14:53.478685 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a70fff] usable Aug 13 00:14:53.478690 kernel: BIOS-e820: [mem 0x0000000081a71000-0x0000000081a71fff] ACPI NVS Aug 13 00:14:53.478695 kernel: BIOS-e820: [mem 0x0000000081a72000-0x0000000081a72fff] reserved Aug 13 00:14:53.478699 kernel: BIOS-e820: [mem 0x0000000081a73000-0x000000008afcdfff] usable Aug 13 00:14:53.478705 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved Aug 13 00:14:53.478710 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable Aug 13 00:14:53.478716 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS Aug 13 00:14:53.478721 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved Aug 13 00:14:53.478728 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Aug 13 00:14:53.478733 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Aug 13 00:14:53.478738 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 13 00:14:53.478743 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Aug 13 00:14:53.478749 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Aug 13 00:14:53.478754 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Aug 13 00:14:53.478759 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Aug 13 00:14:53.478764 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Aug 13 00:14:53.478770 kernel: NX (Execute Disable) protection: active Aug 13 00:14:53.478775 kernel: APIC: Static calls initialized Aug 13 00:14:53.478780 kernel: SMBIOS 3.2.1 present. 
Aug 13 00:14:53.478785 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024 Aug 13 00:14:53.478792 kernel: tsc: Detected 3400.000 MHz processor Aug 13 00:14:53.478797 kernel: tsc: Detected 3399.906 MHz TSC Aug 13 00:14:53.478802 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:14:53.478808 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:14:53.478814 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Aug 13 00:14:53.478819 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Aug 13 00:14:53.478825 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:14:53.478830 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Aug 13 00:14:53.478835 kernel: Using GB pages for direct mapping Aug 13 00:14:53.478841 kernel: ACPI: Early table checksum verification disabled Aug 13 00:14:53.478847 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Aug 13 00:14:53.478853 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Aug 13 00:14:53.478861 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013) Aug 13 00:14:53.478866 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Aug 13 00:14:53.478872 kernel: ACPI: FACS 0x000000008C66DF80 000040 Aug 13 00:14:53.478878 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013) Aug 13 00:14:53.478885 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013) Aug 13 00:14:53.478890 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Aug 13 00:14:53.478896 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Aug 13 00:14:53.478902 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Aug 13 00:14:53.478907 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Aug 13 00:14:53.478913 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Aug 13 00:14:53.478919 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Aug 13 00:14:53.478925 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Aug 13 00:14:53.478931 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Aug 13 00:14:53.478937 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Aug 13 00:14:53.478942 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Aug 13 00:14:53.478948 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Aug 13 00:14:53.478953 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Aug 13 00:14:53.478959 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Aug 13 00:14:53.478965 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Aug 13 00:14:53.478971 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Aug 13 00:14:53.478977 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Aug 13 00:14:53.478983 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013) Aug 13 00:14:53.478988 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Aug 13 00:14:53.478994 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Aug 13 00:14:53.479000 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Aug 13 00:14:53.479006 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013) Aug 13 00:14:53.479011 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Aug 13 00:14:53.479017 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Aug 13 00:14:53.479024 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Aug 13 00:14:53.479029 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Aug 13 00:14:53.479035 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Aug 13 00:14:53.479041 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703] Aug 13 00:14:53.479046 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed] Aug 13 00:14:53.479052 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] Aug 13 00:14:53.479058 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833] Aug 13 00:14:53.479063 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b] Aug 13 00:14:53.479069 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b] Aug 13 00:14:53.479076 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b] Aug 13 00:14:53.479081 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0] Aug 13 00:14:53.479087 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3] Aug 13 00:14:53.479092 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d] Aug 13 00:14:53.479098 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba] Aug 13 00:14:53.479104 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7] Aug 13 00:14:53.479109 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5] Aug 13 00:14:53.479115 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e] Aug 13 00:14:53.479120 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1] Aug 13 00:14:53.479127 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b] Aug 13 00:14:53.479133 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d] Aug 13 00:14:53.479138 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041] Aug 13 00:14:53.479144 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b] Aug 13 00:14:53.479149 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3] Aug 13 00:14:53.479155 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e] Aug 13 00:14:53.479160 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf] Aug 13 00:14:53.479166 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3] Aug 13 00:14:53.479171 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b] Aug 13 00:14:53.479177 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe] Aug 13 00:14:53.479184 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7] Aug 13 00:14:53.479189 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17] Aug 13 00:14:53.479195 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47] Aug 13 00:14:53.479201 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77] Aug 13 00:14:53.479206 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3] Aug 13 00:14:53.479212 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359] Aug 13 00:14:53.479217 kernel: No NUMA configuration found Aug 13 00:14:53.479223 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Aug 13 00:14:53.479229 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Aug 13 00:14:53.479236 kernel: Zone ranges: Aug 13 00:14:53.479241 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:14:53.479247 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 
00:14:53.479252 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Aug 13 00:14:53.479258 kernel: Movable zone start for each node Aug 13 00:14:53.479264 kernel: Early memory node ranges Aug 13 00:14:53.479269 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Aug 13 00:14:53.479275 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Aug 13 00:14:53.479281 kernel: node 0: [mem 0x0000000040400000-0x0000000081a70fff] Aug 13 00:14:53.479287 kernel: node 0: [mem 0x0000000081a73000-0x000000008afcdfff] Aug 13 00:14:53.479293 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] Aug 13 00:14:53.479298 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Aug 13 00:14:53.479304 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Aug 13 00:14:53.479314 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Aug 13 00:14:53.479321 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:14:53.479327 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Aug 13 00:14:53.479333 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Aug 13 00:14:53.479340 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Aug 13 00:14:53.479346 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Aug 13 00:14:53.479352 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges Aug 13 00:14:53.479358 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Aug 13 00:14:53.479364 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Aug 13 00:14:53.479370 kernel: ACPI: PM-Timer IO Port: 0x1808 Aug 13 00:14:53.479376 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Aug 13 00:14:53.479382 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Aug 13 00:14:53.479388 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Aug 13 00:14:53.479395 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Aug 13 00:14:53.479401 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Aug 13 00:14:53.479407 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Aug 13 00:14:53.479413 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Aug 13 00:14:53.479419 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Aug 13 00:14:53.479425 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Aug 13 00:14:53.479431 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Aug 13 00:14:53.479437 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Aug 13 00:14:53.479443 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Aug 13 00:14:53.479449 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Aug 13 00:14:53.479456 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Aug 13 00:14:53.479465 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Aug 13 00:14:53.479472 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Aug 13 00:14:53.479478 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Aug 13 00:14:53.479484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:14:53.479490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:14:53.479496 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:14:53.479502 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:14:53.479508 kernel: TSC deadline timer available Aug 13 00:14:53.479516 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Aug 13 00:14:53.479522 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Aug 13 00:14:53.479528 kernel: Booting paravirtualized kernel on bare hardware Aug 13 00:14:53.479534 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:14:53.479540 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Aug 13 00:14:53.479546 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Aug 13 00:14:53.479552 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Aug 13 00:14:53.479558 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Aug 13 00:14:53.479565 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 13 00:14:53.479572 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:14:53.479578 kernel: random: crng init done Aug 13 00:14:53.479584 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Aug 13 00:14:53.479590 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Aug 13 00:14:53.479596 kernel: Fallback order for Node 0: 0 Aug 13 00:14:53.479602 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416 Aug 13 00:14:53.479608 kernel: Policy zone: Normal Aug 13 00:14:53.479615 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:14:53.479622 kernel: software IO TLB: area num 16. Aug 13 00:14:53.479628 kernel: Memory: 32718244K/33452984K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 734480K reserved, 0K cma-reserved) Aug 13 00:14:53.479634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Aug 13 00:14:53.479640 kernel: ftrace: allocating 37942 entries in 149 pages Aug 13 00:14:53.479646 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 00:14:53.479652 kernel: Dynamic Preempt: voluntary Aug 13 00:14:53.479658 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:14:53.479665 kernel: rcu: RCU event tracing is enabled. Aug 13 00:14:53.479671 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Aug 13 00:14:53.479678 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:14:53.479684 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:14:53.479690 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:14:53.479696 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:14:53.479702 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Aug 13 00:14:53.479708 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Aug 13 00:14:53.479714 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 13 00:14:53.479720 kernel: Console: colour VGA+ 80x25 Aug 13 00:14:53.479726 kernel: printk: console [tty0] enabled Aug 13 00:14:53.479733 kernel: printk: console [ttyS1] enabled Aug 13 00:14:53.479739 kernel: ACPI: Core revision 20230628 Aug 13 00:14:53.479745 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Aug 13 00:14:53.479751 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:14:53.479758 kernel: DMAR: Host address width 39 Aug 13 00:14:53.479764 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Aug 13 00:14:53.479770 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Aug 13 00:14:53.479776 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff Aug 13 00:14:53.479782 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Aug 13 00:14:53.479789 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Aug 13 00:14:53.479795 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Aug 13 00:14:53.479801 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Aug 13 00:14:53.479807 kernel: x2apic enabled Aug 13 00:14:53.479813 kernel: APIC: Switched APIC routing to: cluster x2apic Aug 13 00:14:53.479819 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 00:14:53.479825 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Aug 13 00:14:53.479832 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Aug 13 00:14:53.479838 kernel: CPU0: Thermal monitoring enabled (TM1) Aug 13 00:14:53.479845 kernel: process: using mwait in idle threads Aug 13 00:14:53.479851 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 13 00:14:53.479857 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Aug 13 00:14:53.479863 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:14:53.479869 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 00:14:53.479875 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 00:14:53.479881 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Aug 13 00:14:53.479887 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Aug 13 00:14:53.479893 kernel: RETBleed: Mitigation: Enhanced IBRS Aug 13 00:14:53.479900 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:14:53.479906 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 00:14:53.479912 kernel: TAA: Mitigation: TSX disabled Aug 13 00:14:53.479918 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Aug 13 00:14:53.479924 kernel: SRBDS: Mitigation: Microcode Aug 13 00:14:53.479930 kernel: GDS: Mitigation: Microcode Aug 13 00:14:53.479936 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:14:53.479942 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:14:53.479948 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:14:53.479955 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:14:53.479961 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Aug 13 00:14:53.479967 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Aug 13 00:14:53.479973 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 
13 00:14:53.479979 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Aug 13 00:14:53.479986 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Aug 13 00:14:53.479992 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Aug 13 00:14:53.479998 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:14:53.480004 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:14:53.480011 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 00:14:53.480017 kernel: landlock: Up and running. Aug 13 00:14:53.480023 kernel: SELinux: Initializing. Aug 13 00:14:53.480029 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:14:53.480035 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:14:53.480041 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Aug 13 00:14:53.480047 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Aug 13 00:14:53.480054 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Aug 13 00:14:53.480060 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Aug 13 00:14:53.480067 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Aug 13 00:14:53.480073 kernel: ... version: 4 Aug 13 00:14:53.480079 kernel: ... bit width: 48 Aug 13 00:14:53.480085 kernel: ... generic registers: 4 Aug 13 00:14:53.480091 kernel: ... value mask: 0000ffffffffffff Aug 13 00:14:53.480097 kernel: ... max period: 00007fffffffffff Aug 13 00:14:53.480103 kernel: ... fixed-purpose events: 3 Aug 13 00:14:53.480109 kernel: ... event mask: 000000070000000f Aug 13 00:14:53.480115 kernel: signal: max sigframe size: 2032 Aug 13 00:14:53.480122 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Aug 13 00:14:53.480128 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:14:53.480134 kernel: rcu: Max phase no-delay instances is 400. Aug 13 00:14:53.480140 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Aug 13 00:14:53.480146 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:14:53.480152 kernel: smpboot: x86: Booting SMP configuration: Aug 13 00:14:53.480159 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Aug 13 00:14:53.480165 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Aug 13 00:14:53.480172 kernel: smp: Brought up 1 node, 16 CPUs Aug 13 00:14:53.480178 kernel: smpboot: Max logical packages: 1 Aug 13 00:14:53.480184 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Aug 13 00:14:53.480190 kernel: devtmpfs: initialized Aug 13 00:14:53.480196 kernel: x86/mm: Memory block size: 128MB Aug 13 00:14:53.480202 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a71000-0x81a71fff] (4096 bytes) Aug 13 00:14:53.480208 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Aug 13 00:14:53.480215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:14:53.480221 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Aug 13 00:14:53.480228 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:14:53.480234 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:14:53.480240 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:14:53.480246 kernel: audit: type=2000 audit(1755044088.132:1): state=initialized audit_enabled=0 res=1 Aug 13 00:14:53.480252 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:14:53.480257 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:14:53.480263 kernel: cpuidle: using governor menu Aug 13 00:14:53.480269 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:14:53.480275 kernel: dca service started, version 1.12.1 Aug 13 00:14:53.480283 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Aug 13 00:14:53.480289 kernel: PCI: Using configuration type 1 for base access Aug 13 00:14:53.480295 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Aug 13 00:14:53.480301 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 00:14:53.480307 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:14:53.480313 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:14:53.480319 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:14:53.480325 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:14:53.480331 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:14:53.480338 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:14:53.480344 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:14:53.480350 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Aug 13 00:14:53.480356 kernel: ACPI: Dynamic OEM Table Load: Aug 13 00:14:53.480362 kernel: ACPI: SSDT 0xFFFF9AAE81AF5C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Aug 13 00:14:53.480368 kernel: ACPI: Dynamic OEM Table Load: Aug 13 00:14:53.480374 kernel: ACPI: SSDT 0xFFFF9AAE81AEB000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Aug 13 00:14:53.480380 kernel: ACPI: Dynamic OEM Table Load: Aug 13 00:14:53.480386 kernel: ACPI: SSDT 0xFFFF9AAE80247900 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Aug 13 00:14:53.480393 kernel: ACPI: Dynamic OEM Table Load: Aug 13 00:14:53.480399 kernel: ACPI: SSDT 0xFFFF9AAE81AED000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Aug 13 00:14:53.480405 kernel: ACPI: Dynamic OEM Table Load: Aug 13 00:14:53.480411 kernel: ACPI: SSDT 0xFFFF9AAE8012E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Aug 13 00:14:53.480417 kernel: ACPI: Dynamic OEM Table Load: Aug 13 00:14:53.480423 kernel: ACPI: SSDT 0xFFFF9AAE81AF0000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Aug 13 00:14:53.480429 kernel: ACPI: _OSC evaluated successfully for all CPUs Aug 13 00:14:53.480435 kernel: ACPI: Interpreter enabled Aug 13 00:14:53.480441 kernel: ACPI: PM: (supports S0 S5) Aug 13 00:14:53.480447 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:14:53.480454 kernel: HEST: Enabling Firmware First mode for corrected errors. Aug 13 00:14:53.480460 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Aug 13 00:14:53.480468 kernel: HEST: Table parsing has been initialized. Aug 13 00:14:53.480475 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Aug 13 00:14:53.480481 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:14:53.480487 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 13 00:14:53.480493 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Aug 13 00:14:53.480499 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Aug 13 00:14:53.480505 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Aug 13 00:14:53.480513 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Aug 13 00:14:53.480519 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Aug 13 00:14:53.480525 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Aug 13 00:14:53.480531 kernel: ACPI: \_TZ_.FN00: New power resource Aug 13 00:14:53.480537 kernel: ACPI: \_TZ_.FN01: New power resource Aug 13 00:14:53.480543 kernel: ACPI: \_TZ_.FN02: New power resource Aug 13 00:14:53.480549 kernel: ACPI: \_TZ_.FN03: New power resource Aug 13 00:14:53.480555 kernel: ACPI: \_TZ_.FN04: New power resource Aug 13 00:14:53.480561 kernel: ACPI: \PIN_: New power resource Aug 13 00:14:53.480568 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Aug 13 00:14:53.480651 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:14:53.480708 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Aug 13 00:14:53.480762 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Aug 13 00:14:53.480771 kernel: PCI host bridge to bus 0000:00 Aug 13 00:14:53.480827 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:14:53.480877 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:14:53.480928 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:14:53.480975 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Aug 13 00:14:53.481023 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Aug 13 00:14:53.481069 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Aug 13 00:14:53.481134 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Aug 13 00:14:53.481201 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Aug 13 00:14:53.481262 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.481321 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Aug 13 00:14:53.481379 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.481438 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Aug 13 00:14:53.481497 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Aug 13 00:14:53.481558 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Aug 13 00:14:53.481616 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Aug 13 00:14:53.481675 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Aug 13 00:14:53.481731 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Aug 13 00:14:53.481785 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Aug 13 00:14:53.481842 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Aug 13 00:14:53.481897 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Aug 13 00:14:53.481955 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Aug 13 00:14:53.482015 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Aug 13 00:14:53.482070 kernel: pci 0000:00:15.0: reg 
0x10: [mem 0x00000000-0x00000fff 64bit] Aug 13 00:14:53.482128 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Aug 13 00:14:53.482183 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Aug 13 00:14:53.482240 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Aug 13 00:14:53.482298 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Aug 13 00:14:53.482362 kernel: pci 0000:00:16.0: PME# supported from D3hot Aug 13 00:14:53.482420 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Aug 13 00:14:53.482486 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Aug 13 00:14:53.482542 kernel: pci 0000:00:16.1: PME# supported from D3hot Aug 13 00:14:53.482601 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Aug 13 00:14:53.482656 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Aug 13 00:14:53.482713 kernel: pci 0000:00:16.4: PME# supported from D3hot Aug 13 00:14:53.482772 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Aug 13 00:14:53.482826 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Aug 13 00:14:53.482881 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Aug 13 00:14:53.482934 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Aug 13 00:14:53.482989 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Aug 13 00:14:53.483043 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Aug 13 00:14:53.483099 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Aug 13 00:14:53.483153 kernel: pci 0000:00:17.0: PME# supported from D3hot Aug 13 00:14:53.483215 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Aug 13 00:14:53.483271 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.483330 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Aug 13 00:14:53.483389 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.483448 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Aug 13 00:14:53.483507 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.483566 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Aug 13 00:14:53.483621 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.483683 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Aug 13 00:14:53.483739 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.483798 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Aug 13 00:14:53.483853 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Aug 13 00:14:53.483911 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Aug 13 00:14:53.483973 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Aug 13 00:14:53.484030 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Aug 13 00:14:53.484085 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Aug 13 00:14:53.484142 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Aug 13 00:14:53.484197 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Aug 13 00:14:53.484252 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 00:14:53.484314 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Aug 13 00:14:53.484372 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Aug 13 00:14:53.484432 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Aug 13 00:14:53.484492 kernel: pci 0000:02:00.0: 
PME# supported from D3cold Aug 13 00:14:53.484547 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Aug 13 00:14:53.484605 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Aug 13 00:14:53.484668 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Aug 13 00:14:53.484725 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Aug 13 00:14:53.484782 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Aug 13 00:14:53.484841 kernel: pci 0000:02:00.1: PME# supported from D3cold Aug 13 00:14:53.484897 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Aug 13 00:14:53.484954 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Aug 13 00:14:53.485011 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Aug 13 00:14:53.485066 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Aug 13 00:14:53.485121 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 00:14:53.485177 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Aug 13 00:14:53.485241 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Aug 13 00:14:53.485299 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Aug 13 00:14:53.485356 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Aug 13 00:14:53.485411 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Aug 13 00:14:53.485471 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Aug 13 00:14:53.485527 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.485584 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Aug 13 00:14:53.485715 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Aug 13 00:14:53.485774 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Aug 13 00:14:53.485839 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Aug 13 00:14:53.485895 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Aug 13 00:14:53.485952 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Aug 13 00:14:53.486008 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Aug 13 00:14:53.486064 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Aug 13 00:14:53.486122 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Aug 13 00:14:53.486179 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Aug 13 00:14:53.486235 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Aug 13 00:14:53.486289 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Aug 13 00:14:53.486345 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Aug 13 00:14:53.486409 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Aug 13 00:14:53.486470 kernel: pci 0000:07:00.0: enabling Extended Tags Aug 13 00:14:53.486529 kernel: pci 0000:07:00.0: supports D1 D2 Aug 13 00:14:53.486587 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 00:14:53.486644 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Aug 13 00:14:53.486699 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Aug 13 00:14:53.486754 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Aug 13 00:14:53.486814 kernel: pci_bus 0000:08: extended config space not accessible Aug 13 00:14:53.486952 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Aug 13 00:14:53.487012 kernel: pci 0000:08:00.0: reg 0x10: [mem 
0x94000000-0x94ffffff] Aug 13 00:14:53.487070 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Aug 13 00:14:53.487133 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Aug 13 00:14:53.487193 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:14:53.487251 kernel: pci 0000:08:00.0: supports D1 D2 Aug 13 00:14:53.487310 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 00:14:53.487366 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Aug 13 00:14:53.487423 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Aug 13 00:14:53.487514 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Aug 13 00:14:53.487526 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Aug 13 00:14:53.487533 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Aug 13 00:14:53.487539 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Aug 13 00:14:53.487546 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Aug 13 00:14:53.487552 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Aug 13 00:14:53.487558 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Aug 13 00:14:53.487565 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Aug 13 00:14:53.487571 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Aug 13 00:14:53.487578 kernel: iommu: Default domain type: Translated Aug 13 00:14:53.487586 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:14:53.487592 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:14:53.487598 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:14:53.487605 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Aug 13 00:14:53.487611 kernel: e820: reserve RAM buffer [mem 0x81a71000-0x83ffffff] Aug 13 00:14:53.487617 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Aug 13 00:14:53.487623 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Aug 13 00:14:53.487630 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Aug 13 00:14:53.487636 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Aug 13 00:14:53.487695 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Aug 13 00:14:53.487753 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Aug 13 00:14:53.487812 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:14:53.487821 kernel: vgaarb: loaded Aug 13 00:14:53.487828 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Aug 13 00:14:53.487834 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Aug 13 00:14:53.487841 kernel: clocksource: Switched to clocksource tsc-early Aug 13 00:14:53.487847 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:14:53.487853 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:14:53.487862 kernel: pnp: PnP ACPI init Aug 13 00:14:53.487917 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Aug 13 00:14:53.487973 kernel: pnp 00:02: [dma 0 disabled] Aug 13 00:14:53.488030 kernel: pnp 00:03: [dma 0 disabled] Aug 13 00:14:53.488086 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Aug 13 00:14:53.488138 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Aug 13 00:14:53.488194 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Aug 13 00:14:53.488314 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Aug 13 00:14:53.488364 kernel: system 00:05: [mem 
0xfed19000-0xfed19fff] has been reserved Aug 13 00:14:53.488414 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Aug 13 00:14:53.488468 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Aug 13 00:14:53.488518 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Aug 13 00:14:53.488568 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Aug 13 00:14:53.488620 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Aug 13 00:14:53.488674 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Aug 13 00:14:53.488724 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Aug 13 00:14:53.488774 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Aug 13 00:14:53.488823 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Aug 13 00:14:53.488873 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Aug 13 00:14:53.488922 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Aug 13 00:14:53.488974 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Aug 13 00:14:53.489027 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Aug 13 00:14:53.489037 kernel: pnp: PnP ACPI: found 9 devices Aug 13 00:14:53.489044 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:14:53.489050 kernel: NET: Registered PF_INET protocol family Aug 13 00:14:53.489057 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:14:53.489063 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 00:14:53.489070 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:14:53.489078 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:14:53.489085 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 13 00:14:53.489092 kernel: TCP: Hash tables configured (established 262144 bind 65536) Aug 13 00:14:53.489098 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 00:14:53.489104 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 00:14:53.489111 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:14:53.489117 kernel: NET: Registered PF_XDP protocol family Aug 13 00:14:53.489172 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Aug 13 00:14:53.489231 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Aug 13 00:14:53.489285 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Aug 13 00:14:53.489342 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 00:14:53.489401 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Aug 13 00:14:53.489458 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Aug 13 00:14:53.489519 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Aug 13 00:14:53.489577 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Aug 13 00:14:53.489703 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Aug 13 00:14:53.489762 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Aug 13 00:14:53.489818 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 00:14:53.489874 kernel: pci 0000:00:1b.0: PCI bridge to [bus 
03] Aug 13 00:14:53.489929 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Aug 13 00:14:53.489986 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Aug 13 00:14:53.490043 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Aug 13 00:14:53.490099 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Aug 13 00:14:53.490154 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Aug 13 00:14:53.490210 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Aug 13 00:14:53.490266 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Aug 13 00:14:53.490323 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Aug 13 00:14:53.490379 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Aug 13 00:14:53.490436 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Aug 13 00:14:53.490494 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Aug 13 00:14:53.490553 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Aug 13 00:14:53.490609 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Aug 13 00:14:53.490660 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Aug 13 00:14:53.490709 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:14:53.490757 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:14:53.490805 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:14:53.490853 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Aug 13 00:14:53.490900 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Aug 13 00:14:53.490958 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff] Aug 13 00:14:53.491008 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 00:14:53.491149 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Aug 13 00:14:53.491195 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff] Aug 13 00:14:53.491245 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Aug 13 00:14:53.491293 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff] Aug 13 00:14:53.491345 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Aug 13 00:14:53.491391 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Aug 13 00:14:53.491438 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Aug 13 00:14:53.491522 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff] Aug 13 00:14:53.491531 kernel: PCI: CLS 64 bytes, default 64 Aug 13 00:14:53.491537 kernel: DMAR: No ATSR found Aug 13 00:14:53.491543 kernel: DMAR: No SATC found Aug 13 00:14:53.491549 kernel: DMAR: dmar0: Using Queued invalidation Aug 13 00:14:53.491600 kernel: pci 0000:00:00.0: Adding to iommu group 0 Aug 13 00:14:53.491651 kernel: pci 0000:00:01.0: Adding to iommu group 1 Aug 13 00:14:53.491702 kernel: pci 0000:00:01.1: Adding to iommu group 1 Aug 13 00:14:53.491752 kernel: pci 0000:00:08.0: Adding to iommu group 2 Aug 13 00:14:53.491801 kernel: pci 0000:00:12.0: Adding to iommu group 3 Aug 13 00:14:53.491850 kernel: pci 0000:00:14.0: Adding to iommu group 4 Aug 13 00:14:53.491900 kernel: pci 0000:00:14.2: Adding to iommu group 4 Aug 13 00:14:53.491949 kernel: pci 0000:00:15.0: Adding to iommu group 5 Aug 13 00:14:53.492001 kernel: pci 0000:00:15.1: Adding to iommu group 5 Aug 13 00:14:53.492051 kernel: pci 0000:00:16.0: Adding to iommu group 6 Aug 13 00:14:53.492100 kernel: pci 0000:00:16.1: Adding to iommu group 6 Aug 13 00:14:53.492149 kernel: pci 
0000:00:16.4: Adding to iommu group 6 Aug 13 00:14:53.492199 kernel: pci 0000:00:17.0: Adding to iommu group 7 Aug 13 00:14:53.492248 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Aug 13 00:14:53.492297 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Aug 13 00:14:53.492347 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Aug 13 00:14:53.492398 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Aug 13 00:14:53.492545 kernel: pci 0000:00:1c.1: Adding to iommu group 12 Aug 13 00:14:53.492594 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Aug 13 00:14:53.492645 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Aug 13 00:14:53.492695 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Aug 13 00:14:53.492744 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Aug 13 00:14:53.492795 kernel: pci 0000:02:00.0: Adding to iommu group 1 Aug 13 00:14:53.492847 kernel: pci 0000:02:00.1: Adding to iommu group 1 Aug 13 00:14:53.492901 kernel: pci 0000:04:00.0: Adding to iommu group 15 Aug 13 00:14:53.492951 kernel: pci 0000:05:00.0: Adding to iommu group 16 Aug 13 00:14:53.493003 kernel: pci 0000:07:00.0: Adding to iommu group 17 Aug 13 00:14:53.493055 kernel: pci 0000:08:00.0: Adding to iommu group 17 Aug 13 00:14:53.493064 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Aug 13 00:14:53.493070 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 00:14:53.493076 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Aug 13 00:14:53.493082 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Aug 13 00:14:53.493087 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Aug 13 00:14:53.493103 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Aug 13 00:14:53.493110 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Aug 13 00:14:53.493340 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Aug 13 00:14:53.493381 kernel: Initialise system trusted keyrings Aug 13 00:14:53.493398 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Aug 13 00:14:53.493405 kernel: Key type asymmetric registered Aug 13 00:14:53.493411 kernel: Asymmetric key parser 'x509' registered Aug 13 00:14:53.493417 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 00:14:53.493425 kernel: io scheduler mq-deadline registered Aug 13 00:14:53.493487 kernel: io scheduler kyber registered Aug 13 00:14:53.493515 kernel: io scheduler bfq registered Aug 13 00:14:53.493644 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Aug 13 00:14:53.493783 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122 Aug 13 00:14:53.493922 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123 Aug 13 00:14:53.494013 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124 Aug 13 00:14:53.494070 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125 Aug 13 00:14:53.494127 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126 Aug 13 00:14:53.494180 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127 Aug 13 00:14:53.494237 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Aug 13 00:14:53.494247 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Aug 13 00:14:53.494253 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Aug 13 00:14:53.494259 kernel: pstore: Using crash dump compression: deflate Aug 13 00:14:53.494266 kernel: pstore: Registered erst as persistent store backend Aug 13 00:14:53.494272 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:14:53.494280 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:14:53.494286 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:14:53.494292 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 00:14:53.494348 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Aug 13 00:14:53.494357 kernel: i8042: PNP: No PS/2 controller found. Aug 13 00:14:53.494416 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Aug 13 00:14:53.494469 kernel: rtc_cmos rtc_cmos: registered as rtc0 Aug 13 00:14:53.494521 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-08-13T00:14:52 UTC (1755044092) Aug 13 00:14:53.494569 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Aug 13 00:14:53.494578 kernel: intel_pstate: Intel P-state driver initializing Aug 13 00:14:53.494584 kernel: intel_pstate: Disabling energy efficiency optimization Aug 13 00:14:53.494590 kernel: intel_pstate: HWP enabled Aug 13 00:14:53.494596 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:14:53.494603 kernel: Segment Routing with IPv6 Aug 13 00:14:53.494609 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:14:53.494615 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:14:53.494623 kernel: Key type dns_resolver registered Aug 13 00:14:53.494629 kernel: microcode: Current revision: 0x00000102 Aug 13 00:14:53.494635 kernel: microcode: Microcode Update Driver: v2.2. Aug 13 00:14:53.494641 kernel: IPI shorthand broadcast: enabled Aug 13 00:14:53.494648 kernel: sched_clock: Marking stable (1644017102, 1435129144)->(4560504248, -1481358002) Aug 13 00:14:53.494654 kernel: registered taskstats version 1 Aug 13 00:14:53.494660 kernel: Loading compiled-in X.509 certificates Aug 13 00:14:53.494666 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb' Aug 13 00:14:53.494672 kernel: Key type .fscrypt registered Aug 13 00:14:53.494680 kernel: Key type fscrypt-provisioning registered Aug 13 00:14:53.494686 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:14:53.494692 kernel: ima: No architecture policies found Aug 13 00:14:53.494698 kernel: clk: Disabling unused clocks Aug 13 00:14:53.494705 kernel: Freeing unused kernel image (initmem) memory: 43504K Aug 13 00:14:53.494711 kernel: Write protecting the kernel read-only data: 38912k Aug 13 00:14:53.494717 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Aug 13 00:14:53.494723 kernel: Run /init as init process Aug 13 00:14:53.494729 kernel: with arguments: Aug 13 00:14:53.494737 kernel: /init Aug 13 00:14:53.494743 kernel: with environment: Aug 13 00:14:53.494749 kernel: HOME=/ Aug 13 00:14:53.494754 kernel: TERM=linux Aug 13 00:14:53.494760 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:14:53.494767 systemd[1]: Successfully made /usr/ read-only. 
Aug 13 00:14:53.494776 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:14:53.494784 systemd[1]: Detected architecture x86-64. Aug 13 00:14:53.494790 systemd[1]: Running in initrd. Aug 13 00:14:53.494796 systemd[1]: No hostname configured, using default hostname. Aug 13 00:14:53.494803 systemd[1]: Hostname set to . Aug 13 00:14:53.494809 systemd[1]: Initializing machine ID from random generator. Aug 13 00:14:53.494816 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:14:53.494822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:14:53.494829 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:14:53.494837 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:14:53.494843 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:14:53.494850 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:14:53.494856 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:14:53.494864 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:14:53.494870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:14:53.494877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:14:53.494884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:14:53.494891 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:14:53.494897 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:14:53.494903 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:14:53.494910 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:14:53.494916 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:14:53.494923 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:14:53.494929 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:14:53.494935 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 00:14:53.494943 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:14:53.494950 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:14:53.494956 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:14:53.494962 kernel: tsc: Refined TSC clocksource calibration: 3408.005 MHz Aug 13 00:14:53.494969 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd883e33, max_idle_ns: 440795265572 ns Aug 13 00:14:53.494975 kernel: clocksource: Switched to clocksource tsc Aug 13 00:14:53.494981 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:14:53.494987 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Aug 13 00:14:53.494995 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:14:53.495002 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:14:53.495008 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:14:53.495014 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:14:53.495032 systemd-journald[265]: Collecting audit messages is disabled. Aug 13 00:14:53.495049 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:14:53.495056 systemd-journald[265]: Journal started Aug 13 00:14:53.495071 systemd-journald[265]: Runtime Journal (/run/log/journal/3388a63ab0b64bbfbacd8bed52b80cc3) is 8M, max 639.9M, 631.9M free. Aug 13 00:14:53.495311 systemd-modules-load[268]: Inserted module 'overlay' Aug 13 00:14:53.525815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:14:53.525850 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:14:53.533568 systemd-modules-load[268]: Inserted module 'br_netfilter' Aug 13 00:14:53.556689 kernel: Bridge firewalling registered Aug 13 00:14:53.556701 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:14:53.557075 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:14:53.557169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:14:53.557254 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:14:53.557338 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:14:53.567728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:14:53.586934 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:14:53.676849 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:14:53.677356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:14:53.708035 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:14:53.727953 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:14:53.749185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:14:53.787743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:14:53.799392 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:14:53.799858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:14:53.805314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:14:53.810703 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:14:53.820587 systemd-resolved[294]: Positive Trust Anchors: Aug 13 00:14:53.820591 systemd-resolved[294]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:14:53.820615 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:14:53.822141 systemd-resolved[294]: Defaulting to hostname 'linux'. Aug 13 00:14:53.828903 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:14:53.843736 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:14:53.851741 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:14:53.973898 dracut-cmdline[309]: dracut-dracut-053 Aug 13 00:14:53.982696 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 13 00:14:54.166509 kernel: SCSI subsystem initialized Aug 13 00:14:54.179501 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:14:54.192495 kernel: iscsi: registered transport (tcp) Aug 13 00:14:54.215528 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:14:54.215545 kernel: QLogic iSCSI HBA Driver Aug 13 00:14:54.238046 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:14:54.260740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:14:54.296998 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:14:54.297017 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:14:54.305800 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 00:14:54.341529 kernel: raid6: avx2x4 gen() 47214 MB/s Aug 13 00:14:54.362527 kernel: raid6: avx2x2 gen() 53837 MB/s Aug 13 00:14:54.388592 kernel: raid6: avx2x1 gen() 45235 MB/s Aug 13 00:14:54.388609 kernel: raid6: using algorithm avx2x2 gen() 53837 MB/s Aug 13 00:14:54.415686 kernel: raid6: .... xor() 32470 MB/s, rmw enabled Aug 13 00:14:54.415706 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:14:54.436500 kernel: xor: automatically using best checksumming function avx Aug 13 00:14:54.535504 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:14:54.541142 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:14:54.567772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:14:54.575503 systemd-udevd[496]: Using default interface naming scheme 'v255'. Aug 13 00:14:54.578604 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:14:54.614691 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 00:14:54.665535 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Aug 13 00:14:54.734236 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:14:54.759882 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:14:54.852934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:14:54.894268 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 00:14:54.894284 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 00:14:54.894292 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:14:54.894300 kernel: ACPI: bus type USB registered Aug 13 00:14:54.885918 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:14:54.911348 kernel: usbcore: registered new interface driver usbfs Aug 13 00:14:54.911363 kernel: usbcore: registered new interface driver hub Aug 13 00:14:54.911372 kernel: usbcore: registered new device driver usb Aug 13 00:14:54.918468 kernel: PTP clock support registered Aug 13 00:14:54.918507 kernel: libata version 3.00 loaded. Aug 13 00:14:54.934377 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:14:54.934441 kernel: AES CTR mode by8 optimization enabled Aug 13 00:14:54.945470 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Aug 13 00:14:54.945506 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Aug 13 00:14:54.947218 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:14:54.947309 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:14:54.968196 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Aug 13 00:14:54.968325 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Aug 13 00:14:54.969469 kernel: ahci 0000:00:17.0: version 3.0 Aug 13 00:14:54.969564 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Aug 13 00:14:54.977773 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Aug 13 00:14:54.977883 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Aug 13 00:14:54.977968 kernel: igb 0000:04:00.0: added PHC on eth0 Aug 13 00:14:54.978044 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Aug 13 00:14:54.978113 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:08:6a Aug 13 00:14:54.978184 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Aug 13 00:14:54.978251 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Aug 13 00:14:54.995958 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Aug 13 00:14:55.022483 kernel: igb 0000:05:00.0: added PHC on eth1 Aug 13 00:14:55.022579 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Aug 13 00:14:55.030468 kernel: scsi host0: ahci Aug 13 00:14:55.030500 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Aug 13 00:14:55.030595 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Aug 13 00:14:55.030682 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:08:6b Aug 13 00:14:55.030764 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Aug 13 00:14:55.030845 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Aug 13 00:14:55.041176 kernel: scsi host1: ahci Aug 13 00:14:55.042511 kernel: hub 1-0:1.0: USB hub found Aug 13 00:14:55.042608 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Aug 13 00:14:55.052852 kernel: scsi host2: ahci Aug 13 00:14:55.061285 kernel: hub 1-0:1.0: 16 ports detected Aug 13 00:14:55.061511 kernel: scsi host3: ahci Aug 13 00:14:55.080453 kernel: hub 2-0:1.0: USB hub found Aug 13 00:14:55.080604 kernel: scsi host4: ahci Aug 13 00:14:55.080621 kernel: hub 2-0:1.0: 10 ports detected Aug 13 00:14:55.092309 kernel: scsi host5: ahci Aug 13 00:14:55.139511 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:14:55.234601 kernel: scsi host6: ahci Aug 13 00:14:55.234694 kernel: scsi host7: ahci Aug 13 00:14:55.234761 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 133 Aug 13 00:14:55.234771 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 133 Aug 13 00:14:55.234778 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 133 Aug 13 00:14:55.234786 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 133 Aug 13 00:14:55.234793 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 133 Aug 13 00:14:55.234803 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 133 Aug 13 00:14:55.234811 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 133 Aug 13 00:14:55.234818 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 133 Aug 13 00:14:55.234825 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Aug 13 00:14:55.192629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:14:55.192729 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:14:55.262589 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:14:55.310646 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Aug 13 00:14:55.310742 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Aug 13 00:14:55.312758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:14:55.338565 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Aug 13 00:14:55.329028 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:14:55.342958 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:14:55.359985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:14:55.388537 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:14:55.394571 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:14:55.426648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:14:55.459939 kernel: hub 1-14:1.0: USB hub found Aug 13 00:14:55.460045 kernel: hub 1-14:1.0: 4 ports detected Aug 13 00:14:55.463723 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:14:55.486680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:14:55.495671 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 00:14:55.581654 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Aug 13 00:14:55.581666 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 00:14:55.581674 kernel: ata7: SATA link down (SStatus 0 SControl 300) Aug 13 00:14:55.581681 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Aug 13 00:14:55.581688 kernel: ata8: SATA link down (SStatus 0 SControl 300) Aug 13 00:14:55.581695 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 00:14:55.581702 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 00:14:55.581712 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 00:14:55.581720 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Aug 13 00:14:55.581727 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Aug 13 00:14:55.581734 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Aug 13 00:14:55.581741 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Aug 13 00:14:55.581832 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Aug 13 00:14:55.598521 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Aug 13 00:14:55.604999 kernel: ata2.00: Features: NCQ-prio Aug 13 00:14:55.621496 kernel: ata1.00: Features: NCQ-prio Aug 13 00:14:55.631669 kernel: ata2.00: configured for UDMA/133 Aug 13 00:14:55.642469 kernel: ata1.00: configured for UDMA/133 Aug 13 00:14:55.642485 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Aug 13 00:14:55.651542 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Aug 13 00:14:55.674311 kernel: ata2.00: Enabling discard_zeroes_data Aug 13 00:14:55.674333 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 00:14:55.674342 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Aug 13 00:14:55.679004 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Aug 13 00:14:55.693965 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Aug 13 00:14:55.694051 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Aug 13 00:14:55.694119 kernel: sd 0:0:0:0: [sdb] Write Protect is off Aug 13 00:14:55.699225 kernel: sd 1:0:0:0: [sda] Write Protect is off Aug 13 00:14:55.704437 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Aug 13 00:14:55.704529 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:14:55.709233 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Aug 13 00:14:55.714024 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Aug 13 00:14:55.723329 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:14:55.723506 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 00:14:55.753469 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Aug 13 00:14:55.753555 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:14:55.753565 kernel: ata2.00: Enabling discard_zeroes_data Aug 13 00:14:55.761530 kernel: GPT:9289727 != 937703087 Aug 13 00:14:55.761546 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Aug 13 00:14:55.768469 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Aug 13 00:14:55.768585 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:14:55.768601 kernel: GPT:9289727 != 937703087 Aug 13 00:14:55.768614 kernel: GPT: Use GNU Parted to correct GPT errors. 
Aug 13 00:14:55.768627 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Aug 13 00:14:55.768640 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Aug 13 00:14:55.837128 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Aug 13 00:14:55.924577 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/sdb3 scanned by (udev-worker) (579) Aug 13 00:14:55.924592 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 13 00:14:55.924689 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by (udev-worker) (565) Aug 13 00:14:55.924698 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Aug 13 00:14:55.924771 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Aug 13 00:14:55.924839 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:14:55.924849 kernel: usbcore: registered new interface driver usbhid Aug 13 00:14:55.924857 kernel: usbhid: USB HID core driver Aug 13 00:14:55.924864 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Aug 13 00:14:55.924850 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Aug 13 00:14:55.946687 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Aug 13 00:14:56.014580 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Aug 13 00:14:56.014746 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Aug 13 00:14:56.014755 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Aug 13 00:14:55.957552 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Aug 13 00:14:55.964892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Aug 13 00:14:56.035605 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:14:56.080567 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 00:14:56.080582 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Aug 13 00:14:56.080643 disk-uuid[720]: Primary Header is updated. Aug 13 00:14:56.080643 disk-uuid[720]: Secondary Entries is updated. Aug 13 00:14:56.080643 disk-uuid[720]: Secondary Header is updated. Aug 13 00:14:56.173511 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Aug 13 00:14:56.185897 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Aug 13 00:14:56.452544 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 13 00:14:56.466682 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Aug 13 00:14:56.482661 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Aug 13 00:14:57.065187 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 00:14:57.072533 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Aug 13 00:14:57.072937 disk-uuid[721]: The operation has completed successfully. Aug 13 00:14:57.108296 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:14:57.108346 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Aug 13 00:14:57.165705 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:14:57.191524 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:14:57.191581 sh[746]: Success Aug 13 00:14:57.223935 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:14:57.244270 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:14:57.261823 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:14:57.312237 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 Aug 13 00:14:57.312261 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:14:57.321858 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 00:14:57.328880 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 00:14:57.334741 kernel: BTRFS info (device dm-0): using free space tree Aug 13 00:14:57.347501 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 00:14:57.348808 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:14:57.349143 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:14:57.358819 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:14:57.409533 kernel: BTRFS info (device sdb6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:14:57.409552 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:14:57.409561 kernel: BTRFS info (device sdb6): using free space tree Aug 13 00:14:57.425624 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:14:57.469603 kernel: BTRFS info (device sdb6): enabling ssd optimizations Aug 13 00:14:57.469619 kernel: BTRFS info (device sdb6): auto enabling async discard Aug 13 00:14:57.469627 kernel: BTRFS info (device sdb6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:14:57.456745 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:14:57.480723 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:14:57.524972 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:14:57.535274 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:14:57.581590 systemd-networkd[927]: lo: Link UP Aug 13 00:14:57.581594 systemd-networkd[927]: lo: Gained carrier Aug 13 00:14:57.584215 systemd-networkd[927]: Enumeration completed Aug 13 00:14:57.600913 ignition[926]: Ignition 2.20.0 Aug 13 00:14:57.584288 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:14:57.600918 ignition[926]: Stage: fetch-offline Aug 13 00:14:57.584932 systemd-networkd[927]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:14:57.600938 ignition[926]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:14:57.588658 systemd[1]: Reached target network.target - Network. 
Aug 13 00:14:57.600943 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 00:14:57.603181 unknown[926]: fetched base config from "system" Aug 13 00:14:57.600994 ignition[926]: parsed url from cmdline: "" Aug 13 00:14:57.603185 unknown[926]: fetched user config from "system" Aug 13 00:14:57.600996 ignition[926]: no config URL provided Aug 13 00:14:57.604026 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:14:57.600999 ignition[926]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:14:57.611517 systemd-networkd[927]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:14:57.601020 ignition[926]: parsing config with SHA512: c93eea1a1098272b4cde974ffd8a87e7c1919ca4fd10b4d66eebc411cc051e28b1598eb36c38ae1e287619adfecdb9cb62f48db51c159e5543411c75d03e55b2 Aug 13 00:14:57.629805 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 00:14:57.603398 ignition[926]: fetch-offline: fetch-offline passed Aug 13 00:14:57.636767 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:14:57.603401 ignition[926]: POST message to Packet Timeline Aug 13 00:14:57.639209 systemd-networkd[927]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:14:57.603404 ignition[926]: POST Status error: resource requires networking Aug 13 00:14:57.603442 ignition[926]: Ignition finished successfully Aug 13 00:14:57.645421 ignition[940]: Ignition 2.20.0 Aug 13 00:14:57.837596 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Aug 13 00:14:57.834285 systemd-networkd[927]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 00:14:57.645426 ignition[940]: Stage: kargs Aug 13 00:14:57.645535 ignition[940]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:14:57.645542 ignition[940]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 00:14:57.646063 ignition[940]: kargs: kargs passed Aug 13 00:14:57.646066 ignition[940]: POST message to Packet Timeline Aug 13 00:14:57.646077 ignition[940]: GET https://metadata.packet.net/metadata: attempt #1 Aug 13 00:14:57.646511 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41993->[::1]:53: read: connection refused Aug 13 00:14:57.846999 ignition[940]: GET https://metadata.packet.net/metadata: attempt #2 Aug 13 00:14:57.847967 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49826->[::1]:53: read: connection refused Aug 13 00:14:58.078583 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Aug 13 00:14:58.080629 systemd-networkd[927]: eno1: Link UP Aug 13 00:14:58.081041 systemd-networkd[927]: eno2: Link UP Aug 13 00:14:58.081414 systemd-networkd[927]: enp2s0f0np0: Link UP Aug 13 00:14:58.081894 systemd-networkd[927]: enp2s0f0np0: Gained carrier Aug 13 00:14:58.097978 systemd-networkd[927]: enp2s0f1np1: Link UP Aug 13 00:14:58.133713 systemd-networkd[927]: enp2s0f0np0: DHCPv4 address 147.75.71.157/31, gateway 147.75.71.156 acquired from 145.40.83.140 Aug 13 00:14:58.248330 ignition[940]: GET https://metadata.packet.net/metadata: attempt #3 Aug 13 00:14:58.249780 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59309->[::1]:53: read: connection refused Aug 13 00:14:58.887126 systemd-networkd[927]: enp2s0f1np1: Gained carrier Aug 13 00:14:59.050172 ignition[940]: GET https://metadata.packet.net/metadata: attempt #4 Aug 13 00:14:59.051436 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53846->[::1]:53: read: connection refused Aug 13 00:14:59.142970 systemd-networkd[927]: enp2s0f0np0: Gained IPv6LL Aug 13 00:15:00.231016 systemd-networkd[927]: enp2s0f1np1: Gained IPv6LL Aug 13 00:15:00.652618 ignition[940]: GET https://metadata.packet.net/metadata: attempt #5 Aug 13 00:15:00.654288 ignition[940]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36818->[::1]:53: read: connection refused Aug 13 00:15:03.857427 ignition[940]: GET https://metadata.packet.net/metadata: attempt #6 Aug 13 00:15:05.919837 ignition[940]: GET result: OK Aug 13 00:15:07.029851 ignition[940]: Ignition finished successfully Aug 13 00:15:07.034907 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:15:07.060747 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 13 00:15:07.066953 ignition[958]: Ignition 2.20.0 Aug 13 00:15:07.066957 ignition[958]: Stage: disks Aug 13 00:15:07.067058 ignition[958]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:07.067064 ignition[958]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 00:15:07.067579 ignition[958]: disks: disks passed Aug 13 00:15:07.067581 ignition[958]: POST message to Packet Timeline Aug 13 00:15:07.067592 ignition[958]: GET https://metadata.packet.net/metadata: attempt #1 Aug 13 00:15:08.305427 ignition[958]: GET result: OK Aug 13 00:15:08.862109 ignition[958]: Ignition finished successfully Aug 13 00:15:08.866635 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:15:08.881250 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:15:08.900910 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:15:08.921797 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:15:08.942903 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:15:08.963798 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:15:08.997751 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:15:09.033970 systemd-fsck[977]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 00:15:09.044117 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:15:09.069673 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:15:09.143362 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:15:09.160690 kernel: EXT4-fs (sdb9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none. Aug 13 00:15:09.152882 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:15:09.181625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:15:09.211166 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sdb6 scanned by mount (986) Aug 13 00:15:09.211180 kernel: BTRFS info (device sdb6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:15:09.182429 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:15:09.254697 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:15:09.254710 kernel: BTRFS info (device sdb6): using free space tree Aug 13 00:15:09.254718 kernel: BTRFS info (device sdb6): enabling ssd optimizations Aug 13 00:15:09.254725 kernel: BTRFS info (device sdb6): auto enabling async discard Aug 13 00:15:09.252748 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 00:15:09.290728 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Aug 13 00:15:09.313606 coreos-metadata[1004]: Aug 13 00:15:09.309 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Aug 13 00:15:09.337662 coreos-metadata[1003]: Aug 13 00:15:09.309 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Aug 13 00:15:09.302603 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:15:09.302624 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:15:09.325775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:15:09.345911 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:15:09.378830 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:15:09.434508 initrd-setup-root[1018]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:15:09.444536 initrd-setup-root[1025]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:15:09.455527 initrd-setup-root[1032]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:15:09.466542 initrd-setup-root[1039]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:15:09.494870 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:15:09.520724 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:15:09.549696 kernel: BTRFS info (device sdb6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:15:09.539024 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:15:09.560275 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:15:09.578424 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:15:09.582427 ignition[1106]: INFO : Ignition 2.20.0 Aug 13 00:15:09.582427 ignition[1106]: INFO : Stage: mount Aug 13 00:15:09.608665 ignition[1106]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:09.608665 ignition[1106]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 00:15:09.608665 ignition[1106]: INFO : mount: mount passed Aug 13 00:15:09.608665 ignition[1106]: INFO : POST message to Packet Timeline Aug 13 00:15:09.608665 ignition[1106]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Aug 13 00:15:10.545162 coreos-metadata[1004]: Aug 13 00:15:10.545 INFO Fetch successful Aug 13 00:15:10.583178 systemd[1]: flatcar-static-network.service: Deactivated successfully. Aug 13 00:15:10.583237 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Aug 13 00:15:10.763248 coreos-metadata[1003]: Aug 13 00:15:10.763 INFO Fetch successful Aug 13 00:15:10.833434 coreos-metadata[1003]: Aug 13 00:15:10.833 INFO wrote hostname ci-4230.2.2-a-e75a6b4c18 to /sysroot/etc/hostname Aug 13 00:15:10.834942 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:15:10.869676 ignition[1106]: INFO : GET result: OK Aug 13 00:15:11.394203 ignition[1106]: INFO : Ignition finished successfully Aug 13 00:15:11.397356 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:15:11.424776 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:15:11.436041 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:15:11.485494 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (1131) Aug 13 00:15:11.503059 kernel: BTRFS info (device sdb6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:15:11.503075 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:15:11.508963 kernel: BTRFS info (device sdb6): using free space tree Aug 13 00:15:11.523860 kernel: BTRFS info (device sdb6): enabling ssd optimizations Aug 13 00:15:11.523875 kernel: BTRFS info (device sdb6): auto enabling async discard Aug 13 00:15:11.525734 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:15:11.558737 ignition[1148]: INFO : Ignition 2.20.0 Aug 13 00:15:11.558737 ignition[1148]: INFO : Stage: files Aug 13 00:15:11.572699 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:11.572699 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 00:15:11.572699 ignition[1148]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:15:11.572699 ignition[1148]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:15:11.572699 ignition[1148]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:15:11.572699 ignition[1148]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:15:11.572699 ignition[1148]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:15:11.572699 ignition[1148]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:15:11.572699 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:15:11.572699 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 00:15:11.562460 unknown[1148]: wrote ssh authorized keys file for user: core Aug 13 00:15:11.704615 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:15:11.725058 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:15:11.725058 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:15:11.757803 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:15:12.039619 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:15:12.128514 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 
00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:15:12.143756 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 00:15:12.525245 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:15:12.758198 ignition[1148]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:15:12.758198 ignition[1148]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:15:12.787698 ignition[1148]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:15:12.787698 ignition[1148]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:15:12.787698 ignition[1148]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 00:15:12.787698 ignition[1148]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:15:12.787698 ignition[1148]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:15:12.787698 ignition[1148]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:15:12.787698 ignition[1148]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:15:12.787698 ignition[1148]: INFO : files: files passed Aug 13 00:15:12.787698 ignition[1148]: INFO : POST message to Packet Timeline Aug 13 00:15:12.787698 ignition[1148]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Aug 13 00:15:13.909069 ignition[1148]: INFO : GET result: OK Aug 13 00:15:14.810402 ignition[1148]: INFO : Ignition finished successfully Aug 13 00:15:14.812159 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:15:14.853706 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:15:14.864147 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:15:14.874958 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:15:14.875030 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 13 00:15:14.928109 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:15:14.928109 initrd-setup-root-after-ignition[1187]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:15:14.966755 initrd-setup-root-after-ignition[1191]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:15:14.932699 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:15:14.943810 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:15:14.987737 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:15:15.036289 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:15:15.036337 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:15:15.054858 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:15:15.065714 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:15:15.092854 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:15:15.108878 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:15:15.178298 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:15:15.203887 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:15:15.236225 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:15:15.247996 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:15:15.269185 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:15:15.287283 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:15:15.287736 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:15:15.316263 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:15:15.337102 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:15:15.355216 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:15:15.374099 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:15:15.396104 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:15:15.417101 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:15:15.437105 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:15:15.459271 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:15:15.480129 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:15:15.501049 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:15:15.518995 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:15:15.519414 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:15:15.546372 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:15:15.566125 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:15:15.586981 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Aug 13 00:15:15.587334 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:15:15.608990 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:15:15.609409 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:15:15.640094 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:15:15.640588 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:15:15.660306 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:15:15.679960 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:15:15.683691 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:15:15.701166 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:15:15.721106 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:15:15.739200 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:15:15.739543 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:15:15.759123 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:15:15.759413 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:15:15.783265 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:15:15.783711 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:15:15.802216 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:15:15.914691 ignition[1211]: INFO : Ignition 2.20.0 Aug 13 00:15:15.914691 ignition[1211]: INFO : Stage: umount Aug 13 00:15:15.914691 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:15.914691 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 00:15:15.914691 ignition[1211]: INFO : umount: umount passed Aug 13 00:15:15.914691 ignition[1211]: INFO : POST message to Packet Timeline Aug 13 00:15:15.914691 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Aug 13 00:15:15.802639 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:15:15.820165 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:15:15.820596 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:15:15.848600 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:15:15.872586 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:15:15.872727 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:15:15.912864 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:15:15.922728 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:15:15.923152 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:15:15.930221 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:15:15.930617 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:15:15.982743 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:15:15.985352 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:15:15.985613 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:15:15.996679 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Aug 13 00:15:15.996923 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:15:19.839506 ignition[1211]: INFO : GET result: OK Aug 13 00:15:20.226519 ignition[1211]: INFO : Ignition finished successfully Aug 13 00:15:20.228271 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:15:20.228434 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:15:20.246722 systemd[1]: Stopped target network.target - Network. Aug 13 00:15:20.261772 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:15:20.261986 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:15:20.280876 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:15:20.281054 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:15:20.298909 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:15:20.299080 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:15:20.318024 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:15:20.318194 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:15:20.336993 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:15:20.337173 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:15:20.356385 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:15:20.376011 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:15:20.394640 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:15:20.394931 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:15:20.418290 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:15:20.418908 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:15:20.419180 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:15:20.436377 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 00:15:20.438978 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:15:20.439102 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:15:20.463704 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:15:20.482659 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:15:20.482699 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:15:20.502771 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:15:20.502853 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:15:20.524350 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:15:20.524524 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:15:20.542854 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:15:20.543029 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:15:20.564120 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:15:20.589110 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:15:20.589312 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Aug 13 00:15:20.590440 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:15:20.590816 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:15:20.622708 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:15:20.622848 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:15:20.627009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:15:20.627119 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:15:20.654721 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:15:20.654878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:15:20.686086 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:15:20.686260 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:15:20.725652 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:15:20.725838 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:15:20.784813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:15:20.788820 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:15:20.788998 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:15:20.820144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:15:21.066610 systemd-journald[265]: Received SIGTERM from PID 1 (systemd). Aug 13 00:15:20.820294 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:15:20.844327 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:15:20.844634 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:15:20.845752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:15:20.846097 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:15:20.917167 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:15:20.917444 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:15:20.924987 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:15:20.966864 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:15:21.002938 systemd[1]: Switching root. Aug 13 00:15:21.166608 systemd-journald[265]: Journal stopped Aug 13 00:15:22.897561 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:15:22.897577 kernel: SELinux: policy capability open_perms=1 Aug 13 00:15:22.897584 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:15:22.897590 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:15:22.897597 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:15:22.897602 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:15:22.897609 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:15:22.897614 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:15:22.897620 kernel: audit: type=1403 audit(1755044121.286:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:15:22.897627 systemd[1]: Successfully loaded SELinux policy in 74.241ms. Aug 13 00:15:22.897636 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.119ms. 
Aug 13 00:15:22.897643 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:15:22.897650 systemd[1]: Detected architecture x86-64. Aug 13 00:15:22.897656 systemd[1]: Detected first boot. Aug 13 00:15:22.897663 systemd[1]: Hostname set to . Aug 13 00:15:22.897671 systemd[1]: Initializing machine ID from random generator. Aug 13 00:15:22.897678 zram_generator::config[1265]: No configuration found. Aug 13 00:15:22.897685 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:15:22.897692 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:15:22.897699 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:15:22.897706 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:15:22.897712 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:15:22.897720 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:15:22.897727 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:15:22.897734 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:15:22.897741 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:15:22.897748 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:15:22.897755 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:15:22.897762 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:15:22.897770 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:15:22.897777 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:15:22.897784 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:15:22.897791 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:15:22.897798 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:15:22.897805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:15:22.897812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:15:22.897818 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Aug 13 00:15:22.897826 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:15:22.897833 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:15:22.897841 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:15:22.897849 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:15:22.897856 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:15:22.897864 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:15:22.897870 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Aug 13 00:15:22.897877 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:15:22.897885 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:15:22.897893 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:15:22.897899 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:15:22.897907 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:15:22.897914 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:15:22.897922 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:15:22.897930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:15:22.897937 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:15:22.897944 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:15:22.897952 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:15:22.897959 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:15:22.897966 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:15:22.897973 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:15:22.897981 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:15:22.897988 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:15:22.897996 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:15:22.898003 systemd[1]: Reached target machines.target - Containers. Aug 13 00:15:22.898010 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:15:22.898018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:15:22.898025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:15:22.898032 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:15:22.898040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:15:22.898048 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:15:22.898055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:15:22.898062 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:15:22.898069 kernel: ACPI: bus type drm_connector registered Aug 13 00:15:22.898075 kernel: fuse: init (API version 7.39) Aug 13 00:15:22.898082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:15:22.898089 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:15:22.898097 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:15:22.898104 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:15:22.898111 kernel: loop: module loaded Aug 13 00:15:22.898118 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:15:22.898125 systemd[1]: Stopped systemd-fsck-usr.service. 
Aug 13 00:15:22.898132 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:15:22.898140 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:15:22.898157 systemd-journald[1369]: Collecting audit messages is disabled. Aug 13 00:15:22.898174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:15:22.898183 systemd-journald[1369]: Journal started Aug 13 00:15:22.898198 systemd-journald[1369]: Runtime Journal (/run/log/journal/b94386657ebf41888b5aace992af5b47) is 8M, max 639.9M, 631.9M free. Aug 13 00:15:21.731118 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:15:21.751257 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Aug 13 00:15:21.751944 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:15:22.925520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:15:22.936508 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:15:22.968530 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:15:22.988512 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:15:23.009627 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:15:23.009653 systemd[1]: Stopped verity-setup.service. Aug 13 00:15:23.034505 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:15:23.042508 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:15:23.051922 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:15:23.061604 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:15:23.071768 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:15:23.081752 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:15:23.091744 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:15:23.101724 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:15:23.111835 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:15:23.122874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:15:23.133897 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:15:23.134066 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:15:23.144979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:15:23.145200 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:15:23.158615 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:15:23.159080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:15:23.170456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:15:23.170974 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:15:23.184453 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Aug 13 00:15:23.184961 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:15:23.195438 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:15:23.195960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:15:23.206537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:15:23.218518 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:15:23.230518 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:15:23.242578 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:15:23.254666 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:15:23.290785 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:15:23.318750 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:15:23.331641 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:15:23.341702 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:15:23.341722 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:15:23.342349 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:15:23.365461 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:15:23.385528 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:15:23.395927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:15:23.398444 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:15:23.408143 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:15:23.419594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:15:23.426037 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:15:23.429424 systemd-journald[1369]: Time spent on flushing to /var/log/journal/b94386657ebf41888b5aace992af5b47 is 12.980ms for 1380 entries. Aug 13 00:15:23.429424 systemd-journald[1369]: System Journal (/var/log/journal/b94386657ebf41888b5aace992af5b47) is 8M, max 195.6M, 187.6M free. Aug 13 00:15:23.454351 systemd-journald[1369]: Received client request to flush runtime journal. Aug 13 00:15:23.444593 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:15:23.445452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:15:23.456408 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:15:23.468251 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:15:23.478522 kernel: loop0: detected capacity change from 0 to 8 Aug 13 00:15:23.483252 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Aug 13 00:15:23.489471 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:15:23.492161 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:15:23.511679 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:15:23.523841 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:15:23.534514 kernel: loop1: detected capacity change from 0 to 138176 Aug 13 00:15:23.541772 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:15:23.553697 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:15:23.564702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:15:23.574703 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:15:23.587649 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:15:23.606513 kernel: loop2: detected capacity change from 0 to 224512 Aug 13 00:15:23.618685 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:15:23.630255 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:15:23.642391 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:15:23.643155 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:15:23.654027 systemd-tmpfiles[1424]: ACLs are not supported, ignoring. Aug 13 00:15:23.654037 systemd-tmpfiles[1424]: ACLs are not supported, ignoring. Aug 13 00:15:23.656099 udevadm[1412]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:15:23.657098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:15:23.673480 kernel: loop3: detected capacity change from 0 to 147912 Aug 13 00:15:23.733473 kernel: loop4: detected capacity change from 0 to 8 Aug 13 00:15:23.740468 kernel: loop5: detected capacity change from 0 to 138176 Aug 13 00:15:23.757688 ldconfig[1400]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:15:23.759201 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:15:23.762470 kernel: loop6: detected capacity change from 0 to 224512 Aug 13 00:15:23.781474 kernel: loop7: detected capacity change from 0 to 147912 Aug 13 00:15:23.794252 (sd-merge)[1429]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Aug 13 00:15:23.794550 (sd-merge)[1429]: Merged extensions into '/usr'. Aug 13 00:15:23.797578 systemd[1]: Reload requested from client PID 1406 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:15:23.797589 systemd[1]: Reloading... Aug 13 00:15:23.825506 zram_generator::config[1456]: No configuration found. Aug 13 00:15:23.900601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:15:23.952852 systemd[1]: Reloading finished in 154 ms. Aug 13 00:15:23.970303 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:15:23.981863 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
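The sd-merge entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-packet' extension images onto /usr and /opt before the manager reload. As a minimal sketch (not part of the log), the snippet below lists candidate images in the stock sysext search directories and prints the current merge status; the directory paths and the `systemd-sysext status` invocation are assumptions about the standard tooling rather than anything recorded here.

```python
# Minimal sketch: enumerate candidate sysext images and show merge status.
# Assumes the standard systemd-sysext search directories and CLI; adjust if
# your distribution uses different locations.
import subprocess
from pathlib import Path

SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

def list_extension_images() -> None:
    for base in SEARCH_DIRS:
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            # Raw images end in .raw; plain directory trees are also accepted.
            print(f"{base}: {entry.name}")

def show_merge_status() -> None:
    # `systemd-sysext status` reports which hierarchies currently have
    # extensions merged (e.g. /usr and /opt, as in the log above).
    try:
        subprocess.run(["systemd-sysext", "status"], check=False)
    except FileNotFoundError:
        print("systemd-sysext not found on this host")

if __name__ == "__main__":
    list_extension_images()
    show_merge_status()
```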
Aug 13 00:15:24.006604 systemd[1]: Starting ensure-sysext.service... Aug 13 00:15:24.014497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:15:24.026683 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:15:24.040424 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:15:24.040630 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:15:24.041110 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:15:24.041273 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Aug 13 00:15:24.041311 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Aug 13 00:15:24.043183 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:15:24.043187 systemd-tmpfiles[1515]: Skipping /boot Aug 13 00:15:24.043461 systemd[1]: Reload requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:15:24.043473 systemd[1]: Reloading... Aug 13 00:15:24.048436 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:15:24.048440 systemd-tmpfiles[1515]: Skipping /boot Aug 13 00:15:24.054732 systemd-udevd[1516]: Using default interface naming scheme 'v255'. Aug 13 00:15:24.075513 zram_generator::config[1545]: No configuration found. Aug 13 00:15:24.115317 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Aug 13 00:15:24.115381 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 45 scanned by (udev-worker) (1560) Aug 13 00:15:24.115401 kernel: ACPI: button: Sleep Button [SLPB] Aug 13 00:15:24.128098 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 00:15:24.137758 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:15:24.148480 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:15:24.156568 kernel: IPMI message handler: version 39.2 Aug 13 00:15:24.174141 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Aug 13 00:15:24.174676 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Aug 13 00:15:24.182472 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Aug 13 00:15:24.184469 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Aug 13 00:15:24.184590 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Aug 13 00:15:24.202387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:15:24.247469 kernel: iTCO_vendor_support: vendor-support=0 Aug 13 00:15:24.247507 kernel: ipmi device interface Aug 13 00:15:24.268477 kernel: ipmi_si: IPMI System Interface driver Aug 13 00:15:24.268520 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Aug 13 00:15:24.268636 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Aug 13 00:15:24.281447 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Aug 13 00:15:24.281494 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Aug 13 00:15:24.281628 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Aug 13 00:15:24.295361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Aug 13 00:15:24.306272 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Aug 13 00:15:24.316414 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Aug 13 00:15:24.323520 kernel: ipmi_si: Adding ACPI-specified kcs state machine Aug 13 00:15:24.334803 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Aug 13 00:15:24.341594 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Aug 13 00:15:24.341671 systemd[1]: Reloading finished in 297 ms. Aug 13 00:15:24.358583 kernel: intel_rapl_common: Found RAPL domain package Aug 13 00:15:24.358617 kernel: intel_rapl_common: Found RAPL domain core Aug 13 00:15:24.363916 kernel: intel_rapl_common: Found RAPL domain dram Aug 13 00:15:24.370143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:15:24.404118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:15:24.427187 systemd[1]: Finished ensure-sysext.service. Aug 13 00:15:24.434551 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Aug 13 00:15:24.460848 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Aug 13 00:15:24.470543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:15:24.487468 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Aug 13 00:15:24.496632 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:15:24.505306 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:15:24.516416 augenrules[1720]: No rules Aug 13 00:15:24.516663 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:15:24.517318 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:15:24.528086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:15:24.539037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:15:24.547518 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Aug 13 00:15:24.556528 kernel: ipmi_ssif: IPMI SSIF Interface driver Aug 13 00:15:24.561218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:15:24.570649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
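The ipmi_si messages above walk from SMBIOS to ACPI probing and end with a KCS interface at I/O port 0xca2 and a BMC (man_id 0x002a7c, prod_id 0x1b11). A hedged sketch for querying that BMC from userspace follows; it assumes the ipmitool CLI is installed and that ipmi_devintf has created a /dev/ipmi* node, neither of which the log itself confirms.

```python
# Minimal sketch: confirm an in-kernel IPMI interface is exposed and query the
# BMC the log reports. ipmitool and /dev/ipmi0 are assumptions, not shown above.
import subprocess
from pathlib import Path

def ipmi_device_present() -> bool:
    # The "ipmi device interface" driver exposes character devices such as
    # /dev/ipmi0 once a BMC interface is bound.
    return any(Path("/dev").glob("ipmi*"))

def bmc_info() -> None:
    # "mc info" reports manufacturer/product IDs comparable to the
    # man_id/prod_id values printed by ipmi_si above.
    try:
        subprocess.run(["ipmitool", "mc", "info"], check=False)
    except FileNotFoundError:
        print("ipmitool not installed")

if __name__ == "__main__":
    if ipmi_device_present():
        bmc_info()
    else:
        print("no /dev/ipmi* device found; is ipmi_devintf loaded?")
```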
Aug 13 00:15:24.571267 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:15:24.582564 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:15:24.583242 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:15:24.595474 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:15:24.596458 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:15:24.597434 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:15:24.630594 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:15:24.642158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:15:24.651546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:15:24.652211 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:15:24.664637 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:15:24.664763 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:15:24.665005 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:15:24.665157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:15:24.665266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:15:24.665425 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:15:24.665533 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:15:24.665690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:15:24.665792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:15:24.665950 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:15:24.666122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:15:24.666292 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:15:24.666471 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:15:24.671445 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:15:24.678715 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:15:24.678758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:15:24.678803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:15:24.679592 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:15:24.680431 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Aug 13 00:15:24.680455 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:15:24.685295 lvm[1749]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:15:24.687807 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:15:24.704289 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:15:24.720704 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:15:24.748513 systemd-resolved[1733]: Positive Trust Anchors: Aug 13 00:15:24.748518 systemd-resolved[1733]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:15:24.748546 systemd-resolved[1733]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:15:24.751241 systemd-resolved[1733]: Using system hostname 'ci-4230.2.2-a-e75a6b4c18'. Aug 13 00:15:24.754941 systemd-networkd[1732]: lo: Link UP Aug 13 00:15:24.754943 systemd-networkd[1732]: lo: Gained carrier Aug 13 00:15:24.757799 systemd-networkd[1732]: bond0: netdev ready Aug 13 00:15:24.758901 systemd-networkd[1732]: Enumeration completed Aug 13 00:15:24.770257 systemd-networkd[1732]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d7:69:1a.network. Aug 13 00:15:24.794682 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:15:24.805738 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:15:24.815535 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:15:24.826660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:15:24.838449 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:15:24.847509 systemd[1]: Reached target network.target - Network. Aug 13 00:15:24.855542 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:15:24.866546 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:15:24.876558 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:15:24.887521 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:15:24.899508 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:15:24.910529 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:15:24.910546 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:15:24.918544 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:15:24.928627 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
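The systemd-networkd entries above match each physical port to a unit named after its MAC address (10-04:3f:72:d7:69:1a.network for enp2s0f0np0), while the bond itself is driven by 05-bond0.network. The sketch below only illustrates that naming convention: it derives the expected unit name for each interface from the MAC exposed in sysfs and reports whether such a file exists; generalising the scheme to other interfaces is an assumption.

```python
# Minimal sketch of the per-MAC naming convention visible in the log.
# Sysfs paths are standard; the file-name pattern is inferred from the
# "10-<mac>.network" units the log mentions.
from pathlib import Path

NETWORK_DIR = Path("/etc/systemd/network")

def expected_unit_for(iface: str) -> Path:
    mac = (Path("/sys/class/net") / iface / "address").read_text().strip()
    return NETWORK_DIR / f"10-{mac}.network"

if __name__ == "__main__":
    for iface_dir in sorted(Path("/sys/class/net").iterdir()):
        iface = iface_dir.name
        if iface in ("lo", "bond0"):
            continue  # the bond itself is configured by 05-bond0.network
        unit = expected_unit_for(iface)
        print(f"{iface}: {unit} {'exists' if unit.exists() else 'missing'}")
```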
Aug 13 00:15:24.938587 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:15:24.949499 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:15:24.958100 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:15:24.968167 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:15:24.977502 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:15:24.999844 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:15:25.009721 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:15:25.026598 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:15:25.028586 lvm[1774]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:15:25.039225 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:15:25.051160 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:15:25.061918 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:15:25.071748 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:15:25.083022 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:15:25.092544 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:15:25.100583 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:15:25.100603 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:15:25.101235 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:15:25.111270 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:15:25.121107 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:15:25.130098 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:15:25.133873 coreos-metadata[1779]: Aug 13 00:15:25.133 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Aug 13 00:15:25.134766 coreos-metadata[1779]: Aug 13 00:15:25.134 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Aug 13 00:15:25.140186 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:15:25.141200 dbus-daemon[1780]: [system] SELinux support is enabled Aug 13 00:15:25.141946 jq[1783]: false Aug 13 00:15:25.149561 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:15:25.150172 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:15:25.157824 extend-filesystems[1785]: Found loop4 Aug 13 00:15:25.157824 extend-filesystems[1785]: Found loop5 Aug 13 00:15:25.189572 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Aug 13 00:15:25.189678 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 45 scanned by (udev-worker) (1647) Aug 13 00:15:25.160161 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Aug 13 00:15:25.189728 extend-filesystems[1785]: Found loop6 Aug 13 00:15:25.189728 extend-filesystems[1785]: Found loop7 Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sda Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sdb Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sdb1 Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sdb2 Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sdb3 Aug 13 00:15:25.189728 extend-filesystems[1785]: Found usr Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sdb4 Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sdb6 Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sdb7 Aug 13 00:15:25.189728 extend-filesystems[1785]: Found sdb9 Aug 13 00:15:25.189728 extend-filesystems[1785]: Checking size of /dev/sdb9 Aug 13 00:15:25.189728 extend-filesystems[1785]: Resized partition /dev/sdb9 Aug 13 00:15:25.190548 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:15:25.346625 extend-filesystems[1793]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:15:25.199368 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:15:25.229015 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:15:25.235677 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Aug 13 00:15:25.253873 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:15:25.369888 sshd_keygen[1810]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:15:25.254234 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:15:25.370030 update_engine[1807]: I20250813 00:15:25.290597 1807 main.cc:92] Flatcar Update Engine starting Aug 13 00:15:25.370030 update_engine[1807]: I20250813 00:15:25.291426 1807 update_check_scheduler.cc:74] Next update check in 9m42s Aug 13 00:15:25.269297 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:15:25.376689 jq[1811]: true Aug 13 00:15:25.279785 systemd-logind[1805]: Watching system buttons on /dev/input/event3 (Power Button) Aug 13 00:15:25.279795 systemd-logind[1805]: Watching system buttons on /dev/input/event2 (Sleep Button) Aug 13 00:15:25.279805 systemd-logind[1805]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Aug 13 00:15:25.280036 systemd-logind[1805]: New seat seat0. Aug 13 00:15:25.281110 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:15:25.310089 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:15:25.338661 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:15:25.338787 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:15:25.338947 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:15:25.339065 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:15:25.376837 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:15:25.376950 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:15:25.387793 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Aug 13 00:15:25.411354 jq[1823]: true Aug 13 00:15:25.412105 (ntainerd)[1824]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:15:25.416580 tar[1819]: linux-amd64/LICENSE Aug 13 00:15:25.416753 tar[1819]: linux-amd64/helm Aug 13 00:15:25.416878 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:15:25.419453 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Aug 13 00:15:25.419593 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Aug 13 00:15:25.432015 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:15:25.453626 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:15:25.461559 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:15:25.461728 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:15:25.464114 bash[1851]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:15:25.472608 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:15:25.472688 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:15:25.489660 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:15:25.502257 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:15:25.513757 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:15:25.513867 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:15:25.520276 locksmithd[1859]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:15:25.532674 systemd[1]: Starting sshkeys.service... Aug 13 00:15:25.540480 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:15:25.553390 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:15:25.582473 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Aug 13 00:15:25.582727 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:15:25.587964 containerd[1824]: time="2025-08-13T00:15:25.587923168Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Aug 13 00:15:25.593893 coreos-metadata[1873]: Aug 13 00:15:25.593 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Aug 13 00:15:25.594664 coreos-metadata[1873]: Aug 13 00:15:25.594 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Aug 13 00:15:25.595469 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Aug 13 00:15:25.599645 containerd[1824]: time="2025-08-13T00:15:25.599600207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:25.599984 systemd-networkd[1732]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d7:69:1b.network. 
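Both coreos-metadata instances above fail their first fetch of https://metadata.packet.net/metadata because bond0 is not yet carrying traffic, then retry. The snippet below is a minimal retry sketch against that same URL, not the coreos-metadata implementation; the attempt count and back-off delay are assumed values.

```python
# Minimal sketch of the retry behaviour visible in the log: the metadata
# endpoint is unreachable until the bond comes up, so fetches are retried.
import time
import urllib.error
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"  # URL as shown in the log

def fetch_metadata(max_attempts: int = 5, delay_s: float = 1.0) -> bytes:
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            print(f"Attempt #{attempt} failed: {exc}")
            time.sleep(delay_s)
    raise RuntimeError("metadata endpoint unreachable")

if __name__ == "__main__":
    print(fetch_metadata()[:200])
```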
Aug 13 00:15:25.600421 containerd[1824]: time="2025-08-13T00:15:25.600405222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600421 containerd[1824]: time="2025-08-13T00:15:25.600419963Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:15:25.600460 containerd[1824]: time="2025-08-13T00:15:25.600429581Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:15:25.600538 containerd[1824]: time="2025-08-13T00:15:25.600529560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:15:25.600557 containerd[1824]: time="2025-08-13T00:15:25.600543256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600587 containerd[1824]: time="2025-08-13T00:15:25.600578413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600587 containerd[1824]: time="2025-08-13T00:15:25.600586799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600707 containerd[1824]: time="2025-08-13T00:15:25.600698352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600725 containerd[1824]: time="2025-08-13T00:15:25.600707196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600725 containerd[1824]: time="2025-08-13T00:15:25.600714530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600725 containerd[1824]: time="2025-08-13T00:15:25.600719496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600774 containerd[1824]: time="2025-08-13T00:15:25.600765404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600905 containerd[1824]: time="2025-08-13T00:15:25.600896386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:25.600984 containerd[1824]: time="2025-08-13T00:15:25.600975405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:25.601007 containerd[1824]: time="2025-08-13T00:15:25.600984443Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:15:25.601040 containerd[1824]: time="2025-08-13T00:15:25.601032622Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 13 00:15:25.601069 containerd[1824]: time="2025-08-13T00:15:25.601062519Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:15:25.601990 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:15:25.611090 containerd[1824]: time="2025-08-13T00:15:25.611048569Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:15:25.611090 containerd[1824]: time="2025-08-13T00:15:25.611073081Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:15:25.611090 containerd[1824]: time="2025-08-13T00:15:25.611082899Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:15:25.611154 containerd[1824]: time="2025-08-13T00:15:25.611091778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:15:25.611154 containerd[1824]: time="2025-08-13T00:15:25.611099592Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:15:25.611181 containerd[1824]: time="2025-08-13T00:15:25.611168332Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:15:25.611476 containerd[1824]: time="2025-08-13T00:15:25.611431793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:15:25.611563 containerd[1824]: time="2025-08-13T00:15:25.611537252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:15:25.611596 containerd[1824]: time="2025-08-13T00:15:25.611569231Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:15:25.611596 containerd[1824]: time="2025-08-13T00:15:25.611582701Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:15:25.611627 containerd[1824]: time="2025-08-13T00:15:25.611596714Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:15:25.611627 containerd[1824]: time="2025-08-13T00:15:25.611607150Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:15:25.611627 containerd[1824]: time="2025-08-13T00:15:25.611623020Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:15:25.611774 containerd[1824]: time="2025-08-13T00:15:25.611731752Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:15:25.611774 containerd[1824]: time="2025-08-13T00:15:25.611746899Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:15:25.611774 containerd[1824]: time="2025-08-13T00:15:25.611755975Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:15:25.611774 containerd[1824]: time="2025-08-13T00:15:25.611765165Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Aug 13 00:15:25.611774 containerd[1824]: time="2025-08-13T00:15:25.611771666Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611784368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611792653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611800069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611807643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611814879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611821675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611828057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611835070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611842194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611850474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611857015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.611866 containerd[1824]: time="2025-08-13T00:15:25.611863550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611870282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611878076Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611890037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611897850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611904182Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611928595Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611939383Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611945243Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611951749Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611957097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611963862Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611969893Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:15:25.612051 containerd[1824]: time="2025-08-13T00:15:25.611975391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:15:25.612223 containerd[1824]: time="2025-08-13T00:15:25.612138869Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:15:25.612223 containerd[1824]: time="2025-08-13T00:15:25.612165431Z" level=info msg="Connect containerd service" Aug 13 00:15:25.612223 containerd[1824]: time="2025-08-13T00:15:25.612181623Z" level=info msg="using legacy CRI server" Aug 13 00:15:25.612223 containerd[1824]: time="2025-08-13T00:15:25.612186201Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:15:25.612343 containerd[1824]: time="2025-08-13T00:15:25.612240929Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:15:25.612609 containerd[1824]: time="2025-08-13T00:15:25.612568214Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:15:25.612701 containerd[1824]: time="2025-08-13T00:15:25.612681994Z" level=info msg="Start subscribing containerd event" Aug 13 00:15:25.612701 containerd[1824]: time="2025-08-13T00:15:25.612696778Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:15:25.612755 containerd[1824]: time="2025-08-13T00:15:25.612712065Z" level=info msg="Start recovering state" Aug 13 00:15:25.612755 containerd[1824]: time="2025-08-13T00:15:25.612722597Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:15:25.612805 containerd[1824]: time="2025-08-13T00:15:25.612762350Z" level=info msg="Start event monitor" Aug 13 00:15:25.612805 containerd[1824]: time="2025-08-13T00:15:25.612778923Z" level=info msg="Start snapshots syncer" Aug 13 00:15:25.612805 containerd[1824]: time="2025-08-13T00:15:25.612787770Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:15:25.612805 containerd[1824]: time="2025-08-13T00:15:25.612795660Z" level=info msg="Start streaming server" Aug 13 00:15:25.612888 containerd[1824]: time="2025-08-13T00:15:25.612839854Z" level=info msg="containerd successfully booted in 0.025673s" Aug 13 00:15:25.616709 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:15:25.638832 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:15:25.659690 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Aug 13 00:15:25.669654 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:15:25.702509 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Aug 13 00:15:25.728136 tar[1819]: linux-amd64/README.md Aug 13 00:15:25.728275 extend-filesystems[1793]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Aug 13 00:15:25.728275 extend-filesystems[1793]: old_desc_blocks = 1, new_desc_blocks = 56 Aug 13 00:15:25.728275 extend-filesystems[1793]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. 
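The resize messages above grow /dev/sdb9 from 553472 to 116605649 blocks at a 4 KiB block size. The short calculation below turns those block counts into sizes; the counts come straight from the log, and the comparison with the ~480 GB Micron drive named earlier is the only interpretation added.

```python
# Worked numbers from the resize2fs/EXT4 messages: block count times 4 KiB.
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize output above

def size_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

if __name__ == "__main__":
    before, after = 553_472, 116_605_649
    print(f"before: {size_gib(before):.2f} GiB")  # ~2.11 GiB initial root filesystem
    print(f"after:  {size_gib(after):.2f} GiB")   # ~444.8 GiB, i.e. most of the ~480 GB drive
```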
Aug 13 00:15:25.773607 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Aug 13 00:15:25.773913 extend-filesystems[1785]: Resized filesystem in /dev/sdb9 Aug 13 00:15:25.799708 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Aug 13 00:15:25.799795 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Aug 13 00:15:25.728736 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:15:25.728857 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:15:25.786210 systemd-networkd[1732]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Aug 13 00:15:25.788545 systemd-networkd[1732]: enp2s0f0np0: Link UP Aug 13 00:15:25.789403 systemd-networkd[1732]: enp2s0f0np0: Gained carrier Aug 13 00:15:25.793875 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:15:25.808413 systemd-networkd[1732]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d7:69:1a.network. Aug 13 00:15:25.809210 systemd-networkd[1732]: enp2s0f1np1: Link UP Aug 13 00:15:25.810144 systemd-networkd[1732]: enp2s0f1np1: Gained carrier Aug 13 00:15:25.829044 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:15:25.830032 systemd-networkd[1732]: bond0: Link UP Aug 13 00:15:25.830888 systemd-networkd[1732]: bond0: Gained carrier Aug 13 00:15:25.831513 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:25.833124 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:25.834072 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:25.834525 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:25.908393 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Aug 13 00:15:25.908422 kernel: bond0: active interface up! Aug 13 00:15:26.024515 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Aug 13 00:15:26.134933 coreos-metadata[1779]: Aug 13 00:15:26.134 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Aug 13 00:15:26.594824 coreos-metadata[1873]: Aug 13 00:15:26.594 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Aug 13 00:15:26.982601 systemd-networkd[1732]: bond0: Gained IPv6LL Aug 13 00:15:26.982895 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:27.430860 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:27.430929 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:27.431956 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:15:27.443459 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:15:27.460713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:27.471400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:15:27.491783 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:15:28.166179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
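The kernel and networkd messages above bring bond0 up with enp2s0f0np0 and enp2s0f1np1 as 10000 Mbps members. A minimal sketch for reading that same state back out of the bonding driver's sysfs attributes follows; the attribute paths are the standard ones, and the interface names are simply the ones this log uses.

```python
# Minimal sketch: read bond membership and link state from sysfs, matching the
# "link status definitely up" and "active interface up" messages above.
from pathlib import Path

BOND = Path("/sys/class/net/bond0")

def read(path: Path) -> str:
    return path.read_text().strip() if path.exists() else "n/a"

if __name__ == "__main__":
    print("mode:      ", read(BOND / "bonding" / "mode"))
    print("mii_status:", read(BOND / "bonding" / "mii_status"))
    for slave in read(BOND / "bonding" / "slaves").split():  # expected: enp2s0f0np0 enp2s0f1np1
        sdir = Path("/sys/class/net") / slave
        print(f"{slave}: operstate={read(sdir / 'operstate')} speed={read(sdir / 'speed')} Mbps")
```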
Aug 13 00:15:28.189656 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:15:28.629215 kubelet[1917]: E0813 00:15:28.629106 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:15:28.630616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:15:28.630706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:15:28.630893 systemd[1]: kubelet.service: Consumed 596ms CPU time, 275.6M memory peak. Aug 13 00:15:29.482417 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Aug 13 00:15:29.482627 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Aug 13 00:15:29.592566 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:15:29.617679 systemd[1]: Started sshd@0-147.75.71.157:22-147.75.109.163:57096.service - OpenSSH per-connection server daemon (147.75.109.163:57096). Aug 13 00:15:29.659615 sshd[1939]: Accepted publickey for core from 147.75.109.163 port 57096 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:29.660409 sshd-session[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:29.667416 systemd-logind[1805]: New session 1 of user core. Aug 13 00:15:29.668289 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:15:29.685682 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:15:29.698774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:15:29.721761 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:15:29.738902 (systemd)[1943]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:15:29.740439 systemd-logind[1805]: New session c1 of user core. Aug 13 00:15:29.842701 systemd[1943]: Queued start job for default target default.target. Aug 13 00:15:29.855085 systemd[1943]: Created slice app.slice - User Application Slice. Aug 13 00:15:29.855100 systemd[1943]: Reached target paths.target - Paths. Aug 13 00:15:29.855152 systemd[1943]: Reached target timers.target - Timers. Aug 13 00:15:29.855847 systemd[1943]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:15:29.861580 systemd[1943]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:15:29.861609 systemd[1943]: Reached target sockets.target - Sockets. Aug 13 00:15:29.861639 systemd[1943]: Reached target basic.target - Basic System. Aug 13 00:15:29.861675 systemd[1943]: Reached target default.target - Main User Target. Aug 13 00:15:29.861700 systemd[1943]: Startup finished in 117ms. Aug 13 00:15:29.861727 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:15:29.872668 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:15:29.940258 systemd[1]: Started sshd@1-147.75.71.157:22-147.75.109.163:49222.service - OpenSSH per-connection server daemon (147.75.109.163:49222). 
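
kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file normally appears only once kubeadm init/join has run. A minimal sketch of the same precondition check follows; it is an illustration, not kubelet's own code.

```go
// kubelet-preflight.go — minimal sketch of the missing-config condition above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the kubelet error above
	if _, err := os.Stat(path); err != nil {
		if os.IsNotExist(err) {
			fmt.Printf("%s is missing; kubelet will keep exiting until it is written\n", path)
			return
		}
		fmt.Printf("cannot stat %s: %v\n", path, err)
		return
	}
	fmt.Printf("%s present; kubelet should get past config loading\n", path)
}
```
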
Aug 13 00:15:29.977435 sshd[1954]: Accepted publickey for core from 147.75.109.163 port 49222 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:29.978108 sshd-session[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:29.980766 systemd-logind[1805]: New session 2 of user core. Aug 13 00:15:29.988647 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:15:30.044599 sshd[1956]: Connection closed by 147.75.109.163 port 49222 Aug 13 00:15:30.044684 sshd-session[1954]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:30.054566 systemd[1]: sshd@1-147.75.71.157:22-147.75.109.163:49222.service: Deactivated successfully. Aug 13 00:15:30.055361 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:15:30.056101 systemd-logind[1805]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:15:30.056799 systemd[1]: Started sshd@2-147.75.71.157:22-147.75.109.163:49232.service - OpenSSH per-connection server daemon (147.75.109.163:49232). Aug 13 00:15:30.068263 systemd-logind[1805]: Removed session 2. Aug 13 00:15:30.093763 sshd[1961]: Accepted publickey for core from 147.75.109.163 port 49232 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:30.094360 sshd-session[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:30.096971 systemd-logind[1805]: New session 3 of user core. Aug 13 00:15:30.104654 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:15:30.160668 sshd[1964]: Connection closed by 147.75.109.163 port 49232 Aug 13 00:15:30.160823 sshd-session[1961]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:30.162084 systemd[1]: sshd@2-147.75.71.157:22-147.75.109.163:49232.service: Deactivated successfully. Aug 13 00:15:30.162948 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:15:30.163686 systemd-logind[1805]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:15:30.164321 systemd-logind[1805]: Removed session 3. Aug 13 00:15:30.188354 coreos-metadata[1873]: Aug 13 00:15:30.188 INFO Fetch successful Aug 13 00:15:30.222403 unknown[1873]: wrote ssh authorized keys file for user: core Aug 13 00:15:30.243584 update-ssh-keys[1969]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:15:30.243884 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:15:30.255278 systemd[1]: Finished sshkeys.service. Aug 13 00:15:30.390867 coreos-metadata[1779]: Aug 13 00:15:30.390 INFO Fetch successful Aug 13 00:15:30.434749 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:15:30.445823 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Aug 13 00:15:30.666989 login[1893]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:15:30.669563 login[1892]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:15:30.669863 systemd-logind[1805]: New session 4 of user core. Aug 13 00:15:30.670444 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:15:30.672107 systemd-logind[1805]: New session 5 of user core. Aug 13 00:15:30.672559 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:15:30.885334 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Aug 13 00:15:30.887792 systemd[1]: Reached target multi-user.target - Multi-User System. 
Aug 13 00:15:30.888424 systemd[1]: Startup finished in 1.831s (kernel) + 28.444s (initrd) + 9.674s (userspace) = 39.950s. Aug 13 00:15:32.980227 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:38.838637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:15:38.848728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:39.092885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:39.094920 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:15:39.115660 kubelet[2014]: E0813 00:15:39.115583 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:15:39.117894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:15:39.117983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:15:39.118152 systemd[1]: kubelet.service: Consumed 140ms CPU time, 118.4M memory peak. Aug 13 00:15:40.177834 systemd[1]: Started sshd@3-147.75.71.157:22-147.75.109.163:49310.service - OpenSSH per-connection server daemon (147.75.109.163:49310). Aug 13 00:15:40.205917 sshd[2032]: Accepted publickey for core from 147.75.109.163 port 49310 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:40.206623 sshd-session[2032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:40.209469 systemd-logind[1805]: New session 6 of user core. Aug 13 00:15:40.221728 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:15:40.274510 sshd[2034]: Connection closed by 147.75.109.163 port 49310 Aug 13 00:15:40.274646 sshd-session[2032]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:40.289789 systemd[1]: sshd@3-147.75.71.157:22-147.75.109.163:49310.service: Deactivated successfully. Aug 13 00:15:40.290623 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:15:40.291093 systemd-logind[1805]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:15:40.292085 systemd[1]: Started sshd@4-147.75.71.157:22-147.75.109.163:49320.service - OpenSSH per-connection server daemon (147.75.109.163:49320). Aug 13 00:15:40.292614 systemd-logind[1805]: Removed session 6. Aug 13 00:15:40.324952 sshd[2039]: Accepted publickey for core from 147.75.109.163 port 49320 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:40.325870 sshd-session[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:40.329941 systemd-logind[1805]: New session 7 of user core. Aug 13 00:15:40.342881 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:15:40.400645 sshd[2043]: Connection closed by 147.75.109.163 port 49320 Aug 13 00:15:40.400851 sshd-session[2039]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:40.418013 systemd[1]: sshd@4-147.75.71.157:22-147.75.109.163:49320.service: Deactivated successfully. Aug 13 00:15:40.418957 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:15:40.419832 systemd-logind[1805]: Session 7 logged out. 
Waiting for processes to exit. Aug 13 00:15:40.420567 systemd[1]: Started sshd@5-147.75.71.157:22-147.75.109.163:49328.service - OpenSSH per-connection server daemon (147.75.109.163:49328). Aug 13 00:15:40.421274 systemd-logind[1805]: Removed session 7. Aug 13 00:15:40.456319 sshd[2048]: Accepted publickey for core from 147.75.109.163 port 49328 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:40.457331 sshd-session[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:40.461262 systemd-logind[1805]: New session 8 of user core. Aug 13 00:15:40.480831 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:15:40.547034 sshd[2052]: Connection closed by 147.75.109.163 port 49328 Aug 13 00:15:40.547837 sshd-session[2048]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:40.566294 systemd[1]: sshd@5-147.75.71.157:22-147.75.109.163:49328.service: Deactivated successfully. Aug 13 00:15:40.568396 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:15:40.569306 systemd-logind[1805]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:15:40.583951 systemd[1]: Started sshd@6-147.75.71.157:22-147.75.109.163:49338.service - OpenSSH per-connection server daemon (147.75.109.163:49338). Aug 13 00:15:40.585417 systemd-logind[1805]: Removed session 8. Aug 13 00:15:40.634047 sshd[2057]: Accepted publickey for core from 147.75.109.163 port 49338 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:40.635831 sshd-session[2057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:40.643062 systemd-logind[1805]: New session 9 of user core. Aug 13 00:15:40.654820 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:15:40.726086 sudo[2061]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:15:40.726232 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:15:40.745309 sudo[2061]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:40.746334 sshd[2060]: Connection closed by 147.75.109.163 port 49338 Aug 13 00:15:40.746516 sshd-session[2057]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:40.767005 systemd[1]: sshd@6-147.75.71.157:22-147.75.109.163:49338.service: Deactivated successfully. Aug 13 00:15:40.768426 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:15:40.769250 systemd-logind[1805]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:15:40.770860 systemd[1]: Started sshd@7-147.75.71.157:22-147.75.109.163:49354.service - OpenSSH per-connection server daemon (147.75.109.163:49354). Aug 13 00:15:40.771708 systemd-logind[1805]: Removed session 9. Aug 13 00:15:40.818389 sshd[2066]: Accepted publickey for core from 147.75.109.163 port 49354 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:40.821840 sshd-session[2066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:40.834072 systemd-logind[1805]: New session 10 of user core. Aug 13 00:15:40.843909 systemd[1]: Started session-10.scope - Session 10 of User core. 
Aug 13 00:15:40.908389 sudo[2071]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:15:40.908602 sudo[2071]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:15:40.910792 sudo[2071]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:40.913527 sudo[2070]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:15:40.913672 sudo[2070]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:15:40.929821 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:15:40.953561 augenrules[2093]: No rules Aug 13 00:15:40.954302 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:15:40.954604 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:15:40.955744 sudo[2070]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:40.957254 sshd[2069]: Connection closed by 147.75.109.163 port 49354 Aug 13 00:15:40.957766 sshd-session[2066]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:40.964296 systemd[1]: sshd@7-147.75.71.157:22-147.75.109.163:49354.service: Deactivated successfully. Aug 13 00:15:40.966765 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:15:40.968099 systemd-logind[1805]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:15:40.970778 systemd[1]: Started sshd@8-147.75.71.157:22-147.75.109.163:49362.service - OpenSSH per-connection server daemon (147.75.109.163:49362). Aug 13 00:15:40.972491 systemd-logind[1805]: Removed session 10. Aug 13 00:15:41.029418 sshd[2101]: Accepted publickey for core from 147.75.109.163 port 49362 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:15:41.030946 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:41.036829 systemd-logind[1805]: New session 11 of user core. Aug 13 00:15:41.045777 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:15:41.112445 sudo[2105]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:15:41.113283 sudo[2105]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:15:41.473223 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:15:41.473531 (dockerd)[2130]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:15:41.868638 dockerd[2130]: time="2025-08-13T00:15:41.868582879Z" level=info msg="Starting up" Aug 13 00:15:41.932541 dockerd[2130]: time="2025-08-13T00:15:41.932519354Z" level=info msg="Loading containers: start." Aug 13 00:15:42.094477 kernel: Initializing XFRM netlink socket Aug 13 00:15:42.110627 systemd-timesyncd[1734]: Network configuration changed, trying to establish connection. Aug 13 00:15:42.159484 systemd-networkd[1732]: docker0: Link UP Aug 13 00:15:42.195062 dockerd[2130]: time="2025-08-13T00:15:42.194943510Z" level=info msg="Loading containers: done." 
Aug 13 00:15:42.221048 dockerd[2130]: time="2025-08-13T00:15:42.221000133Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:15:42.221048 dockerd[2130]: time="2025-08-13T00:15:42.221045468Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 13 00:15:42.221140 dockerd[2130]: time="2025-08-13T00:15:42.221099400Z" level=info msg="Daemon has completed initialization" Aug 13 00:15:42.221593 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2382023953-merged.mount: Deactivated successfully. Aug 13 00:15:42.234754 dockerd[2130]: time="2025-08-13T00:15:42.234700404Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:15:42.234825 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:15:42.444818 systemd-timesyncd[1734]: Contacted time server [2604:2dc0:101:200::151]:123 (2.flatcar.pool.ntp.org). Aug 13 00:15:42.444879 systemd-timesyncd[1734]: Initial clock synchronization to Wed 2025-08-13 00:15:42.652792 UTC. Aug 13 00:15:43.003065 containerd[1824]: time="2025-08-13T00:15:43.002932855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:15:43.621974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900173118.mount: Deactivated successfully. Aug 13 00:15:44.396483 containerd[1824]: time="2025-08-13T00:15:44.396429495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:44.396708 containerd[1824]: time="2025-08-13T00:15:44.396581373Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 00:15:44.397115 containerd[1824]: time="2025-08-13T00:15:44.397075176Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:44.398756 containerd[1824]: time="2025-08-13T00:15:44.398708670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:44.399410 containerd[1824]: time="2025-08-13T00:15:44.399365084Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.396359817s" Aug 13 00:15:44.399410 containerd[1824]: time="2025-08-13T00:15:44.399387503Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 00:15:44.399777 containerd[1824]: time="2025-08-13T00:15:44.399726359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:15:45.498809 containerd[1824]: time="2025-08-13T00:15:45.498779041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:45.499029 
containerd[1824]: time="2025-08-13T00:15:45.498952209Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 00:15:45.499443 containerd[1824]: time="2025-08-13T00:15:45.499403073Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:45.500980 containerd[1824]: time="2025-08-13T00:15:45.500938722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:45.501544 containerd[1824]: time="2025-08-13T00:15:45.501527693Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.10178633s" Aug 13 00:15:45.501544 containerd[1824]: time="2025-08-13T00:15:45.501543010Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 00:15:45.501835 containerd[1824]: time="2025-08-13T00:15:45.501795475Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 00:15:46.415715 containerd[1824]: time="2025-08-13T00:15:46.415667712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:46.415927 containerd[1824]: time="2025-08-13T00:15:46.415874602Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 00:15:46.416265 containerd[1824]: time="2025-08-13T00:15:46.416252179Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:46.418330 containerd[1824]: time="2025-08-13T00:15:46.418314641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:46.418804 containerd[1824]: time="2025-08-13T00:15:46.418792001Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 916.981609ms" Aug 13 00:15:46.418834 containerd[1824]: time="2025-08-13T00:15:46.418805802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 00:15:46.419335 containerd[1824]: time="2025-08-13T00:15:46.419317570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:15:47.249260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159238432.mount: Deactivated successfully. 
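
Each "Pulled image" line above reports both a size and a wall-clock duration, so a rough effective transfer rate can be estimated. The helper below is only a back-of-the-envelope sketch using the kube-scheduler numbers copied from the log; it ignores compression and layer reuse, so it understates or overstates the real network rate.

```go
// pullrate.go — rough transfer-rate estimate from the logged pull size/duration.
package main

import (
	"fmt"
	"time"
)

func main() {
	sizeBytes := 20778773.0                           // size logged for kube-scheduler v1.32.7
	elapsed, _ := time.ParseDuration("916.981609ms")  // duration logged for the same pull
	rate := sizeBytes / elapsed.Seconds() / (1024 * 1024) // MiB/s, compression ignored
	fmt.Printf("~%.1f MiB/s effective pull rate\n", rate)
}
```
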
Aug 13 00:15:47.444463 containerd[1824]: time="2025-08-13T00:15:47.444438164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:47.444671 containerd[1824]: time="2025-08-13T00:15:47.444652080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 00:15:47.445013 containerd[1824]: time="2025-08-13T00:15:47.445001847Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:47.445902 containerd[1824]: time="2025-08-13T00:15:47.445887486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:47.446323 containerd[1824]: time="2025-08-13T00:15:47.446311675Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.026973323s" Aug 13 00:15:47.446348 containerd[1824]: time="2025-08-13T00:15:47.446327140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 00:15:47.446618 containerd[1824]: time="2025-08-13T00:15:47.446608597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:15:47.962269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509592554.mount: Deactivated successfully. 
Aug 13 00:15:48.502742 containerd[1824]: time="2025-08-13T00:15:48.502713690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:48.503022 containerd[1824]: time="2025-08-13T00:15:48.502905457Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:15:48.503422 containerd[1824]: time="2025-08-13T00:15:48.503408304Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:48.505088 containerd[1824]: time="2025-08-13T00:15:48.505075078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:48.505804 containerd[1824]: time="2025-08-13T00:15:48.505765972Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.059142541s" Aug 13 00:15:48.505804 containerd[1824]: time="2025-08-13T00:15:48.505783637Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:15:48.506066 containerd[1824]: time="2025-08-13T00:15:48.506040194Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:15:48.997440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184193391.mount: Deactivated successfully. 
Aug 13 00:15:48.998519 containerd[1824]: time="2025-08-13T00:15:48.998455052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:48.998721 containerd[1824]: time="2025-08-13T00:15:48.998675916Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:15:48.999251 containerd[1824]: time="2025-08-13T00:15:48.999215478Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:49.000578 containerd[1824]: time="2025-08-13T00:15:49.000564900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:49.001076 containerd[1824]: time="2025-08-13T00:15:49.001065151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 495.011062ms" Aug 13 00:15:49.001101 containerd[1824]: time="2025-08-13T00:15:49.001079269Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:15:49.001362 containerd[1824]: time="2025-08-13T00:15:49.001352379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:15:49.337212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:15:49.349676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:49.575935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89864074.mount: Deactivated successfully. Aug 13 00:15:49.607433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:49.609729 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:15:49.635032 kubelet[2481]: E0813 00:15:49.634992 2481 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:15:49.636747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:15:49.636911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:15:49.637161 systemd[1]: kubelet.service: Consumed 105ms CPU time, 118.2M memory peak. 
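
This is the second restart of kubelet into the same missing-config failure: the restart counter was at 1 earlier and is at 2 here. The sketch below is a hypothetical journal scraper, not part of systemd or kubelet; fed journal output for kubelet.service on standard input, it extracts the last "restart counter is at N" value.

```go
// restartcount.go — hypothetical scraper for the restart-counter lines above.
// Example use (assumed invocation): journalctl -u kubelet | go run restartcount.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`kubelet\.service: Scheduled restart job, restart counter is at (\d+)`)
	sc := bufio.NewScanner(os.Stdin)
	last := ""
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			last = m[1]
		}
	}
	if last == "" {
		fmt.Println("no scheduled restarts seen")
		return
	}
	fmt.Println("kubelet restart counter last seen at", last)
}
```
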
Aug 13 00:15:50.695246 containerd[1824]: time="2025-08-13T00:15:50.695190276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:50.695464 containerd[1824]: time="2025-08-13T00:15:50.695382698Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 00:15:50.695903 containerd[1824]: time="2025-08-13T00:15:50.695860721Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:50.698065 containerd[1824]: time="2025-08-13T00:15:50.698025666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:15:50.698638 containerd[1824]: time="2025-08-13T00:15:50.698595284Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.697227346s" Aug 13 00:15:50.698638 containerd[1824]: time="2025-08-13T00:15:50.698612829Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:15:52.451589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:52.451717 systemd[1]: kubelet.service: Consumed 105ms CPU time, 118.2M memory peak. Aug 13 00:15:52.465654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:52.481750 systemd[1]: Reload requested from client PID 2596 ('systemctl') (unit session-11.scope)... Aug 13 00:15:52.481759 systemd[1]: Reloading... Aug 13 00:15:52.524544 zram_generator::config[2642]: No configuration found. Aug 13 00:15:52.595635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:15:52.679975 systemd[1]: Reloading finished in 198 ms. Aug 13 00:15:52.716738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:52.718816 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:52.719245 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:15:52.719371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:52.719391 systemd[1]: kubelet.service: Consumed 56ms CPU time, 98.3M memory peak. Aug 13 00:15:52.720330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:52.960841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:52.963047 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:15:52.987843 kubelet[2711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:15:52.987843 kubelet[2711]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:15:52.987843 kubelet[2711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:15:52.987843 kubelet[2711]: I0813 00:15:52.987804 2711 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:15:53.322346 kubelet[2711]: I0813 00:15:53.322263 2711 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:15:53.322346 kubelet[2711]: I0813 00:15:53.322278 2711 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:15:53.322463 kubelet[2711]: I0813 00:15:53.322438 2711 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:15:53.346383 kubelet[2711]: E0813 00:15:53.346349 2711 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.71.157:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:53.346815 kubelet[2711]: I0813 00:15:53.346784 2711 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:15:53.351357 kubelet[2711]: E0813 00:15:53.351317 2711 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:15:53.351357 kubelet[2711]: I0813 00:15:53.351351 2711 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:15:53.359739 kubelet[2711]: I0813 00:15:53.359731 2711 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:15:53.361640 kubelet[2711]: I0813 00:15:53.361597 2711 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:15:53.361762 kubelet[2711]: I0813 00:15:53.361614 2711 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-a-e75a6b4c18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:15:53.361762 kubelet[2711]: I0813 00:15:53.361740 2711 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:15:53.361762 kubelet[2711]: I0813 00:15:53.361761 2711 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:15:53.361900 kubelet[2711]: I0813 00:15:53.361829 2711 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:53.365910 kubelet[2711]: I0813 00:15:53.365884 2711 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:15:53.365944 kubelet[2711]: I0813 00:15:53.365914 2711 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:15:53.365944 kubelet[2711]: I0813 00:15:53.365924 2711 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:15:53.365944 kubelet[2711]: I0813 00:15:53.365932 2711 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:15:53.367913 kubelet[2711]: I0813 00:15:53.367878 2711 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:15:53.368174 kubelet[2711]: I0813 00:15:53.368167 2711 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:15:53.369277 kubelet[2711]: W0813 00:15:53.369268 2711 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
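
The NodeConfig dump above lists the hard eviction thresholds kubelet will enforce: nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%, and memory.available < 100Mi. The sketch below only models the percentage-based signals with hypothetical numbers; it illustrates how such a threshold reads and is not the kubelet eviction manager.

```go
// evictioncheck.go — illustrative reading of the percentage thresholds above.
package main

import "fmt"

type threshold struct {
	signal  string
	percent float64 // fraction of capacity, e.g. 0.10 for "less than 10% available"
}

// percentage thresholds copied from the NodeConfig dump above
var hard = []threshold{
	{"nodefs.available", 0.10},
	{"nodefs.inodesFree", 0.05},
	{"imagefs.available", 0.15},
	{"imagefs.inodesFree", 0.05},
}

// shouldEvict reports whether the observed availability is below the threshold.
func shouldEvict(t threshold, available, capacity float64) bool {
	return available < t.percent*capacity
}

func main() {
	// hypothetical observation: 30 GiB free on a 400 GiB filesystem
	fmt.Println(shouldEvict(hard[0], 30, 400)) // true: 30 is below the 10% floor of 40
}
```
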
Aug 13 00:15:53.371077 kubelet[2711]: I0813 00:15:53.371061 2711 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:15:53.371114 kubelet[2711]: I0813 00:15:53.371090 2711 server.go:1287] "Started kubelet" Aug 13 00:15:53.371235 kubelet[2711]: I0813 00:15:53.371177 2711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:15:53.371405 kubelet[2711]: W0813 00:15:53.371376 2711 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.71.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.71.157:6443: connect: connection refused Aug 13 00:15:53.371463 kubelet[2711]: E0813 00:15:53.371443 2711 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.71.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:53.371519 kubelet[2711]: I0813 00:15:53.371462 2711 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:15:53.371519 kubelet[2711]: I0813 00:15:53.371443 2711 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:15:53.371994 kubelet[2711]: W0813 00:15:53.371963 2711 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.71.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-e75a6b4c18&limit=500&resourceVersion=0": dial tcp 147.75.71.157:6443: connect: connection refused Aug 13 00:15:53.372031 kubelet[2711]: E0813 00:15:53.372009 2711 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.71.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-e75a6b4c18&limit=500&resourceVersion=0\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:53.372557 kubelet[2711]: I0813 00:15:53.372548 2711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:15:53.372614 kubelet[2711]: I0813 00:15:53.372605 2711 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:15:53.372646 kubelet[2711]: E0813 00:15:53.372634 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:53.372675 kubelet[2711]: I0813 00:15:53.372641 2711 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:15:53.372675 kubelet[2711]: I0813 00:15:53.372667 2711 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:15:53.372735 kubelet[2711]: I0813 00:15:53.372706 2711 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:15:53.372839 kubelet[2711]: E0813 00:15:53.372823 2711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-e75a6b4c18?timeout=10s\": dial tcp 147.75.71.157:6443: connect: connection refused" interval="200ms" Aug 13 00:15:53.372921 kubelet[2711]: W0813 00:15:53.372896 2711 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.71.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.71.157:6443: connect: connection refused Aug 13 00:15:53.372949 kubelet[2711]: E0813 00:15:53.372939 2711 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.71.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:53.372970 kubelet[2711]: I0813 00:15:53.372953 2711 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:15:53.375465 kubelet[2711]: I0813 00:15:53.375457 2711 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:15:53.375494 kubelet[2711]: I0813 00:15:53.375484 2711 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:15:53.375637 kubelet[2711]: E0813 00:15:53.375629 2711 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:15:53.375844 kubelet[2711]: I0813 00:15:53.375836 2711 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:15:53.377263 kubelet[2711]: E0813 00:15:53.376351 2711 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.71.157:6443/api/v1/namespaces/default/events\": dial tcp 147.75.71.157:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-a-e75a6b4c18.185b2b5bdecf2503 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-a-e75a6b4c18,UID:ci-4230.2.2-a-e75a6b4c18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-a-e75a6b4c18,},FirstTimestamp:2025-08-13 00:15:53.371075843 +0000 UTC m=+0.406173403,LastTimestamp:2025-08-13 00:15:53.371075843 +0000 UTC m=+0.406173403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-a-e75a6b4c18,}" Aug 13 00:15:53.382441 kubelet[2711]: I0813 00:15:53.382434 2711 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:15:53.382441 kubelet[2711]: I0813 00:15:53.382440 2711 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:15:53.382515 kubelet[2711]: I0813 00:15:53.382449 2711 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:53.383418 kubelet[2711]: I0813 00:15:53.383409 2711 policy_none.go:49] "None policy: Start" Aug 13 00:15:53.383418 kubelet[2711]: I0813 00:15:53.383418 2711 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:15:53.383500 kubelet[2711]: I0813 00:15:53.383424 2711 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:15:53.383870 kubelet[2711]: I0813 00:15:53.383841 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:15:53.384465 kubelet[2711]: I0813 00:15:53.384455 2711 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:15:53.384500 kubelet[2711]: I0813 00:15:53.384474 2711 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:15:53.384500 kubelet[2711]: I0813 00:15:53.384488 2711 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:15:53.384500 kubelet[2711]: I0813 00:15:53.384495 2711 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:15:53.384549 kubelet[2711]: E0813 00:15:53.384525 2711 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:15:53.384737 kubelet[2711]: W0813 00:15:53.384725 2711 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.71.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.71.157:6443: connect: connection refused Aug 13 00:15:53.384774 kubelet[2711]: E0813 00:15:53.384746 2711 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.71.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:53.386030 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:15:53.396987 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:15:53.398811 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:15:53.418245 kubelet[2711]: I0813 00:15:53.418201 2711 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:15:53.418363 kubelet[2711]: I0813 00:15:53.418353 2711 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:15:53.418400 kubelet[2711]: I0813 00:15:53.418363 2711 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:15:53.418536 kubelet[2711]: I0813 00:15:53.418525 2711 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:15:53.419225 kubelet[2711]: E0813 00:15:53.419207 2711 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:15:53.419283 kubelet[2711]: E0813 00:15:53.419250 2711 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:53.509221 systemd[1]: Created slice kubepods-burstable-pod25b93cdd837deb03c359fe016cae677c.slice - libcontainer container kubepods-burstable-pod25b93cdd837deb03c359fe016cae677c.slice. 
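
Every reflector list/watch and the certificate signing request above fail with "dial tcp 147.75.71.157:6443: connect: connection refused", which is expected while the control-plane static pods under /etc/kubernetes/manifests have not started yet. A hypothetical standalone probe for that port (address copied from the log) is sketched below; kubelet itself retries automatically, so this is only a diagnostic aid.

```go
// apicheck.go — hypothetical connectivity probe for the API server port above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "147.75.71.157:6443" // API server address taken from the log lines above
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server port is accepting connections")
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("still unreachable; kubelet reflectors will keep retrying")
}
```
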
Aug 13 00:15:53.522293 kubelet[2711]: I0813 00:15:53.522193 2711 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.523036 kubelet[2711]: E0813 00:15:53.522938 2711 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.71.157:6443/api/v1/nodes\": dial tcp 147.75.71.157:6443: connect: connection refused" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.531403 kubelet[2711]: E0813 00:15:53.531316 2711 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.537954 systemd[1]: Created slice kubepods-burstable-podeded66a79d0501494d676da030e3cb17.slice - libcontainer container kubepods-burstable-podeded66a79d0501494d676da030e3cb17.slice. Aug 13 00:15:53.542180 kubelet[2711]: E0813 00:15:53.542121 2711 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.547065 systemd[1]: Created slice kubepods-burstable-pod0dfaeb1ca1f8a4daa142df3e2bd8054e.slice - libcontainer container kubepods-burstable-pod0dfaeb1ca1f8a4daa142df3e2bd8054e.slice. Aug 13 00:15:53.550930 kubelet[2711]: E0813 00:15:53.550846 2711 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.573984 kubelet[2711]: E0813 00:15:53.573721 2711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-e75a6b4c18?timeout=10s\": dial tcp 147.75.71.157:6443: connect: connection refused" interval="400ms" Aug 13 00:15:53.573984 kubelet[2711]: I0813 00:15:53.573774 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.573984 kubelet[2711]: I0813 00:15:53.573853 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.573984 kubelet[2711]: I0813 00:15:53.573916 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25b93cdd837deb03c359fe016cae677c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-a-e75a6b4c18\" (UID: \"25b93cdd837deb03c359fe016cae677c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.573984 kubelet[2711]: I0813 00:15:53.573976 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25b93cdd837deb03c359fe016cae677c-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-a-e75a6b4c18\" (UID: 
\"25b93cdd837deb03c359fe016cae677c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.574544 kubelet[2711]: I0813 00:15:53.574053 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25b93cdd837deb03c359fe016cae677c-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-a-e75a6b4c18\" (UID: \"25b93cdd837deb03c359fe016cae677c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.574544 kubelet[2711]: I0813 00:15:53.574128 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.574544 kubelet[2711]: I0813 00:15:53.574206 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.574544 kubelet[2711]: I0813 00:15:53.574282 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.574544 kubelet[2711]: I0813 00:15:53.574357 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dfaeb1ca1f8a4daa142df3e2bd8054e-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-a-e75a6b4c18\" (UID: \"0dfaeb1ca1f8a4daa142df3e2bd8054e\") " pod="kube-system/kube-scheduler-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.727254 kubelet[2711]: I0813 00:15:53.727184 2711 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.727996 kubelet[2711]: E0813 00:15:53.727912 2711 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.71.157:6443/api/v1/nodes\": dial tcp 147.75.71.157:6443: connect: connection refused" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:53.834446 containerd[1824]: time="2025-08-13T00:15:53.834196228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-a-e75a6b4c18,Uid:25b93cdd837deb03c359fe016cae677c,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:53.843739 containerd[1824]: time="2025-08-13T00:15:53.843693503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-a-e75a6b4c18,Uid:eded66a79d0501494d676da030e3cb17,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:53.852466 containerd[1824]: time="2025-08-13T00:15:53.852452459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-a-e75a6b4c18,Uid:0dfaeb1ca1f8a4daa142df3e2bd8054e,Namespace:kube-system,Attempt:0,}" Aug 13 00:15:53.975064 kubelet[2711]: E0813 00:15:53.975029 2711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://147.75.71.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-e75a6b4c18?timeout=10s\": dial tcp 147.75.71.157:6443: connect: connection refused" interval="800ms" Aug 13 00:15:54.129592 kubelet[2711]: I0813 00:15:54.129557 2711 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:54.129813 kubelet[2711]: E0813 00:15:54.129798 2711 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.71.157:6443/api/v1/nodes\": dial tcp 147.75.71.157:6443: connect: connection refused" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:54.179702 kubelet[2711]: W0813 00:15:54.179621 2711 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.71.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.71.157:6443: connect: connection refused Aug 13 00:15:54.179702 kubelet[2711]: E0813 00:15:54.179692 2711 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.71.157:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:54.282062 kubelet[2711]: W0813 00:15:54.281997 2711 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.71.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-e75a6b4c18&limit=500&resourceVersion=0": dial tcp 147.75.71.157:6443: connect: connection refused Aug 13 00:15:54.282062 kubelet[2711]: E0813 00:15:54.282040 2711 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.71.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-e75a6b4c18&limit=500&resourceVersion=0\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:54.313561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033375701.mount: Deactivated successfully. 
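Editor's note: the entries above show the kubelet repeatedly failing to reach the API server at 147.75.71.157:6443 ("connection refused") and backing off, with the lease retry interval doubling from 400ms to 800ms, while the control-plane static pods that will eventually serve that endpoint are still being sandboxed. A minimal, hypothetical sketch of the same wait-with-doubling-backoff pattern as a plain TCP probe (the delay cap and timeout are assumptions, not values taken from kubelet source):

import socket
import time

def wait_for_apiserver(host: str, port: int = 6443,
                       initial_delay: float = 0.4, max_delay: float = 7.0,
                       timeout: float = 120.0) -> bool:
    """Retry a TCP connect with doubling backoff, mirroring the
    400ms -> 800ms -> ... retry intervals visible in the kubelet log."""
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True          # apiserver is accepting connections
        except OSError:
            time.sleep(delay)        # connection refused: back off and retry
            delay = min(delay * 2, max_delay)
    return False

if __name__ == "__main__":
    print(wait_for_apiserver("147.75.71.157"))

This is only an illustration of the retry pattern seen in the log, not the kubelet's actual lease controller.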
Aug 13 00:15:54.314996 containerd[1824]: time="2025-08-13T00:15:54.314932102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.315962 containerd[1824]: time="2025-08-13T00:15:54.315922597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 00:15:54.316235 containerd[1824]: time="2025-08-13T00:15:54.316205260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.316784 containerd[1824]: time="2025-08-13T00:15:54.316721918Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.317067 containerd[1824]: time="2025-08-13T00:15:54.317022714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:15:54.317557 containerd[1824]: time="2025-08-13T00:15:54.317505260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.317642 containerd[1824]: time="2025-08-13T00:15:54.317595082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:15:54.319250 containerd[1824]: time="2025-08-13T00:15:54.319209367Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 484.789225ms" Aug 13 00:15:54.319446 containerd[1824]: time="2025-08-13T00:15:54.319408639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:15:54.320329 containerd[1824]: time="2025-08-13T00:15:54.320316096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.823085ms" Aug 13 00:15:54.321913 containerd[1824]: time="2025-08-13T00:15:54.321875985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.141759ms" Aug 13 00:15:54.363213 kubelet[2711]: W0813 00:15:54.363181 2711 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.71.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.71.157:6443: connect: connection refused Aug 13 00:15:54.363255 
kubelet[2711]: E0813 00:15:54.363219 2711 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.71.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:54.395832 kubelet[2711]: W0813 00:15:54.395731 2711 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.71.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.71.157:6443: connect: connection refused Aug 13 00:15:54.395832 kubelet[2711]: E0813 00:15:54.395769 2711 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.71.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.71.157:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:15:54.423855 containerd[1824]: time="2025-08-13T00:15:54.423796540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:54.423972 containerd[1824]: time="2025-08-13T00:15:54.423730617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:54.423996 containerd[1824]: time="2025-08-13T00:15:54.423967550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:54.423996 containerd[1824]: time="2025-08-13T00:15:54.423975347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.424066 containerd[1824]: time="2025-08-13T00:15:54.423971686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:15:54.424066 containerd[1824]: time="2025-08-13T00:15:54.424005578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:54.424066 containerd[1824]: time="2025-08-13T00:15:54.424016483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.424066 containerd[1824]: time="2025-08-13T00:15:54.424017246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.424066 containerd[1824]: time="2025-08-13T00:15:54.424032496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:15:54.424066 containerd[1824]: time="2025-08-13T00:15:54.424047929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.424168 containerd[1824]: time="2025-08-13T00:15:54.424068463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.424168 containerd[1824]: time="2025-08-13T00:15:54.424095987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:15:54.455788 systemd[1]: Started cri-containerd-2fe999332cdca0de4a3bf72679d8ab862b71356e47da1d43e16321a229ee1097.scope - libcontainer container 2fe999332cdca0de4a3bf72679d8ab862b71356e47da1d43e16321a229ee1097. Aug 13 00:15:54.456632 systemd[1]: Started cri-containerd-725217e45b38195d5f27b95137291498660b940ca16fe2c778c6be0e25ead554.scope - libcontainer container 725217e45b38195d5f27b95137291498660b940ca16fe2c778c6be0e25ead554. Aug 13 00:15:54.457420 systemd[1]: Started cri-containerd-b3695664283c5d5ffdab324c799ed52739b3689d354d1842bd8ba95cd894296a.scope - libcontainer container b3695664283c5d5ffdab324c799ed52739b3689d354d1842bd8ba95cd894296a. Aug 13 00:15:54.480919 containerd[1824]: time="2025-08-13T00:15:54.480829088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-a-e75a6b4c18,Uid:25b93cdd837deb03c359fe016cae677c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fe999332cdca0de4a3bf72679d8ab862b71356e47da1d43e16321a229ee1097\"" Aug 13 00:15:54.481779 containerd[1824]: time="2025-08-13T00:15:54.481762905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-a-e75a6b4c18,Uid:eded66a79d0501494d676da030e3cb17,Namespace:kube-system,Attempt:0,} returns sandbox id \"725217e45b38195d5f27b95137291498660b940ca16fe2c778c6be0e25ead554\"" Aug 13 00:15:54.481946 containerd[1824]: time="2025-08-13T00:15:54.481932131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-a-e75a6b4c18,Uid:0dfaeb1ca1f8a4daa142df3e2bd8054e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3695664283c5d5ffdab324c799ed52739b3689d354d1842bd8ba95cd894296a\"" Aug 13 00:15:54.482482 containerd[1824]: time="2025-08-13T00:15:54.482459285Z" level=info msg="CreateContainer within sandbox \"2fe999332cdca0de4a3bf72679d8ab862b71356e47da1d43e16321a229ee1097\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:15:54.482711 containerd[1824]: time="2025-08-13T00:15:54.482699719Z" level=info msg="CreateContainer within sandbox \"725217e45b38195d5f27b95137291498660b940ca16fe2c778c6be0e25ead554\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:15:54.482834 containerd[1824]: time="2025-08-13T00:15:54.482819291Z" level=info msg="CreateContainer within sandbox \"b3695664283c5d5ffdab324c799ed52739b3689d354d1842bd8ba95cd894296a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:15:54.489011 containerd[1824]: time="2025-08-13T00:15:54.488998342Z" level=info msg="CreateContainer within sandbox \"725217e45b38195d5f27b95137291498660b940ca16fe2c778c6be0e25ead554\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6fde3cdeb33adc9ba786e4804c067a5ad17176c422b7650a4af7c6b84c73cc5\"" Aug 13 00:15:54.489221 containerd[1824]: time="2025-08-13T00:15:54.489208987Z" level=info msg="StartContainer for \"b6fde3cdeb33adc9ba786e4804c067a5ad17176c422b7650a4af7c6b84c73cc5\"" Aug 13 00:15:54.490053 containerd[1824]: time="2025-08-13T00:15:54.490040002Z" level=info msg="CreateContainer within sandbox \"b3695664283c5d5ffdab324c799ed52739b3689d354d1842bd8ba95cd894296a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fd4d76cc1ae0d82fc1d5364dd9ff2a516ae245d02fcc6a793b7c06872a4d5395\"" Aug 13 00:15:54.490217 containerd[1824]: time="2025-08-13T00:15:54.490206416Z" level=info msg="StartContainer for 
\"fd4d76cc1ae0d82fc1d5364dd9ff2a516ae245d02fcc6a793b7c06872a4d5395\"" Aug 13 00:15:54.490841 containerd[1824]: time="2025-08-13T00:15:54.490829421Z" level=info msg="CreateContainer within sandbox \"2fe999332cdca0de4a3bf72679d8ab862b71356e47da1d43e16321a229ee1097\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"97c517c0d396431033d87fb709a09d6196c95febfe850bc94c2c7e047b072c5f\"" Aug 13 00:15:54.490978 containerd[1824]: time="2025-08-13T00:15:54.490964636Z" level=info msg="StartContainer for \"97c517c0d396431033d87fb709a09d6196c95febfe850bc94c2c7e047b072c5f\"" Aug 13 00:15:54.515650 systemd[1]: Started cri-containerd-97c517c0d396431033d87fb709a09d6196c95febfe850bc94c2c7e047b072c5f.scope - libcontainer container 97c517c0d396431033d87fb709a09d6196c95febfe850bc94c2c7e047b072c5f. Aug 13 00:15:54.516299 systemd[1]: Started cri-containerd-b6fde3cdeb33adc9ba786e4804c067a5ad17176c422b7650a4af7c6b84c73cc5.scope - libcontainer container b6fde3cdeb33adc9ba786e4804c067a5ad17176c422b7650a4af7c6b84c73cc5. Aug 13 00:15:54.517018 systemd[1]: Started cri-containerd-fd4d76cc1ae0d82fc1d5364dd9ff2a516ae245d02fcc6a793b7c06872a4d5395.scope - libcontainer container fd4d76cc1ae0d82fc1d5364dd9ff2a516ae245d02fcc6a793b7c06872a4d5395. Aug 13 00:15:54.540152 containerd[1824]: time="2025-08-13T00:15:54.540128082Z" level=info msg="StartContainer for \"fd4d76cc1ae0d82fc1d5364dd9ff2a516ae245d02fcc6a793b7c06872a4d5395\" returns successfully" Aug 13 00:15:54.540152 containerd[1824]: time="2025-08-13T00:15:54.540148187Z" level=info msg="StartContainer for \"97c517c0d396431033d87fb709a09d6196c95febfe850bc94c2c7e047b072c5f\" returns successfully" Aug 13 00:15:54.541636 containerd[1824]: time="2025-08-13T00:15:54.541611847Z" level=info msg="StartContainer for \"b6fde3cdeb33adc9ba786e4804c067a5ad17176c422b7650a4af7c6b84c73cc5\" returns successfully" Aug 13 00:15:54.931309 kubelet[2711]: I0813 00:15:54.931259 2711 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:55.048001 kubelet[2711]: E0813 00:15:55.047979 2711 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:55.149907 kubelet[2711]: I0813 00:15:55.149887 2711 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:55.149907 kubelet[2711]: E0813 00:15:55.149910 2711 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-a-e75a6b4c18\": node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.155240 kubelet[2711]: E0813 00:15:55.155199 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.256592 kubelet[2711]: E0813 00:15:55.256335 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.357020 kubelet[2711]: E0813 00:15:55.356959 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.394095 kubelet[2711]: E0813 00:15:55.394043 2711 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:55.396752 kubelet[2711]: E0813 00:15:55.396671 2711 kubelet.go:3190] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:55.399719 kubelet[2711]: E0813 00:15:55.399629 2711 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:55.457229 kubelet[2711]: E0813 00:15:55.457098 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.558441 kubelet[2711]: E0813 00:15:55.558185 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.659006 kubelet[2711]: E0813 00:15:55.658883 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.759687 kubelet[2711]: E0813 00:15:55.759581 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.860373 kubelet[2711]: E0813 00:15:55.860250 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:55.961190 kubelet[2711]: E0813 00:15:55.961078 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.061451 kubelet[2711]: E0813 00:15:56.061427 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.162226 kubelet[2711]: E0813 00:15:56.162163 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.263384 kubelet[2711]: E0813 00:15:56.263296 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.364516 kubelet[2711]: E0813 00:15:56.364395 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.403707 kubelet[2711]: E0813 00:15:56.403650 2711 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:56.403707 kubelet[2711]: E0813 00:15:56.403683 2711 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:56.404207 kubelet[2711]: E0813 00:15:56.403939 2711 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:56.465672 kubelet[2711]: E0813 00:15:56.465504 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.565760 kubelet[2711]: E0813 00:15:56.565675 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.666125 kubelet[2711]: E0813 00:15:56.665988 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.767085 kubelet[2711]: E0813 00:15:56.766816 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.867988 kubelet[2711]: E0813 00:15:56.867880 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:56.968204 kubelet[2711]: E0813 00:15:56.968122 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:57.069267 kubelet[2711]: E0813 00:15:57.069107 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:57.173509 kubelet[2711]: I0813 00:15:57.173382 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:57.187108 kubelet[2711]: W0813 00:15:57.187027 2711 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:15:57.187326 kubelet[2711]: I0813 00:15:57.187298 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:57.194291 kubelet[2711]: W0813 00:15:57.194216 2711 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:15:57.194533 kubelet[2711]: I0813 00:15:57.194396 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:57.200158 kubelet[2711]: W0813 00:15:57.200103 2711 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:15:57.367832 kubelet[2711]: I0813 00:15:57.367704 2711 apiserver.go:52] "Watching apiserver" Aug 13 00:15:57.373375 kubelet[2711]: I0813 00:15:57.373290 2711 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:15:57.404948 systemd[1]: Reload requested from client PID 3029 ('systemctl') (unit session-11.scope)... Aug 13 00:15:57.404958 systemd[1]: Reloading... Aug 13 00:15:57.457536 zram_generator::config[3075]: No configuration found. Aug 13 00:15:57.526669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:15:57.619333 systemd[1]: Reloading finished in 214 ms. Aug 13 00:15:57.640330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:57.644320 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:15:57.644485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:57.644536 systemd[1]: kubelet.service: Consumed 854ms CPU time, 140.7M memory peak. Aug 13 00:15:57.661927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:57.916975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
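Editor's note: at this point the node has registered ("Successfully registered node"), mirror pods are being created for the three static control-plane pods, and first-boot provisioning reloads systemd and restarts kubelet.service. A hypothetical way to confirm the same state from outside the node, assuming the kubernetes Python client and an admin kubeconfig in the default location:

from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Node registration, as reported by kubelet_node_status in the log above
for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)

# Mirror pods for the static control-plane pods appear in kube-system
for pod in v1.list_namespaced_pod("kube-system").items:
    if pod.metadata.name.startswith(("kube-apiserver", "kube-controller-manager", "kube-scheduler")):
        print(pod.metadata.name, pod.status.phase)

Equivalent checks with kubectl would be "kubectl get nodes" and "kubectl -n kube-system get pods".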
Aug 13 00:15:57.921022 (kubelet)[3139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:15:57.950206 kubelet[3139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:15:57.950206 kubelet[3139]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:15:57.950206 kubelet[3139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:15:57.950441 kubelet[3139]: I0813 00:15:57.950242 3139 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:15:57.953962 kubelet[3139]: I0813 00:15:57.953922 3139 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:15:57.953962 kubelet[3139]: I0813 00:15:57.953933 3139 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:15:57.954087 kubelet[3139]: I0813 00:15:57.954052 3139 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:15:57.954746 kubelet[3139]: I0813 00:15:57.954735 3139 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:15:57.955903 kubelet[3139]: I0813 00:15:57.955894 3139 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:15:57.957789 kubelet[3139]: E0813 00:15:57.957774 3139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:15:57.957833 kubelet[3139]: I0813 00:15:57.957789 3139 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:15:57.964866 kubelet[3139]: I0813 00:15:57.964829 3139 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:15:57.964970 kubelet[3139]: I0813 00:15:57.964924 3139 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:15:57.965058 kubelet[3139]: I0813 00:15:57.964941 3139 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-a-e75a6b4c18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:15:57.965058 kubelet[3139]: I0813 00:15:57.965035 3139 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:15:57.965058 kubelet[3139]: I0813 00:15:57.965041 3139 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:15:57.965155 kubelet[3139]: I0813 00:15:57.965069 3139 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:57.965173 kubelet[3139]: I0813 00:15:57.965165 3139 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:15:57.965258 kubelet[3139]: I0813 00:15:57.965178 3139 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:15:57.965258 kubelet[3139]: I0813 00:15:57.965187 3139 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:15:57.965258 kubelet[3139]: I0813 00:15:57.965193 3139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:15:57.965641 kubelet[3139]: I0813 00:15:57.965632 3139 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:15:57.965873 kubelet[3139]: I0813 00:15:57.965866 3139 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:15:57.966089 kubelet[3139]: I0813 00:15:57.966083 3139 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:15:57.966108 kubelet[3139]: I0813 00:15:57.966098 3139 server.go:1287] "Started kubelet" Aug 13 00:15:57.966222 kubelet[3139]: I0813 00:15:57.966206 3139 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:15:57.966290 kubelet[3139]: I0813 00:15:57.966252 3139 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:15:57.966434 kubelet[3139]: I0813 00:15:57.966425 3139 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:15:57.967697 kubelet[3139]: I0813 00:15:57.967685 3139 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:15:57.967770 kubelet[3139]: I0813 00:15:57.967686 3139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:15:57.967818 kubelet[3139]: I0813 00:15:57.967808 3139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:15:57.967854 kubelet[3139]: E0813 00:15:57.967831 3139 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:15:57.968102 kubelet[3139]: E0813 00:15:57.968066 3139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-e75a6b4c18\" not found" Aug 13 00:15:57.968102 kubelet[3139]: I0813 00:15:57.968083 3139 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:15:57.968176 kubelet[3139]: I0813 00:15:57.968127 3139 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:15:57.968281 kubelet[3139]: I0813 00:15:57.968261 3139 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:15:57.968548 kubelet[3139]: I0813 00:15:57.968528 3139 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:15:57.969465 kubelet[3139]: I0813 00:15:57.969452 3139 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:15:57.969517 kubelet[3139]: I0813 00:15:57.969470 3139 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:15:57.972931 kubelet[3139]: I0813 00:15:57.972910 3139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:15:57.973516 kubelet[3139]: I0813 00:15:57.973507 3139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:15:57.973553 kubelet[3139]: I0813 00:15:57.973525 3139 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:15:57.973553 kubelet[3139]: I0813 00:15:57.973536 3139 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
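Editor's note: the startup block above logs three flag-deprecation warnings ("should be set via the config file") and then dumps the effective NodeConfig, including the systemd cgroup driver, the static pod path, and the hard eviction thresholds. A hedged sketch of the corresponding KubeletConfiguration fields, with the threshold values transcribed from that dump; it is emitted as JSON from Python only to keep the example self-contained (the file is normally written as YAML, commonly /var/lib/kubelet/config.yaml under kubeadm, and the runtime endpoint shown here is an assumption, not taken from this log):

import json

# Field names follow kubelet.config.k8s.io/v1beta1; eviction values and the
# static pod path come from the NodeConfig dump above. containerRuntimeEndpoint
# is an assumed kubeadm-style default.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "cgroupDriver": "systemd",
    "staticPodPath": "/etc/kubernetes/manifests",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    "evictionHard": {
        "memory.available": "100Mi",
        "nodefs.available": "10%",
        "nodefs.inodesFree": "5%",
        "imagefs.available": "15%",
        "imagefs.inodesFree": "5%",
    },
}

print(json.dumps(kubelet_config, indent=2))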
Aug 13 00:15:57.973553 kubelet[3139]: I0813 00:15:57.973541 3139 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:15:57.973645 kubelet[3139]: E0813 00:15:57.973569 3139 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:15:57.983318 kubelet[3139]: I0813 00:15:57.983270 3139 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:15:57.983318 kubelet[3139]: I0813 00:15:57.983279 3139 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:15:57.983318 kubelet[3139]: I0813 00:15:57.983290 3139 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:15:57.983428 kubelet[3139]: I0813 00:15:57.983373 3139 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:15:57.983428 kubelet[3139]: I0813 00:15:57.983379 3139 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:15:57.983428 kubelet[3139]: I0813 00:15:57.983390 3139 policy_none.go:49] "None policy: Start" Aug 13 00:15:57.983428 kubelet[3139]: I0813 00:15:57.983395 3139 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:15:57.983428 kubelet[3139]: I0813 00:15:57.983400 3139 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:15:57.983519 kubelet[3139]: I0813 00:15:57.983457 3139 state_mem.go:75] "Updated machine memory state" Aug 13 00:15:57.985480 kubelet[3139]: I0813 00:15:57.985471 3139 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:15:57.985577 kubelet[3139]: I0813 00:15:57.985569 3139 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:15:57.985620 kubelet[3139]: I0813 00:15:57.985576 3139 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:15:57.985675 kubelet[3139]: I0813 00:15:57.985667 3139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:15:57.985991 kubelet[3139]: E0813 00:15:57.985981 3139 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:15:58.075443 kubelet[3139]: I0813 00:15:58.075340 3139 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.075731 kubelet[3139]: I0813 00:15:58.075525 3139 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.075853 kubelet[3139]: I0813 00:15:58.075730 3139 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.083589 kubelet[3139]: W0813 00:15:58.083542 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:15:58.083810 kubelet[3139]: W0813 00:15:58.083566 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:15:58.083810 kubelet[3139]: E0813 00:15:58.083671 3139 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-a-e75a6b4c18\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.083810 kubelet[3139]: W0813 00:15:58.083661 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:15:58.084105 kubelet[3139]: E0813 00:15:58.083850 3139 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.084105 kubelet[3139]: E0813 00:15:58.083778 3139 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-a-e75a6b4c18\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.092343 kubelet[3139]: I0813 00:15:58.092256 3139 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.101153 kubelet[3139]: I0813 00:15:58.101099 3139 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.101374 kubelet[3139]: I0813 00:15:58.101261 3139 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.270253 kubelet[3139]: I0813 00:15:58.270043 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25b93cdd837deb03c359fe016cae677c-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-a-e75a6b4c18\" (UID: \"25b93cdd837deb03c359fe016cae677c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.270253 kubelet[3139]: I0813 00:15:58.270190 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25b93cdd837deb03c359fe016cae677c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-a-e75a6b4c18\" (UID: \"25b93cdd837deb03c359fe016cae677c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.270632 kubelet[3139]: I0813 00:15:58.270280 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.270632 kubelet[3139]: I0813 00:15:58.270338 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.270632 kubelet[3139]: I0813 00:15:58.270396 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.270632 kubelet[3139]: I0813 00:15:58.270448 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dfaeb1ca1f8a4daa142df3e2bd8054e-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-a-e75a6b4c18\" (UID: \"0dfaeb1ca1f8a4daa142df3e2bd8054e\") " pod="kube-system/kube-scheduler-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.270632 kubelet[3139]: I0813 00:15:58.270526 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25b93cdd837deb03c359fe016cae677c-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-a-e75a6b4c18\" (UID: \"25b93cdd837deb03c359fe016cae677c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.271135 kubelet[3139]: I0813 00:15:58.270575 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.271135 kubelet[3139]: I0813 00:15:58.270625 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eded66a79d0501494d676da030e3cb17-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-a-e75a6b4c18\" (UID: \"eded66a79d0501494d676da030e3cb17\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.410363 sudo[3183]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:15:58.410528 sudo[3183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:15:58.737875 sudo[3183]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:58.966225 kubelet[3139]: I0813 00:15:58.966179 3139 apiserver.go:52] "Watching apiserver" Aug 13 00:15:58.968393 kubelet[3139]: I0813 00:15:58.968379 3139 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:15:58.976369 kubelet[3139]: I0813 00:15:58.976332 3139 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.976426 kubelet[3139]: I0813 00:15:58.976417 3139 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.979328 kubelet[3139]: W0813 00:15:58.979318 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:15:58.979362 kubelet[3139]: E0813 00:15:58.979338 3139 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-a-e75a6b4c18\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.979565 kubelet[3139]: W0813 00:15:58.979558 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:15:58.979601 kubelet[3139]: E0813 00:15:58.979576 3139 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-a-e75a6b4c18\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" Aug 13 00:15:58.987106 kubelet[3139]: I0813 00:15:58.987081 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-a-e75a6b4c18" podStartSLOduration=1.987061143 podStartE2EDuration="1.987061143s" podCreationTimestamp="2025-08-13 00:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:15:58.98704261 +0000 UTC m=+1.062742878" watchObservedRunningTime="2025-08-13 00:15:58.987061143 +0000 UTC m=+1.062761406" Aug 13 00:15:58.994970 kubelet[3139]: I0813 00:15:58.994913 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-a-e75a6b4c18" podStartSLOduration=1.994902525 podStartE2EDuration="1.994902525s" podCreationTimestamp="2025-08-13 00:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:15:58.990882792 +0000 UTC m=+1.066583055" watchObservedRunningTime="2025-08-13 00:15:58.994902525 +0000 UTC m=+1.070602787" Aug 13 00:15:58.999636 kubelet[3139]: I0813 00:15:58.999593 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-e75a6b4c18" podStartSLOduration=1.999585535 podStartE2EDuration="1.999585535s" podCreationTimestamp="2025-08-13 00:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:15:58.994937243 +0000 UTC m=+1.070637508" watchObservedRunningTime="2025-08-13 00:15:58.999585535 +0000 UTC m=+1.075285796" Aug 13 00:16:00.167804 sudo[2105]: pam_unix(sudo:session): session closed for user root Aug 13 00:16:00.168652 sshd[2104]: Connection closed by 147.75.109.163 port 49362 Aug 13 00:16:00.168854 sshd-session[2101]: pam_unix(sshd:session): session closed for user core Aug 13 00:16:00.170705 systemd[1]: sshd@8-147.75.71.157:22-147.75.109.163:49362.service: Deactivated successfully. Aug 13 00:16:00.171941 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:16:00.172065 systemd[1]: session-11.scope: Consumed 3.223s CPU time, 265.9M memory peak. Aug 13 00:16:00.173369 systemd-logind[1805]: Session 11 logged out. 
Waiting for processes to exit. Aug 13 00:16:00.174084 systemd-logind[1805]: Removed session 11. Aug 13 00:16:03.657999 kubelet[3139]: I0813 00:16:03.657885 3139 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:16:03.659339 containerd[1824]: time="2025-08-13T00:16:03.658769786Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:16:03.660420 kubelet[3139]: I0813 00:16:03.659518 3139 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:16:03.672211 systemd[1]: Created slice kubepods-besteffort-pod59ecad4b_b205_4b7f_811d_fb3edfe7d3c6.slice - libcontainer container kubepods-besteffort-pod59ecad4b_b205_4b7f_811d_fb3edfe7d3c6.slice. Aug 13 00:16:03.689159 systemd[1]: Created slice kubepods-burstable-poda2d6792a_70aa_41d5_9193_3307acec6362.slice - libcontainer container kubepods-burstable-poda2d6792a_70aa_41d5_9193_3307acec6362.slice. Aug 13 00:16:03.712746 kubelet[3139]: I0813 00:16:03.712715 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-run\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.712871 kubelet[3139]: I0813 00:16:03.712757 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-lib-modules\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.712871 kubelet[3139]: I0813 00:16:03.712783 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-etc-cni-netd\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.712871 kubelet[3139]: I0813 00:16:03.712803 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59ecad4b-b205-4b7f-811d-fb3edfe7d3c6-xtables-lock\") pod \"kube-proxy-4k7wg\" (UID: \"59ecad4b-b205-4b7f-811d-fb3edfe7d3c6\") " pod="kube-system/kube-proxy-4k7wg" Aug 13 00:16:03.712871 kubelet[3139]: I0813 00:16:03.712823 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-hostproc\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.712871 kubelet[3139]: I0813 00:16:03.712855 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-cgroup\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713072 kubelet[3139]: I0813 00:16:03.712875 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-xtables-lock\") pod \"cilium-9z4ff\" (UID: 
\"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713072 kubelet[3139]: I0813 00:16:03.712915 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2d6792a-70aa-41d5-9193-3307acec6362-clustermesh-secrets\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713072 kubelet[3139]: I0813 00:16:03.712955 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59ecad4b-b205-4b7f-811d-fb3edfe7d3c6-kube-proxy\") pod \"kube-proxy-4k7wg\" (UID: \"59ecad4b-b205-4b7f-811d-fb3edfe7d3c6\") " pod="kube-system/kube-proxy-4k7wg" Aug 13 00:16:03.713072 kubelet[3139]: I0813 00:16:03.712977 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rv2l\" (UniqueName: \"kubernetes.io/projected/59ecad4b-b205-4b7f-811d-fb3edfe7d3c6-kube-api-access-7rv2l\") pod \"kube-proxy-4k7wg\" (UID: \"59ecad4b-b205-4b7f-811d-fb3edfe7d3c6\") " pod="kube-system/kube-proxy-4k7wg" Aug 13 00:16:03.713072 kubelet[3139]: I0813 00:16:03.713001 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-bpf-maps\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713072 kubelet[3139]: I0813 00:16:03.713020 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cni-path\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713319 kubelet[3139]: I0813 00:16:03.713040 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-config-path\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713319 kubelet[3139]: I0813 00:16:03.713060 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-host-proc-sys-net\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713319 kubelet[3139]: I0813 00:16:03.713079 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-hubble-tls\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713319 kubelet[3139]: I0813 00:16:03.713113 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-host-proc-sys-kernel\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713319 kubelet[3139]: I0813 00:16:03.713143 3139 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd4sz\" (UniqueName: \"kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-kube-api-access-jd4sz\") pod \"cilium-9z4ff\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " pod="kube-system/cilium-9z4ff" Aug 13 00:16:03.713529 kubelet[3139]: I0813 00:16:03.713165 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59ecad4b-b205-4b7f-811d-fb3edfe7d3c6-lib-modules\") pod \"kube-proxy-4k7wg\" (UID: \"59ecad4b-b205-4b7f-811d-fb3edfe7d3c6\") " pod="kube-system/kube-proxy-4k7wg" Aug 13 00:16:03.826682 kubelet[3139]: E0813 00:16:03.826575 3139 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 00:16:03.826682 kubelet[3139]: E0813 00:16:03.826588 3139 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 00:16:03.826682 kubelet[3139]: E0813 00:16:03.826650 3139 projected.go:194] Error preparing data for projected volume kube-api-access-7rv2l for pod kube-system/kube-proxy-4k7wg: configmap "kube-root-ca.crt" not found Aug 13 00:16:03.826682 kubelet[3139]: E0813 00:16:03.826678 3139 projected.go:194] Error preparing data for projected volume kube-api-access-jd4sz for pod kube-system/cilium-9z4ff: configmap "kube-root-ca.crt" not found Aug 13 00:16:03.827139 kubelet[3139]: E0813 00:16:03.826789 3139 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59ecad4b-b205-4b7f-811d-fb3edfe7d3c6-kube-api-access-7rv2l podName:59ecad4b-b205-4b7f-811d-fb3edfe7d3c6 nodeName:}" failed. No retries permitted until 2025-08-13 00:16:04.326739511 +0000 UTC m=+6.402439845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7rv2l" (UniqueName: "kubernetes.io/projected/59ecad4b-b205-4b7f-811d-fb3edfe7d3c6-kube-api-access-7rv2l") pod "kube-proxy-4k7wg" (UID: "59ecad4b-b205-4b7f-811d-fb3edfe7d3c6") : configmap "kube-root-ca.crt" not found Aug 13 00:16:03.827139 kubelet[3139]: E0813 00:16:03.826837 3139 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-kube-api-access-jd4sz podName:a2d6792a-70aa-41d5-9193-3307acec6362 nodeName:}" failed. No retries permitted until 2025-08-13 00:16:04.326812492 +0000 UTC m=+6.402512807 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jd4sz" (UniqueName: "kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-kube-api-access-jd4sz") pod "cilium-9z4ff" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362") : configmap "kube-root-ca.crt" not found Aug 13 00:16:04.419838 kubelet[3139]: E0813 00:16:04.419764 3139 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 00:16:04.419838 kubelet[3139]: E0813 00:16:04.419789 3139 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 00:16:04.419838 kubelet[3139]: E0813 00:16:04.419836 3139 projected.go:194] Error preparing data for projected volume kube-api-access-jd4sz for pod kube-system/cilium-9z4ff: configmap "kube-root-ca.crt" not found Aug 13 00:16:04.419838 kubelet[3139]: E0813 00:16:04.419858 3139 projected.go:194] Error preparing data for projected volume kube-api-access-7rv2l for pod kube-system/kube-proxy-4k7wg: configmap "kube-root-ca.crt" not found Aug 13 00:16:04.420507 kubelet[3139]: E0813 00:16:04.419987 3139 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-kube-api-access-jd4sz podName:a2d6792a-70aa-41d5-9193-3307acec6362 nodeName:}" failed. No retries permitted until 2025-08-13 00:16:05.419927511 +0000 UTC m=+7.495627856 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jd4sz" (UniqueName: "kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-kube-api-access-jd4sz") pod "cilium-9z4ff" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362") : configmap "kube-root-ca.crt" not found Aug 13 00:16:04.420507 kubelet[3139]: E0813 00:16:04.420058 3139 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59ecad4b-b205-4b7f-811d-fb3edfe7d3c6-kube-api-access-7rv2l podName:59ecad4b-b205-4b7f-811d-fb3edfe7d3c6 nodeName:}" failed. No retries permitted until 2025-08-13 00:16:05.420017287 +0000 UTC m=+7.495717610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7rv2l" (UniqueName: "kubernetes.io/projected/59ecad4b-b205-4b7f-811d-fb3edfe7d3c6-kube-api-access-7rv2l") pod "kube-proxy-4k7wg" (UID: "59ecad4b-b205-4b7f-811d-fb3edfe7d3c6") : configmap "kube-root-ca.crt" not found Aug 13 00:16:04.776403 systemd[1]: Created slice kubepods-besteffort-podb0534553_bcfe_41d4_a3c8_21477efb11c7.slice - libcontainer container kubepods-besteffort-podb0534553_bcfe_41d4_a3c8_21477efb11c7.slice. 
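Editor's note: the MountVolume.SetUp failures above are the expected first-boot race. The projected service-account volumes for kube-proxy and cilium-9z4ff need the "kube-root-ca.crt" ConfigMap, which the root-CA publisher in kube-controller-manager only creates once it is running, so the kubelet retries after 500ms and then 1s until it appears. A small, hypothetical check with the kubernetes Python client:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

try:
    cm = v1.read_namespaced_config_map("kube-root-ca.crt", "kube-system")
    bundle = (cm.data or {}).get("ca.crt", "")
    print("kube-root-ca.crt present,", len(bundle), "bytes of CA bundle")
except ApiException as exc:
    if exc.status == 404:
        print("not published yet - kube-controller-manager still starting")
    else:
        raise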
Aug 13 00:16:04.824060 kubelet[3139]: I0813 00:16:04.823931 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0534553-bcfe-41d4-a3c8-21477efb11c7-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qftg6\" (UID: \"b0534553-bcfe-41d4-a3c8-21477efb11c7\") " pod="kube-system/cilium-operator-6c4d7847fc-qftg6" Aug 13 00:16:04.824985 kubelet[3139]: I0813 00:16:04.824115 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v5bl\" (UniqueName: \"kubernetes.io/projected/b0534553-bcfe-41d4-a3c8-21477efb11c7-kube-api-access-5v5bl\") pod \"cilium-operator-6c4d7847fc-qftg6\" (UID: \"b0534553-bcfe-41d4-a3c8-21477efb11c7\") " pod="kube-system/cilium-operator-6c4d7847fc-qftg6" Aug 13 00:16:05.080767 containerd[1824]: time="2025-08-13T00:16:05.080566920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qftg6,Uid:b0534553-bcfe-41d4-a3c8-21477efb11c7,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:05.105644 containerd[1824]: time="2025-08-13T00:16:05.105548658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:05.105835 containerd[1824]: time="2025-08-13T00:16:05.105804537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:05.105835 containerd[1824]: time="2025-08-13T00:16:05.105828843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:05.105893 containerd[1824]: time="2025-08-13T00:16:05.105881694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:05.133730 systemd[1]: Started cri-containerd-b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182.scope - libcontainer container b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182. Aug 13 00:16:05.164548 containerd[1824]: time="2025-08-13T00:16:05.164495970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qftg6,Uid:b0534553-bcfe-41d4-a3c8-21477efb11c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\"" Aug 13 00:16:05.165733 containerd[1824]: time="2025-08-13T00:16:05.165689133Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:16:05.487393 containerd[1824]: time="2025-08-13T00:16:05.487308264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4k7wg,Uid:59ecad4b-b205-4b7f-811d-fb3edfe7d3c6,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:05.491982 containerd[1824]: time="2025-08-13T00:16:05.491949954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9z4ff,Uid:a2d6792a-70aa-41d5-9193-3307acec6362,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:05.501365 containerd[1824]: time="2025-08-13T00:16:05.501324991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:05.501365 containerd[1824]: time="2025-08-13T00:16:05.501353007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:05.501365 containerd[1824]: time="2025-08-13T00:16:05.501360005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:05.501516 containerd[1824]: time="2025-08-13T00:16:05.501398668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:05.502960 containerd[1824]: time="2025-08-13T00:16:05.502927781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:05.502993 containerd[1824]: time="2025-08-13T00:16:05.502968136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:05.503171 containerd[1824]: time="2025-08-13T00:16:05.502987463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:05.503216 containerd[1824]: time="2025-08-13T00:16:05.503204424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:05.527641 systemd[1]: Started cri-containerd-d37d6f509de528ffb01ee4295390e93be37d1c27605d415ed47e1cee15c84fec.scope - libcontainer container d37d6f509de528ffb01ee4295390e93be37d1c27605d415ed47e1cee15c84fec. Aug 13 00:16:05.529528 systemd[1]: Started cri-containerd-4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca.scope - libcontainer container 4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca. Aug 13 00:16:05.540666 containerd[1824]: time="2025-08-13T00:16:05.540637570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4k7wg,Uid:59ecad4b-b205-4b7f-811d-fb3edfe7d3c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d37d6f509de528ffb01ee4295390e93be37d1c27605d415ed47e1cee15c84fec\"" Aug 13 00:16:05.541584 containerd[1824]: time="2025-08-13T00:16:05.541567525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9z4ff,Uid:a2d6792a-70aa-41d5-9193-3307acec6362,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\"" Aug 13 00:16:05.542441 containerd[1824]: time="2025-08-13T00:16:05.542422109Z" level=info msg="CreateContainer within sandbox \"d37d6f509de528ffb01ee4295390e93be37d1c27605d415ed47e1cee15c84fec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:16:05.555801 containerd[1824]: time="2025-08-13T00:16:05.555785809Z" level=info msg="CreateContainer within sandbox \"d37d6f509de528ffb01ee4295390e93be37d1c27605d415ed47e1cee15c84fec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17245f8c148cc4dfc9cf8e4b54c8e3260fdfbaf7f07560a016baedf25497c7c7\"" Aug 13 00:16:05.556043 containerd[1824]: time="2025-08-13T00:16:05.556032176Z" level=info msg="StartContainer for \"17245f8c148cc4dfc9cf8e4b54c8e3260fdfbaf7f07560a016baedf25497c7c7\"" Aug 13 00:16:05.582747 systemd[1]: Started cri-containerd-17245f8c148cc4dfc9cf8e4b54c8e3260fdfbaf7f07560a016baedf25497c7c7.scope - libcontainer container 17245f8c148cc4dfc9cf8e4b54c8e3260fdfbaf7f07560a016baedf25497c7c7. 
Aug 13 00:16:05.603823 containerd[1824]: time="2025-08-13T00:16:05.603791290Z" level=info msg="StartContainer for \"17245f8c148cc4dfc9cf8e4b54c8e3260fdfbaf7f07560a016baedf25497c7c7\" returns successfully" Aug 13 00:16:06.005040 kubelet[3139]: I0813 00:16:06.004958 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4k7wg" podStartSLOduration=3.004931454 podStartE2EDuration="3.004931454s" podCreationTimestamp="2025-08-13 00:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:06.00492032 +0000 UTC m=+8.080620615" watchObservedRunningTime="2025-08-13 00:16:06.004931454 +0000 UTC m=+8.080631738" Aug 13 00:16:06.359243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154817471.mount: Deactivated successfully. Aug 13 00:16:06.705119 containerd[1824]: time="2025-08-13T00:16:06.705070347Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:06.705322 containerd[1824]: time="2025-08-13T00:16:06.705302844Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 00:16:06.705793 containerd[1824]: time="2025-08-13T00:16:06.705749179Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:06.706520 containerd[1824]: time="2025-08-13T00:16:06.706455048Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.540742207s" Aug 13 00:16:06.706520 containerd[1824]: time="2025-08-13T00:16:06.706475396Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:16:06.706992 containerd[1824]: time="2025-08-13T00:16:06.706981773Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:16:06.707442 containerd[1824]: time="2025-08-13T00:16:06.707430410Z" level=info msg="CreateContainer within sandbox \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:16:06.711711 containerd[1824]: time="2025-08-13T00:16:06.711665673Z" level=info msg="CreateContainer within sandbox \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\"" Aug 13 00:16:06.711920 containerd[1824]: time="2025-08-13T00:16:06.711906043Z" level=info msg="StartContainer for \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\"" Aug 13 00:16:06.736949 systemd[1]: Started 
cri-containerd-504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085.scope - libcontainer container 504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085. Aug 13 00:16:06.791137 containerd[1824]: time="2025-08-13T00:16:06.791098724Z" level=info msg="StartContainer for \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\" returns successfully" Aug 13 00:16:07.001493 kubelet[3139]: I0813 00:16:07.001383 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qftg6" podStartSLOduration=1.45979535 podStartE2EDuration="3.001368291s" podCreationTimestamp="2025-08-13 00:16:04 +0000 UTC" firstStartedPulling="2025-08-13 00:16:05.165360701 +0000 UTC m=+7.241060976" lastFinishedPulling="2025-08-13 00:16:06.706933654 +0000 UTC m=+8.782633917" observedRunningTime="2025-08-13 00:16:07.001254799 +0000 UTC m=+9.076955088" watchObservedRunningTime="2025-08-13 00:16:07.001368291 +0000 UTC m=+9.077068560" Aug 13 00:16:10.609286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346941779.mount: Deactivated successfully. Aug 13 00:16:10.942549 update_engine[1807]: I20250813 00:16:10.942479 1807 update_attempter.cc:509] Updating boot flags... Aug 13 00:16:10.973475 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 45 scanned by (udev-worker) (3663) Aug 13 00:16:11.001475 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 45 scanned by (udev-worker) (3666) Aug 13 00:16:11.436103 containerd[1824]: time="2025-08-13T00:16:11.436049100Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:11.436323 containerd[1824]: time="2025-08-13T00:16:11.436261488Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 00:16:11.436596 containerd[1824]: time="2025-08-13T00:16:11.436554040Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:11.438248 containerd[1824]: time="2025-08-13T00:16:11.438233472Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.73123212s" Aug 13 00:16:11.438287 containerd[1824]: time="2025-08-13T00:16:11.438252377Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:16:11.439230 containerd[1824]: time="2025-08-13T00:16:11.439218458Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:16:11.443414 containerd[1824]: time="2025-08-13T00:16:11.443396818Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\"" Aug 13 00:16:11.443649 containerd[1824]: time="2025-08-13T00:16:11.443618182Z" level=info msg="StartContainer for \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\"" Aug 13 00:16:11.467812 systemd[1]: Started cri-containerd-698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6.scope - libcontainer container 698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6. Aug 13 00:16:11.478734 containerd[1824]: time="2025-08-13T00:16:11.478710318Z" level=info msg="StartContainer for \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\" returns successfully" Aug 13 00:16:11.483133 systemd[1]: cri-containerd-698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6.scope: Deactivated successfully. Aug 13 00:16:12.448108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6-rootfs.mount: Deactivated successfully. Aug 13 00:16:12.750567 containerd[1824]: time="2025-08-13T00:16:12.750415921Z" level=info msg="shim disconnected" id=698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6 namespace=k8s.io Aug 13 00:16:12.750567 containerd[1824]: time="2025-08-13T00:16:12.750446545Z" level=warning msg="cleaning up after shim disconnected" id=698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6 namespace=k8s.io Aug 13 00:16:12.750567 containerd[1824]: time="2025-08-13T00:16:12.750452484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:16:13.015336 containerd[1824]: time="2025-08-13T00:16:13.015124895Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:16:13.022999 containerd[1824]: time="2025-08-13T00:16:13.022934651Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\"" Aug 13 00:16:13.023391 containerd[1824]: time="2025-08-13T00:16:13.023377389Z" level=info msg="StartContainer for \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\"" Aug 13 00:16:13.043818 systemd[1]: Started cri-containerd-b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611.scope - libcontainer container b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611. Aug 13 00:16:13.061760 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:16:13.061912 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:16:13.062000 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:16:13.062702 containerd[1824]: time="2025-08-13T00:16:13.062653566Z" level=info msg="StartContainer for \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\" returns successfully" Aug 13 00:16:13.074910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:16:13.075157 systemd[1]: cri-containerd-b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611.scope: Deactivated successfully. Aug 13 00:16:13.081958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 00:16:13.083644 containerd[1824]: time="2025-08-13T00:16:13.083582991Z" level=info msg="shim disconnected" id=b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611 namespace=k8s.io Aug 13 00:16:13.083644 containerd[1824]: time="2025-08-13T00:16:13.083614020Z" level=warning msg="cleaning up after shim disconnected" id=b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611 namespace=k8s.io Aug 13 00:16:13.083644 containerd[1824]: time="2025-08-13T00:16:13.083620990Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:16:13.447705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611-rootfs.mount: Deactivated successfully. Aug 13 00:16:14.020766 containerd[1824]: time="2025-08-13T00:16:14.020678242Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:16:14.035392 containerd[1824]: time="2025-08-13T00:16:14.035371642Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\"" Aug 13 00:16:14.035643 containerd[1824]: time="2025-08-13T00:16:14.035630974Z" level=info msg="StartContainer for \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\"" Aug 13 00:16:14.066930 systemd[1]: Started cri-containerd-41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23.scope - libcontainer container 41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23. Aug 13 00:16:14.125103 containerd[1824]: time="2025-08-13T00:16:14.125040618Z" level=info msg="StartContainer for \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\" returns successfully" Aug 13 00:16:14.127906 systemd[1]: cri-containerd-41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23.scope: Deactivated successfully. Aug 13 00:16:14.147904 containerd[1824]: time="2025-08-13T00:16:14.147871678Z" level=info msg="shim disconnected" id=41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23 namespace=k8s.io Aug 13 00:16:14.147904 containerd[1824]: time="2025-08-13T00:16:14.147902219Z" level=warning msg="cleaning up after shim disconnected" id=41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23 namespace=k8s.io Aug 13 00:16:14.148014 containerd[1824]: time="2025-08-13T00:16:14.147910319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:16:14.448060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23-rootfs.mount: Deactivated successfully. 
Aug 13 00:16:15.028104 containerd[1824]: time="2025-08-13T00:16:15.028010885Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:16:15.039542 containerd[1824]: time="2025-08-13T00:16:15.039483301Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\"" Aug 13 00:16:15.039871 containerd[1824]: time="2025-08-13T00:16:15.039853132Z" level=info msg="StartContainer for \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\"" Aug 13 00:16:15.069753 systemd[1]: Started cri-containerd-3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a.scope - libcontainer container 3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a. Aug 13 00:16:15.082820 systemd[1]: cri-containerd-3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a.scope: Deactivated successfully. Aug 13 00:16:15.083277 containerd[1824]: time="2025-08-13T00:16:15.083253670Z" level=info msg="StartContainer for \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\" returns successfully" Aug 13 00:16:15.129495 containerd[1824]: time="2025-08-13T00:16:15.129345825Z" level=info msg="shim disconnected" id=3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a namespace=k8s.io Aug 13 00:16:15.129495 containerd[1824]: time="2025-08-13T00:16:15.129456516Z" level=warning msg="cleaning up after shim disconnected" id=3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a namespace=k8s.io Aug 13 00:16:15.129981 containerd[1824]: time="2025-08-13T00:16:15.129523741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:16:15.448658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a-rootfs.mount: Deactivated successfully. Aug 13 00:16:16.036129 containerd[1824]: time="2025-08-13T00:16:16.036040396Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:16:16.047030 containerd[1824]: time="2025-08-13T00:16:16.046983325Z" level=info msg="CreateContainer within sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\"" Aug 13 00:16:16.047278 containerd[1824]: time="2025-08-13T00:16:16.047266523Z" level=info msg="StartContainer for \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\"" Aug 13 00:16:16.073019 systemd[1]: Started cri-containerd-684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce.scope - libcontainer container 684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce. 
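The create/start/deactivate cycles above are the Cilium pod's init containers running in order inside sandbox 4c7386dde259…: mount-cgroup (698ebb6d…), apply-sysctl-overwrites (b2efdbd8…), mount-bpf-fs (41ab53cc…) and clean-cilium-state (3a2595c5…) each run to completion (scope deactivated, shim disconnected, rootfs unmounted) before the long-lived cilium-agent container (684f2301…) starts. To map the bare container IDs in these entries back to those names, a hedged client-go sketch such as the following could list the pod's init-container statuses; the kubeconfig path is again an assumption.

```go
// Illustrative sketch: print the init-container sequence of the cilium-9z4ff
// pod so the container IDs in the log can be matched to container names.
// The kubeconfig path is an assumed example.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").
		Get(context.Background(), "cilium-9z4ff", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.InitContainerStatuses {
		// ContainerID carries the "containerd://<id>" handle seen in the log.
		fmt.Printf("%-25s %s", st.Name, st.ContainerID)
		if st.State.Terminated != nil {
			fmt.Printf(" exit=%d", st.State.Terminated.ExitCode)
		}
		fmt.Println()
	}
}
```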
Aug 13 00:16:16.134497 containerd[1824]: time="2025-08-13T00:16:16.134446625Z" level=info msg="StartContainer for \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\" returns successfully" Aug 13 00:16:16.248764 kubelet[3139]: I0813 00:16:16.248745 3139 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:16:16.261974 systemd[1]: Created slice kubepods-burstable-pod3654ed06_cd75_4648_aa70_eabd96ba7338.slice - libcontainer container kubepods-burstable-pod3654ed06_cd75_4648_aa70_eabd96ba7338.slice. Aug 13 00:16:16.264392 systemd[1]: Created slice kubepods-burstable-pod4251e5b6_40eb_4ca7_80cd_759592e6f813.slice - libcontainer container kubepods-burstable-pod4251e5b6_40eb_4ca7_80cd_759592e6f813.slice. Aug 13 00:16:16.306351 kubelet[3139]: I0813 00:16:16.306287 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltl8p\" (UniqueName: \"kubernetes.io/projected/3654ed06-cd75-4648-aa70-eabd96ba7338-kube-api-access-ltl8p\") pod \"coredns-668d6bf9bc-lvbrw\" (UID: \"3654ed06-cd75-4648-aa70-eabd96ba7338\") " pod="kube-system/coredns-668d6bf9bc-lvbrw" Aug 13 00:16:16.306351 kubelet[3139]: I0813 00:16:16.306321 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3654ed06-cd75-4648-aa70-eabd96ba7338-config-volume\") pod \"coredns-668d6bf9bc-lvbrw\" (UID: \"3654ed06-cd75-4648-aa70-eabd96ba7338\") " pod="kube-system/coredns-668d6bf9bc-lvbrw" Aug 13 00:16:16.306351 kubelet[3139]: I0813 00:16:16.306336 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4251e5b6-40eb-4ca7-80cd-759592e6f813-config-volume\") pod \"coredns-668d6bf9bc-64hwq\" (UID: \"4251e5b6-40eb-4ca7-80cd-759592e6f813\") " pod="kube-system/coredns-668d6bf9bc-64hwq" Aug 13 00:16:16.306457 kubelet[3139]: I0813 00:16:16.306360 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9n98\" (UniqueName: \"kubernetes.io/projected/4251e5b6-40eb-4ca7-80cd-759592e6f813-kube-api-access-s9n98\") pod \"coredns-668d6bf9bc-64hwq\" (UID: \"4251e5b6-40eb-4ca7-80cd-759592e6f813\") " pod="kube-system/coredns-668d6bf9bc-64hwq" Aug 13 00:16:16.565426 containerd[1824]: time="2025-08-13T00:16:16.565153724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvbrw,Uid:3654ed06-cd75-4648-aa70-eabd96ba7338,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:16.566698 containerd[1824]: time="2025-08-13T00:16:16.566652604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-64hwq,Uid:4251e5b6-40eb-4ca7-80cd-759592e6f813,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:17.058049 kubelet[3139]: I0813 00:16:17.057990 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9z4ff" podStartSLOduration=8.161483009 podStartE2EDuration="14.057978726s" podCreationTimestamp="2025-08-13 00:16:03 +0000 UTC" firstStartedPulling="2025-08-13 00:16:05.542173841 +0000 UTC m=+7.617874114" lastFinishedPulling="2025-08-13 00:16:11.438669568 +0000 UTC m=+13.514369831" observedRunningTime="2025-08-13 00:16:17.057739312 +0000 UTC m=+19.133439575" watchObservedRunningTime="2025-08-13 00:16:17.057978726 +0000 UTC m=+19.133678987" Aug 13 00:16:17.933150 systemd-networkd[1732]: cilium_host: Link UP Aug 13 00:16:17.933249 
systemd-networkd[1732]: cilium_net: Link UP Aug 13 00:16:17.933359 systemd-networkd[1732]: cilium_net: Gained carrier Aug 13 00:16:17.933476 systemd-networkd[1732]: cilium_host: Gained carrier Aug 13 00:16:17.979184 systemd-networkd[1732]: cilium_vxlan: Link UP Aug 13 00:16:17.979188 systemd-networkd[1732]: cilium_vxlan: Gained carrier Aug 13 00:16:18.132510 kernel: NET: Registered PF_ALG protocol family Aug 13 00:16:18.502592 systemd-networkd[1732]: cilium_host: Gained IPv6LL Aug 13 00:16:18.636000 systemd-networkd[1732]: lxc_health: Link UP Aug 13 00:16:18.636184 systemd-networkd[1732]: lxc_health: Gained carrier Aug 13 00:16:18.950594 systemd-networkd[1732]: cilium_net: Gained IPv6LL Aug 13 00:16:19.114477 kernel: eth0: renamed from tmp3cb52 Aug 13 00:16:19.135475 kernel: eth0: renamed from tmp1c922 Aug 13 00:16:19.146210 systemd-networkd[1732]: lxc7d9e384c6f20: Link UP Aug 13 00:16:19.146407 systemd-networkd[1732]: lxcfc0b63bbe13b: Link UP Aug 13 00:16:19.146687 systemd-networkd[1732]: lxc7d9e384c6f20: Gained carrier Aug 13 00:16:19.146878 systemd-networkd[1732]: cilium_vxlan: Gained IPv6LL Aug 13 00:16:19.146984 systemd-networkd[1732]: lxcfc0b63bbe13b: Gained carrier Aug 13 00:16:20.166632 systemd-networkd[1732]: lxcfc0b63bbe13b: Gained IPv6LL Aug 13 00:16:20.550596 systemd-networkd[1732]: lxc7d9e384c6f20: Gained IPv6LL Aug 13 00:16:20.550820 systemd-networkd[1732]: lxc_health: Gained IPv6LL Aug 13 00:16:21.410836 containerd[1824]: time="2025-08-13T00:16:21.410713633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:21.410836 containerd[1824]: time="2025-08-13T00:16:21.410761342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:21.410836 containerd[1824]: time="2025-08-13T00:16:21.410768479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:21.411203 containerd[1824]: time="2025-08-13T00:16:21.410847224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:21.411366 containerd[1824]: time="2025-08-13T00:16:21.411315224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:21.411366 containerd[1824]: time="2025-08-13T00:16:21.411341088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:21.411366 containerd[1824]: time="2025-08-13T00:16:21.411348425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:21.411422 containerd[1824]: time="2025-08-13T00:16:21.411384670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:21.432806 systemd[1]: Started cri-containerd-1c922a1dd3ab9f7ba1b066f0180c138f051a567df824be27afbcf74a8f61ed09.scope - libcontainer container 1c922a1dd3ab9f7ba1b066f0180c138f051a567df824be27afbcf74a8f61ed09. Aug 13 00:16:21.433698 systemd[1]: Started cri-containerd-3cb52dd5556e68c54fe16702a5ec83736d2ee10cda5edca8c1f924ef5e58f352.scope - libcontainer container 3cb52dd5556e68c54fe16702a5ec83736d2ee10cda5edca8c1f924ef5e58f352. 
Aug 13 00:16:21.455320 containerd[1824]: time="2025-08-13T00:16:21.455293471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-64hwq,Uid:4251e5b6-40eb-4ca7-80cd-759592e6f813,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c922a1dd3ab9f7ba1b066f0180c138f051a567df824be27afbcf74a8f61ed09\"" Aug 13 00:16:21.455320 containerd[1824]: time="2025-08-13T00:16:21.455316120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvbrw,Uid:3654ed06-cd75-4648-aa70-eabd96ba7338,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cb52dd5556e68c54fe16702a5ec83736d2ee10cda5edca8c1f924ef5e58f352\"" Aug 13 00:16:21.456387 containerd[1824]: time="2025-08-13T00:16:21.456372359Z" level=info msg="CreateContainer within sandbox \"1c922a1dd3ab9f7ba1b066f0180c138f051a567df824be27afbcf74a8f61ed09\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:16:21.456445 containerd[1824]: time="2025-08-13T00:16:21.456371721Z" level=info msg="CreateContainer within sandbox \"3cb52dd5556e68c54fe16702a5ec83736d2ee10cda5edca8c1f924ef5e58f352\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:16:21.461189 containerd[1824]: time="2025-08-13T00:16:21.461171895Z" level=info msg="CreateContainer within sandbox \"1c922a1dd3ab9f7ba1b066f0180c138f051a567df824be27afbcf74a8f61ed09\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db0ad110a89aca293df4f2438fe076800b51fd8f126568f86108fae7aec2435b\"" Aug 13 00:16:21.461389 containerd[1824]: time="2025-08-13T00:16:21.461377718Z" level=info msg="StartContainer for \"db0ad110a89aca293df4f2438fe076800b51fd8f126568f86108fae7aec2435b\"" Aug 13 00:16:21.462081 containerd[1824]: time="2025-08-13T00:16:21.462066730Z" level=info msg="CreateContainer within sandbox \"3cb52dd5556e68c54fe16702a5ec83736d2ee10cda5edca8c1f924ef5e58f352\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23d027b58ac659f9e0a70b9306b41a6aa299a9fdedb136694fb51c3aef938126\"" Aug 13 00:16:21.462253 containerd[1824]: time="2025-08-13T00:16:21.462241625Z" level=info msg="StartContainer for \"23d027b58ac659f9e0a70b9306b41a6aa299a9fdedb136694fb51c3aef938126\"" Aug 13 00:16:21.489820 systemd[1]: Started cri-containerd-23d027b58ac659f9e0a70b9306b41a6aa299a9fdedb136694fb51c3aef938126.scope - libcontainer container 23d027b58ac659f9e0a70b9306b41a6aa299a9fdedb136694fb51c3aef938126. Aug 13 00:16:21.490454 systemd[1]: Started cri-containerd-db0ad110a89aca293df4f2438fe076800b51fd8f126568f86108fae7aec2435b.scope - libcontainer container db0ad110a89aca293df4f2438fe076800b51fd8f126568f86108fae7aec2435b. 
Aug 13 00:16:21.502385 containerd[1824]: time="2025-08-13T00:16:21.502355304Z" level=info msg="StartContainer for \"23d027b58ac659f9e0a70b9306b41a6aa299a9fdedb136694fb51c3aef938126\" returns successfully" Aug 13 00:16:21.503009 containerd[1824]: time="2025-08-13T00:16:21.502991470Z" level=info msg="StartContainer for \"db0ad110a89aca293df4f2438fe076800b51fd8f126568f86108fae7aec2435b\" returns successfully" Aug 13 00:16:22.062979 kubelet[3139]: I0813 00:16:22.062857 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-64hwq" podStartSLOduration=18.062844511 podStartE2EDuration="18.062844511s" podCreationTimestamp="2025-08-13 00:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:22.062827612 +0000 UTC m=+24.138527876" watchObservedRunningTime="2025-08-13 00:16:22.062844511 +0000 UTC m=+24.138544775" Aug 13 00:16:22.067801 kubelet[3139]: I0813 00:16:22.067750 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lvbrw" podStartSLOduration=18.067724362 podStartE2EDuration="18.067724362s" podCreationTimestamp="2025-08-13 00:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:22.067529816 +0000 UTC m=+24.143230079" watchObservedRunningTime="2025-08-13 00:16:22.067724362 +0000 UTC m=+24.143424623" Aug 13 00:16:33.168649 kubelet[3139]: I0813 00:16:33.168528 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:22:08.271303 systemd[1]: Started sshd@9-147.75.71.157:22-147.75.109.163:33134.service - OpenSSH per-connection server daemon (147.75.109.163:33134). Aug 13 00:22:08.313997 sshd[4758]: Accepted publickey for core from 147.75.109.163 port 33134 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:08.315307 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:08.320755 systemd-logind[1805]: New session 12 of user core. Aug 13 00:22:08.347051 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:22:08.505667 sshd[4760]: Connection closed by 147.75.109.163 port 33134 Aug 13 00:22:08.505886 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:08.507991 systemd[1]: sshd@9-147.75.71.157:22-147.75.109.163:33134.service: Deactivated successfully. Aug 13 00:22:08.509170 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:22:08.510112 systemd-logind[1805]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:22:08.510740 systemd-logind[1805]: Removed session 12. Aug 13 00:22:13.533671 systemd[1]: Started sshd@10-147.75.71.157:22-147.75.109.163:33138.service - OpenSSH per-connection server daemon (147.75.109.163:33138). Aug 13 00:22:13.560411 sshd[4788]: Accepted publickey for core from 147.75.109.163 port 33138 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:13.561185 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:13.564163 systemd-logind[1805]: New session 13 of user core. Aug 13 00:22:13.580742 systemd[1]: Started session-13.scope - Session 13 of User core. 
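In the pod_startup_latency_tracker entries above, the reported podStartE2EDuration is exactly the gap between podCreationTimestamp and watchObservedRunningTime (18.062844511s for coredns-668d6bf9bc-64hwq), and for the pods that pulled images earlier (cilium-9z4ff, cilium-operator-6c4d7847fc-qftg6) the smaller podStartSLOduration is that same gap minus the firstStartedPulling→lastFinishedPulling window, to within tens of nanoseconds. A stdlib-only sketch of that arithmetic, using the timestamps exactly as printed in the log:

```go
// Illustrative sketch: reproduce the 18.062844511s podStartE2EDuration reported
// for coredns-668d6bf9bc-64hwq from the two timestamps printed in the log.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout matching the default time.Time formatting used in these entries.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-08-13 00:16:04 +0000 UTC") // podCreationTimestamp
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-08-13 00:16:22.062844511 +0000 UTC") // watchObservedRunningTime
	if err != nil {
		log.Fatal(err)
	}

	// Prints 18.062844511s, matching the logged podStartE2EDuration exactly.
	fmt.Println(running.Sub(created))
}
```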
Aug 13 00:22:13.667265 sshd[4790]: Connection closed by 147.75.109.163 port 33138 Aug 13 00:22:13.667498 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:13.669203 systemd[1]: sshd@10-147.75.71.157:22-147.75.109.163:33138.service: Deactivated successfully. Aug 13 00:22:13.670196 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:22:13.670948 systemd-logind[1805]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:22:13.671448 systemd-logind[1805]: Removed session 13. Aug 13 00:22:18.689184 systemd[1]: Started sshd@11-147.75.71.157:22-147.75.109.163:36638.service - OpenSSH per-connection server daemon (147.75.109.163:36638). Aug 13 00:22:18.718043 sshd[4816]: Accepted publickey for core from 147.75.109.163 port 36638 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:18.718764 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:18.722071 systemd-logind[1805]: New session 14 of user core. Aug 13 00:22:18.735698 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:22:18.832689 sshd[4818]: Connection closed by 147.75.109.163 port 36638 Aug 13 00:22:18.832902 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:18.834332 systemd[1]: sshd@11-147.75.71.157:22-147.75.109.163:36638.service: Deactivated successfully. Aug 13 00:22:18.835270 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:22:18.836007 systemd-logind[1805]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:22:18.836608 systemd-logind[1805]: Removed session 14. Aug 13 00:22:23.878754 systemd[1]: Started sshd@12-147.75.71.157:22-147.75.109.163:36644.service - OpenSSH per-connection server daemon (147.75.109.163:36644). Aug 13 00:22:23.907740 sshd[4844]: Accepted publickey for core from 147.75.109.163 port 36644 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:23.908528 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:23.911974 systemd-logind[1805]: New session 15 of user core. Aug 13 00:22:23.913196 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:22:24.000644 sshd[4846]: Connection closed by 147.75.109.163 port 36644 Aug 13 00:22:24.000837 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:24.014051 systemd[1]: sshd@12-147.75.71.157:22-147.75.109.163:36644.service: Deactivated successfully. Aug 13 00:22:24.015124 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:22:24.016123 systemd-logind[1805]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:22:24.016999 systemd[1]: Started sshd@13-147.75.71.157:22-147.75.109.163:36650.service - OpenSSH per-connection server daemon (147.75.109.163:36650). Aug 13 00:22:24.017605 systemd-logind[1805]: Removed session 15. Aug 13 00:22:24.052629 sshd[4871]: Accepted publickey for core from 147.75.109.163 port 36650 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:24.053621 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:24.057701 systemd-logind[1805]: New session 16 of user core. Aug 13 00:22:24.077697 systemd[1]: Started session-16.scope - Session 16 of User core. 
Aug 13 00:22:24.230285 sshd[4874]: Connection closed by 147.75.109.163 port 36650 Aug 13 00:22:24.230901 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:24.254177 systemd[1]: sshd@13-147.75.71.157:22-147.75.109.163:36650.service: Deactivated successfully. Aug 13 00:22:24.261589 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:22:24.266876 systemd-logind[1805]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:22:24.290163 systemd[1]: Started sshd@14-147.75.71.157:22-147.75.109.163:36654.service - OpenSSH per-connection server daemon (147.75.109.163:36654). Aug 13 00:22:24.291380 systemd-logind[1805]: Removed session 16. Aug 13 00:22:24.337720 sshd[4896]: Accepted publickey for core from 147.75.109.163 port 36654 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:24.341252 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:24.353976 systemd-logind[1805]: New session 17 of user core. Aug 13 00:22:24.362883 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:22:24.509633 sshd[4901]: Connection closed by 147.75.109.163 port 36654 Aug 13 00:22:24.509805 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:24.511642 systemd[1]: sshd@14-147.75.71.157:22-147.75.109.163:36654.service: Deactivated successfully. Aug 13 00:22:24.512783 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:22:24.513665 systemd-logind[1805]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:22:24.514388 systemd-logind[1805]: Removed session 17. Aug 13 00:22:29.528698 systemd[1]: Started sshd@15-147.75.71.157:22-147.75.109.163:53234.service - OpenSSH per-connection server daemon (147.75.109.163:53234). Aug 13 00:22:29.558159 sshd[4927]: Accepted publickey for core from 147.75.109.163 port 53234 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:29.561553 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:29.573756 systemd-logind[1805]: New session 18 of user core. Aug 13 00:22:29.590940 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:22:29.681714 sshd[4929]: Connection closed by 147.75.109.163 port 53234 Aug 13 00:22:29.681920 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:29.683734 systemd[1]: sshd@15-147.75.71.157:22-147.75.109.163:53234.service: Deactivated successfully. Aug 13 00:22:29.684822 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:22:29.685663 systemd-logind[1805]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:22:29.686331 systemd-logind[1805]: Removed session 18. Aug 13 00:22:34.701758 systemd[1]: Started sshd@16-147.75.71.157:22-147.75.109.163:53248.service - OpenSSH per-connection server daemon (147.75.109.163:53248). Aug 13 00:22:34.730038 sshd[4953]: Accepted publickey for core from 147.75.109.163 port 53248 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:34.730768 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:34.733799 systemd-logind[1805]: New session 19 of user core. Aug 13 00:22:34.751746 systemd[1]: Started session-19.scope - Session 19 of User core. 
Aug 13 00:22:34.840650 sshd[4955]: Connection closed by 147.75.109.163 port 53248 Aug 13 00:22:34.840839 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:34.855939 systemd[1]: sshd@16-147.75.71.157:22-147.75.109.163:53248.service: Deactivated successfully. Aug 13 00:22:34.857002 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:22:34.857908 systemd-logind[1805]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:22:34.858852 systemd[1]: Started sshd@17-147.75.71.157:22-147.75.109.163:53260.service - OpenSSH per-connection server daemon (147.75.109.163:53260). Aug 13 00:22:34.859407 systemd-logind[1805]: Removed session 19. Aug 13 00:22:34.891212 sshd[4979]: Accepted publickey for core from 147.75.109.163 port 53260 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:34.891801 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:34.894404 systemd-logind[1805]: New session 20 of user core. Aug 13 00:22:34.910054 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:22:35.055581 sshd[4983]: Connection closed by 147.75.109.163 port 53260 Aug 13 00:22:35.055694 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:35.067521 systemd[1]: sshd@17-147.75.71.157:22-147.75.109.163:53260.service: Deactivated successfully. Aug 13 00:22:35.068427 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:22:35.069224 systemd-logind[1805]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:22:35.069983 systemd[1]: Started sshd@18-147.75.71.157:22-147.75.109.163:53270.service - OpenSSH per-connection server daemon (147.75.109.163:53270). Aug 13 00:22:35.070405 systemd-logind[1805]: Removed session 20. Aug 13 00:22:35.098932 sshd[5002]: Accepted publickey for core from 147.75.109.163 port 53270 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:35.099650 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:35.102740 systemd-logind[1805]: New session 21 of user core. Aug 13 00:22:35.111766 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:22:35.704738 sshd[5005]: Connection closed by 147.75.109.163 port 53270 Aug 13 00:22:35.704945 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:35.731126 systemd[1]: sshd@18-147.75.71.157:22-147.75.109.163:53270.service: Deactivated successfully. Aug 13 00:22:35.736880 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:22:35.740691 systemd-logind[1805]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:22:35.754500 systemd[1]: Started sshd@19-147.75.71.157:22-147.75.109.163:53280.service - OpenSSH per-connection server daemon (147.75.109.163:53280). Aug 13 00:22:35.757878 systemd-logind[1805]: Removed session 21. Aug 13 00:22:35.815875 sshd[5037]: Accepted publickey for core from 147.75.109.163 port 53280 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:35.817046 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:35.821885 systemd-logind[1805]: New session 22 of user core. Aug 13 00:22:35.832603 systemd[1]: Started session-22.scope - Session 22 of User core. 
Aug 13 00:22:36.029890 sshd[5043]: Connection closed by 147.75.109.163 port 53280 Aug 13 00:22:36.030020 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:36.059624 systemd[1]: sshd@19-147.75.71.157:22-147.75.109.163:53280.service: Deactivated successfully. Aug 13 00:22:36.063939 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:22:36.067750 systemd-logind[1805]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:22:36.084412 systemd[1]: Started sshd@20-147.75.71.157:22-147.75.109.163:53284.service - OpenSSH per-connection server daemon (147.75.109.163:53284). Aug 13 00:22:36.087147 systemd-logind[1805]: Removed session 22. Aug 13 00:22:36.141088 sshd[5065]: Accepted publickey for core from 147.75.109.163 port 53284 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:36.145250 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:36.158486 systemd-logind[1805]: New session 23 of user core. Aug 13 00:22:36.187010 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:22:36.321434 sshd[5068]: Connection closed by 147.75.109.163 port 53284 Aug 13 00:22:36.321585 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:36.323263 systemd[1]: sshd@20-147.75.71.157:22-147.75.109.163:53284.service: Deactivated successfully. Aug 13 00:22:36.324221 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:22:36.324985 systemd-logind[1805]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:22:36.325459 systemd-logind[1805]: Removed session 23. Aug 13 00:22:41.356859 systemd[1]: Started sshd@21-147.75.71.157:22-147.75.109.163:37176.service - OpenSSH per-connection server daemon (147.75.109.163:37176). Aug 13 00:22:41.386328 sshd[5096]: Accepted publickey for core from 147.75.109.163 port 37176 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:41.389643 sshd-session[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:41.401740 systemd-logind[1805]: New session 24 of user core. Aug 13 00:22:41.423881 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:22:41.519261 sshd[5098]: Connection closed by 147.75.109.163 port 37176 Aug 13 00:22:41.519459 sshd-session[5096]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:41.521162 systemd[1]: sshd@21-147.75.71.157:22-147.75.109.163:37176.service: Deactivated successfully. Aug 13 00:22:41.522141 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:22:41.522858 systemd-logind[1805]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:22:41.523431 systemd-logind[1805]: Removed session 24. Aug 13 00:22:46.539316 systemd[1]: Started sshd@22-147.75.71.157:22-147.75.109.163:37188.service - OpenSSH per-connection server daemon (147.75.109.163:37188). Aug 13 00:22:46.568174 sshd[5122]: Accepted publickey for core from 147.75.109.163 port 37188 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:46.569139 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:46.572354 systemd-logind[1805]: New session 25 of user core. Aug 13 00:22:46.589749 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 13 00:22:46.677922 sshd[5124]: Connection closed by 147.75.109.163 port 37188 Aug 13 00:22:46.678113 sshd-session[5122]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:46.679720 systemd[1]: sshd@22-147.75.71.157:22-147.75.109.163:37188.service: Deactivated successfully. Aug 13 00:22:46.680640 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:22:46.681363 systemd-logind[1805]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:22:46.682013 systemd-logind[1805]: Removed session 25. Aug 13 00:22:51.695717 systemd[1]: Started sshd@23-147.75.71.157:22-147.75.109.163:49888.service - OpenSSH per-connection server daemon (147.75.109.163:49888). Aug 13 00:22:51.723753 sshd[5147]: Accepted publickey for core from 147.75.109.163 port 49888 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:51.724435 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:51.727319 systemd-logind[1805]: New session 26 of user core. Aug 13 00:22:51.745031 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:22:51.839408 sshd[5149]: Connection closed by 147.75.109.163 port 49888 Aug 13 00:22:51.839756 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:51.861620 systemd[1]: sshd@23-147.75.71.157:22-147.75.109.163:49888.service: Deactivated successfully. Aug 13 00:22:51.865888 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:22:51.869496 systemd-logind[1805]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:22:51.889259 systemd[1]: Started sshd@24-147.75.71.157:22-147.75.109.163:49890.service - OpenSSH per-connection server daemon (147.75.109.163:49890). Aug 13 00:22:51.892025 systemd-logind[1805]: Removed session 26. Aug 13 00:22:51.943810 sshd[5173]: Accepted publickey for core from 147.75.109.163 port 49890 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:51.944677 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:51.948355 systemd-logind[1805]: New session 27 of user core. Aug 13 00:22:51.963736 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:22:53.296614 containerd[1824]: time="2025-08-13T00:22:53.296393572Z" level=info msg="StopContainer for \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\" with timeout 30 (s)" Aug 13 00:22:53.297542 containerd[1824]: time="2025-08-13T00:22:53.297185905Z" level=info msg="Stop container \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\" with signal terminated" Aug 13 00:22:53.319335 systemd[1]: cri-containerd-504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085.scope: Deactivated successfully. Aug 13 00:22:53.328069 systemd[1]: cri-containerd-504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085.scope: Consumed 1.005s CPU time, 32.7M memory peak, 4K written to disk. 
Aug 13 00:22:53.334336 containerd[1824]: time="2025-08-13T00:22:53.334263258Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:22:53.340242 containerd[1824]: time="2025-08-13T00:22:53.340222725Z" level=info msg="StopContainer for \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\" with timeout 2 (s)" Aug 13 00:22:53.340460 containerd[1824]: time="2025-08-13T00:22:53.340332685Z" level=info msg="Stop container \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\" with signal terminated" Aug 13 00:22:53.340688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085-rootfs.mount: Deactivated successfully. Aug 13 00:22:53.343876 systemd-networkd[1732]: lxc_health: Link DOWN Aug 13 00:22:53.343879 systemd-networkd[1732]: lxc_health: Lost carrier Aug 13 00:22:53.354064 containerd[1824]: time="2025-08-13T00:22:53.354033258Z" level=info msg="shim disconnected" id=504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085 namespace=k8s.io Aug 13 00:22:53.354064 containerd[1824]: time="2025-08-13T00:22:53.354064281Z" level=warning msg="cleaning up after shim disconnected" id=504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085 namespace=k8s.io Aug 13 00:22:53.354145 containerd[1824]: time="2025-08-13T00:22:53.354070502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:53.361177 containerd[1824]: time="2025-08-13T00:22:53.361159116Z" level=info msg="StopContainer for \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\" returns successfully" Aug 13 00:22:53.361529 containerd[1824]: time="2025-08-13T00:22:53.361519249Z" level=info msg="StopPodSandbox for \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\"" Aug 13 00:22:53.361557 containerd[1824]: time="2025-08-13T00:22:53.361536961Z" level=info msg="Container to stop \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:22:53.362840 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182-shm.mount: Deactivated successfully. Aug 13 00:22:53.364738 systemd[1]: cri-containerd-b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182.scope: Deactivated successfully. Aug 13 00:22:53.373937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182-rootfs.mount: Deactivated successfully. Aug 13 00:22:53.374406 systemd[1]: cri-containerd-684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce.scope: Deactivated successfully. Aug 13 00:22:53.374569 systemd[1]: cri-containerd-684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce.scope: Consumed 6.630s CPU time, 166.5M memory peak, 136K read from disk, 13.3M written to disk. 
Aug 13 00:22:53.374728 containerd[1824]: time="2025-08-13T00:22:53.374690924Z" level=info msg="shim disconnected" id=b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182 namespace=k8s.io Aug 13 00:22:53.374788 containerd[1824]: time="2025-08-13T00:22:53.374727633Z" level=warning msg="cleaning up after shim disconnected" id=b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182 namespace=k8s.io Aug 13 00:22:53.374788 containerd[1824]: time="2025-08-13T00:22:53.374735575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:53.381157 containerd[1824]: time="2025-08-13T00:22:53.381136545Z" level=info msg="TearDown network for sandbox \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\" successfully" Aug 13 00:22:53.381157 containerd[1824]: time="2025-08-13T00:22:53.381153748Z" level=info msg="StopPodSandbox for \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\" returns successfully" Aug 13 00:22:53.381920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce-rootfs.mount: Deactivated successfully. Aug 13 00:22:53.392868 containerd[1824]: time="2025-08-13T00:22:53.392834976Z" level=info msg="shim disconnected" id=684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce namespace=k8s.io Aug 13 00:22:53.392868 containerd[1824]: time="2025-08-13T00:22:53.392863959Z" level=warning msg="cleaning up after shim disconnected" id=684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce namespace=k8s.io Aug 13 00:22:53.392868 containerd[1824]: time="2025-08-13T00:22:53.392871086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:53.400292 containerd[1824]: time="2025-08-13T00:22:53.400241557Z" level=info msg="StopContainer for \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\" returns successfully" Aug 13 00:22:53.400545 containerd[1824]: time="2025-08-13T00:22:53.400500674Z" level=info msg="StopPodSandbox for \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\"" Aug 13 00:22:53.400545 containerd[1824]: time="2025-08-13T00:22:53.400518238Z" level=info msg="Container to stop \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:22:53.400545 containerd[1824]: time="2025-08-13T00:22:53.400538362Z" level=info msg="Container to stop \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:22:53.400545 containerd[1824]: time="2025-08-13T00:22:53.400543109Z" level=info msg="Container to stop \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:22:53.400545 containerd[1824]: time="2025-08-13T00:22:53.400547654Z" level=info msg="Container to stop \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:22:53.400661 containerd[1824]: time="2025-08-13T00:22:53.400552014Z" level=info msg="Container to stop \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:22:53.403456 systemd[1]: cri-containerd-4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca.scope: Deactivated successfully. 
Aug 13 00:22:53.412133 containerd[1824]: time="2025-08-13T00:22:53.412095686Z" level=info msg="shim disconnected" id=4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca namespace=k8s.io Aug 13 00:22:53.412133 containerd[1824]: time="2025-08-13T00:22:53.412130981Z" level=warning msg="cleaning up after shim disconnected" id=4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca namespace=k8s.io Aug 13 00:22:53.412241 containerd[1824]: time="2025-08-13T00:22:53.412138224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:53.418479 containerd[1824]: time="2025-08-13T00:22:53.418454295Z" level=info msg="TearDown network for sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" successfully" Aug 13 00:22:53.418479 containerd[1824]: time="2025-08-13T00:22:53.418476254Z" level=info msg="StopPodSandbox for \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" returns successfully" Aug 13 00:22:53.512223 kubelet[3139]: I0813 00:22:53.512092 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-config-path\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.512223 kubelet[3139]: I0813 00:22:53.512188 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-hostproc\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.513383 kubelet[3139]: I0813 00:22:53.512244 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-host-proc-sys-kernel\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.513383 kubelet[3139]: I0813 00:22:53.512302 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-etc-cni-netd\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.513383 kubelet[3139]: I0813 00:22:53.512361 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0534553-bcfe-41d4-a3c8-21477efb11c7-cilium-config-path\") pod \"b0534553-bcfe-41d4-a3c8-21477efb11c7\" (UID: \"b0534553-bcfe-41d4-a3c8-21477efb11c7\") " Aug 13 00:22:53.513383 kubelet[3139]: I0813 00:22:53.512361 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.513383 kubelet[3139]: I0813 00:22:53.512422 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v5bl\" (UniqueName: \"kubernetes.io/projected/b0534553-bcfe-41d4-a3c8-21477efb11c7-kube-api-access-5v5bl\") pod \"b0534553-bcfe-41d4-a3c8-21477efb11c7\" (UID: \"b0534553-bcfe-41d4-a3c8-21477efb11c7\") " Aug 13 00:22:53.513927 kubelet[3139]: I0813 00:22:53.512416 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.513927 kubelet[3139]: I0813 00:22:53.512521 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-xtables-lock\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.513927 kubelet[3139]: I0813 00:22:53.512531 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.513927 kubelet[3139]: I0813 00:22:53.512583 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-host-proc-sys-net\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.513927 kubelet[3139]: I0813 00:22:53.512646 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.514406 kubelet[3139]: I0813 00:22:53.512705 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-cgroup\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.514406 kubelet[3139]: I0813 00:22:53.512709 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.514406 kubelet[3139]: I0813 00:22:53.512779 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-run\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.514406 kubelet[3139]: I0813 00:22:53.512801 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.514406 kubelet[3139]: I0813 00:22:53.512835 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cni-path\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.514981 kubelet[3139]: I0813 00:22:53.512860 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.514981 kubelet[3139]: I0813 00:22:53.512882 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-bpf-maps\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.514981 kubelet[3139]: I0813 00:22:53.512906 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.514981 kubelet[3139]: I0813 00:22:53.512939 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-hubble-tls\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.514981 kubelet[3139]: I0813 00:22:53.512985 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.515457 kubelet[3139]: I0813 00:22:53.513115 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd4sz\" (UniqueName: \"kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-kube-api-access-jd4sz\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.515457 kubelet[3139]: I0813 00:22:53.513217 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-lib-modules\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.515457 kubelet[3139]: I0813 00:22:53.513328 3139 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2d6792a-70aa-41d5-9193-3307acec6362-clustermesh-secrets\") pod \"a2d6792a-70aa-41d5-9193-3307acec6362\" (UID: \"a2d6792a-70aa-41d5-9193-3307acec6362\") " Aug 13 00:22:53.515457 kubelet[3139]: I0813 00:22:53.513343 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:22:53.515457 kubelet[3139]: I0813 00:22:53.513547 3139 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cni-path\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.515457 kubelet[3139]: I0813 00:22:53.513615 3139 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-bpf-maps\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.516417 kubelet[3139]: I0813 00:22:53.513669 3139 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-lib-modules\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.516417 kubelet[3139]: I0813 00:22:53.513729 3139 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-hostproc\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.516417 kubelet[3139]: I0813 00:22:53.513781 3139 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-host-proc-sys-kernel\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.516417 kubelet[3139]: I0813 00:22:53.513832 3139 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-etc-cni-netd\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.516417 kubelet[3139]: I0813 00:22:53.513887 3139 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-xtables-lock\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.516417 
kubelet[3139]: I0813 00:22:53.513939 3139 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-host-proc-sys-net\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.516417 kubelet[3139]: I0813 00:22:53.513992 3139 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-cgroup\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.516417 kubelet[3139]: I0813 00:22:53.514050 3139 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-run\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.519260 kubelet[3139]: I0813 00:22:53.519169 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0534553-bcfe-41d4-a3c8-21477efb11c7-kube-api-access-5v5bl" (OuterVolumeSpecName: "kube-api-access-5v5bl") pod "b0534553-bcfe-41d4-a3c8-21477efb11c7" (UID: "b0534553-bcfe-41d4-a3c8-21477efb11c7"). InnerVolumeSpecName "kube-api-access-5v5bl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:22:53.519993 kubelet[3139]: I0813 00:22:53.519888 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-kube-api-access-jd4sz" (OuterVolumeSpecName: "kube-api-access-jd4sz") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "kube-api-access-jd4sz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:22:53.520317 kubelet[3139]: I0813 00:22:53.519963 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d6792a-70aa-41d5-9193-3307acec6362-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:22:53.520317 kubelet[3139]: I0813 00:22:53.520154 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:22:53.520739 kubelet[3139]: I0813 00:22:53.520552 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0534553-bcfe-41d4-a3c8-21477efb11c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0534553-bcfe-41d4-a3c8-21477efb11c7" (UID: "b0534553-bcfe-41d4-a3c8-21477efb11c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:22:53.521271 kubelet[3139]: I0813 00:22:53.521181 3139 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2d6792a-70aa-41d5-9193-3307acec6362" (UID: "a2d6792a-70aa-41d5-9193-3307acec6362"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:22:53.615140 kubelet[3139]: I0813 00:22:53.615027 3139 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d6792a-70aa-41d5-9193-3307acec6362-cilium-config-path\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.615140 kubelet[3139]: I0813 00:22:53.615105 3139 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0534553-bcfe-41d4-a3c8-21477efb11c7-cilium-config-path\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.615140 kubelet[3139]: I0813 00:22:53.615140 3139 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5v5bl\" (UniqueName: \"kubernetes.io/projected/b0534553-bcfe-41d4-a3c8-21477efb11c7-kube-api-access-5v5bl\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.615698 kubelet[3139]: I0813 00:22:53.615179 3139 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2d6792a-70aa-41d5-9193-3307acec6362-clustermesh-secrets\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.615698 kubelet[3139]: I0813 00:22:53.615209 3139 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-hubble-tls\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.615698 kubelet[3139]: I0813 00:22:53.615238 3139 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jd4sz\" (UniqueName: \"kubernetes.io/projected/a2d6792a-70aa-41d5-9193-3307acec6362-kube-api-access-jd4sz\") on node \"ci-4230.2.2-a-e75a6b4c18\" DevicePath \"\"" Aug 13 00:22:53.986851 systemd[1]: Removed slice kubepods-besteffort-podb0534553_bcfe_41d4_a3c8_21477efb11c7.slice - libcontainer container kubepods-besteffort-podb0534553_bcfe_41d4_a3c8_21477efb11c7.slice. Aug 13 00:22:53.986955 systemd[1]: kubepods-besteffort-podb0534553_bcfe_41d4_a3c8_21477efb11c7.slice: Consumed 1.027s CPU time, 32.9M memory peak, 4K written to disk. Aug 13 00:22:53.987777 systemd[1]: Removed slice kubepods-burstable-poda2d6792a_70aa_41d5_9193_3307acec6362.slice - libcontainer container kubepods-burstable-poda2d6792a_70aa_41d5_9193_3307acec6362.slice. Aug 13 00:22:53.987910 systemd[1]: kubepods-burstable-poda2d6792a_70aa_41d5_9193_3307acec6362.slice: Consumed 6.706s CPU time, 167M memory peak, 136K read from disk, 13.3M written to disk. 
Aug 13 00:22:54.118409 kubelet[3139]: I0813 00:22:54.118331 3139 scope.go:117] "RemoveContainer" containerID="504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085" Aug 13 00:22:54.120978 containerd[1824]: time="2025-08-13T00:22:54.120915177Z" level=info msg="RemoveContainer for \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\"" Aug 13 00:22:54.123761 containerd[1824]: time="2025-08-13T00:22:54.123748879Z" level=info msg="RemoveContainer for \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\" returns successfully" Aug 13 00:22:54.123876 kubelet[3139]: I0813 00:22:54.123867 3139 scope.go:117] "RemoveContainer" containerID="504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085" Aug 13 00:22:54.123959 containerd[1824]: time="2025-08-13T00:22:54.123944126Z" level=error msg="ContainerStatus for \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\": not found" Aug 13 00:22:54.124015 kubelet[3139]: E0813 00:22:54.124005 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\": not found" containerID="504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085" Aug 13 00:22:54.124056 kubelet[3139]: I0813 00:22:54.124019 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085"} err="failed to get container status \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\": rpc error: code = NotFound desc = an error occurred when try to find container \"504543a1fb3c1c37748a93c6435f200d22e4683bbcaaf335c956b8c13a3d1085\": not found" Aug 13 00:22:54.124077 kubelet[3139]: I0813 00:22:54.124059 3139 scope.go:117] "RemoveContainer" containerID="684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce" Aug 13 00:22:54.124428 containerd[1824]: time="2025-08-13T00:22:54.124417656Z" level=info msg="RemoveContainer for \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\"" Aug 13 00:22:54.125524 containerd[1824]: time="2025-08-13T00:22:54.125499202Z" level=info msg="RemoveContainer for \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\" returns successfully" Aug 13 00:22:54.125637 kubelet[3139]: I0813 00:22:54.125612 3139 scope.go:117] "RemoveContainer" containerID="3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a" Aug 13 00:22:54.126109 containerd[1824]: time="2025-08-13T00:22:54.126100180Z" level=info msg="RemoveContainer for \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\"" Aug 13 00:22:54.127289 containerd[1824]: time="2025-08-13T00:22:54.127273899Z" level=info msg="RemoveContainer for \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\" returns successfully" Aug 13 00:22:54.127435 kubelet[3139]: I0813 00:22:54.127416 3139 scope.go:117] "RemoveContainer" containerID="41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23" Aug 13 00:22:54.128107 containerd[1824]: time="2025-08-13T00:22:54.128096025Z" level=info msg="RemoveContainer for \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\"" Aug 13 00:22:54.129318 containerd[1824]: time="2025-08-13T00:22:54.129306355Z" level=info 
msg="RemoveContainer for \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\" returns successfully" Aug 13 00:22:54.129380 kubelet[3139]: I0813 00:22:54.129371 3139 scope.go:117] "RemoveContainer" containerID="b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611" Aug 13 00:22:54.129804 containerd[1824]: time="2025-08-13T00:22:54.129794073Z" level=info msg="RemoveContainer for \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\"" Aug 13 00:22:54.130894 containerd[1824]: time="2025-08-13T00:22:54.130883616Z" level=info msg="RemoveContainer for \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\" returns successfully" Aug 13 00:22:54.130952 kubelet[3139]: I0813 00:22:54.130945 3139 scope.go:117] "RemoveContainer" containerID="698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6" Aug 13 00:22:54.131300 containerd[1824]: time="2025-08-13T00:22:54.131290630Z" level=info msg="RemoveContainer for \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\"" Aug 13 00:22:54.132325 containerd[1824]: time="2025-08-13T00:22:54.132315243Z" level=info msg="RemoveContainer for \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\" returns successfully" Aug 13 00:22:54.132374 kubelet[3139]: I0813 00:22:54.132365 3139 scope.go:117] "RemoveContainer" containerID="684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce" Aug 13 00:22:54.132459 containerd[1824]: time="2025-08-13T00:22:54.132444284Z" level=error msg="ContainerStatus for \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\": not found" Aug 13 00:22:54.132518 kubelet[3139]: E0813 00:22:54.132509 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\": not found" containerID="684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce" Aug 13 00:22:54.132542 kubelet[3139]: I0813 00:22:54.132523 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce"} err="failed to get container status \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"684f2301399aa9c05fabc0c7a32f66d9e7fdf5c41f1e606e7498770f8b4a10ce\": not found" Aug 13 00:22:54.132542 kubelet[3139]: I0813 00:22:54.132533 3139 scope.go:117] "RemoveContainer" containerID="3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a" Aug 13 00:22:54.132631 containerd[1824]: time="2025-08-13T00:22:54.132613806Z" level=error msg="ContainerStatus for \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\": not found" Aug 13 00:22:54.132680 kubelet[3139]: E0813 00:22:54.132672 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\": not found" containerID="3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a" 
Aug 13 00:22:54.132700 kubelet[3139]: I0813 00:22:54.132684 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a"} err="failed to get container status \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a2595c52fb956d160a1b4b300f7117071ae6a277389f597121b32a54b2a838a\": not found" Aug 13 00:22:54.132700 kubelet[3139]: I0813 00:22:54.132693 3139 scope.go:117] "RemoveContainer" containerID="41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23" Aug 13 00:22:54.132770 containerd[1824]: time="2025-08-13T00:22:54.132756410Z" level=error msg="ContainerStatus for \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\": not found" Aug 13 00:22:54.132808 kubelet[3139]: E0813 00:22:54.132801 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\": not found" containerID="41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23" Aug 13 00:22:54.132830 kubelet[3139]: I0813 00:22:54.132810 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23"} err="failed to get container status \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\": rpc error: code = NotFound desc = an error occurred when try to find container \"41ab53ccb7a704adf4318c4ff61a6c317480df0678817ea1c7ed14da2be3be23\": not found" Aug 13 00:22:54.132830 kubelet[3139]: I0813 00:22:54.132817 3139 scope.go:117] "RemoveContainer" containerID="b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611" Aug 13 00:22:54.132886 containerd[1824]: time="2025-08-13T00:22:54.132875228Z" level=error msg="ContainerStatus for \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\": not found" Aug 13 00:22:54.132935 kubelet[3139]: E0813 00:22:54.132927 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\": not found" containerID="b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611" Aug 13 00:22:54.132957 kubelet[3139]: I0813 00:22:54.132938 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611"} err="failed to get container status \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2efdbd8445c0ad7edfb8d5191163721f5de3a4aba73b353b21d5ab468b3b611\": not found" Aug 13 00:22:54.132957 kubelet[3139]: I0813 00:22:54.132946 3139 scope.go:117] "RemoveContainer" containerID="698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6" Aug 13 00:22:54.133017 containerd[1824]: 
time="2025-08-13T00:22:54.133004994Z" level=error msg="ContainerStatus for \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\": not found" Aug 13 00:22:54.133054 kubelet[3139]: E0813 00:22:54.133047 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\": not found" containerID="698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6" Aug 13 00:22:54.133078 kubelet[3139]: I0813 00:22:54.133057 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6"} err="failed to get container status \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"698ebb6d3896615494f999da857b274297d80afc4568d845b336e21fe271a2f6\": not found" Aug 13 00:22:54.320769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca-rootfs.mount: Deactivated successfully. Aug 13 00:22:54.320828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca-shm.mount: Deactivated successfully. Aug 13 00:22:54.320872 systemd[1]: var-lib-kubelet-pods-a2d6792a\x2d70aa\x2d41d5\x2d9193\x2d3307acec6362-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djd4sz.mount: Deactivated successfully. Aug 13 00:22:54.320912 systemd[1]: var-lib-kubelet-pods-b0534553\x2dbcfe\x2d41d4\x2da3c8\x2d21477efb11c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5v5bl.mount: Deactivated successfully. Aug 13 00:22:54.320950 systemd[1]: var-lib-kubelet-pods-a2d6792a\x2d70aa\x2d41d5\x2d9193\x2d3307acec6362-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:22:54.320987 systemd[1]: var-lib-kubelet-pods-a2d6792a\x2d70aa\x2d41d5\x2d9193\x2d3307acec6362-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:22:55.245718 sshd[5176]: Connection closed by 147.75.109.163 port 49890 Aug 13 00:22:55.245908 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:55.271155 systemd[1]: sshd@24-147.75.71.157:22-147.75.109.163:49890.service: Deactivated successfully. Aug 13 00:22:55.272040 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:22:55.272419 systemd-logind[1805]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:22:55.273414 systemd[1]: Started sshd@25-147.75.71.157:22-147.75.109.163:49892.service - OpenSSH per-connection server daemon (147.75.109.163:49892). Aug 13 00:22:55.273882 systemd-logind[1805]: Removed session 27. Aug 13 00:22:55.301703 sshd[5349]: Accepted publickey for core from 147.75.109.163 port 49892 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:55.302376 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:55.305299 systemd-logind[1805]: New session 28 of user core. Aug 13 00:22:55.315950 systemd[1]: Started session-28.scope - Session 28 of User core. 
Aug 13 00:22:55.753655 sshd[5352]: Connection closed by 147.75.109.163 port 49892 Aug 13 00:22:55.753898 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:55.759148 kubelet[3139]: I0813 00:22:55.759124 3139 memory_manager.go:355] "RemoveStaleState removing state" podUID="b0534553-bcfe-41d4-a3c8-21477efb11c7" containerName="cilium-operator" Aug 13 00:22:55.759148 kubelet[3139]: I0813 00:22:55.759144 3139 memory_manager.go:355] "RemoveStaleState removing state" podUID="a2d6792a-70aa-41d5-9193-3307acec6362" containerName="cilium-agent" Aug 13 00:22:55.767028 systemd[1]: sshd@25-147.75.71.157:22-147.75.109.163:49892.service: Deactivated successfully. Aug 13 00:22:55.768436 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:22:55.769537 systemd-logind[1805]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:22:55.774444 systemd[1]: Started sshd@26-147.75.71.157:22-147.75.109.163:49904.service - OpenSSH per-connection server daemon (147.75.109.163:49904). Aug 13 00:22:55.775725 systemd-logind[1805]: Removed session 28. Aug 13 00:22:55.778388 systemd[1]: Created slice kubepods-burstable-pod4c6fc8df_f199_45b5_9f40_e86709245fd3.slice - libcontainer container kubepods-burstable-pod4c6fc8df_f199_45b5_9f40_e86709245fd3.slice. Aug 13 00:22:55.802937 sshd[5374]: Accepted publickey for core from 147.75.109.163 port 49904 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:55.806299 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:55.819492 systemd-logind[1805]: New session 29 of user core. Aug 13 00:22:55.828851 kubelet[3139]: I0813 00:22:55.828727 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c6fc8df-f199-45b5-9f40-e86709245fd3-cilium-config-path\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.828851 kubelet[3139]: I0813 00:22:55.828830 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-host-proc-sys-net\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829196 kubelet[3139]: I0813 00:22:55.828887 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-bpf-maps\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829196 kubelet[3139]: I0813 00:22:55.828935 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-etc-cni-netd\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829196 kubelet[3139]: I0813 00:22:55.828987 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-xtables-lock\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829196 kubelet[3139]: 
I0813 00:22:55.829036 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-cni-path\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829196 kubelet[3139]: I0813 00:22:55.829088 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb44x\" (UniqueName: \"kubernetes.io/projected/4c6fc8df-f199-45b5-9f40-e86709245fd3-kube-api-access-jb44x\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829196 kubelet[3139]: I0813 00:22:55.829146 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4c6fc8df-f199-45b5-9f40-e86709245fd3-cilium-ipsec-secrets\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829930 kubelet[3139]: I0813 00:22:55.829197 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-host-proc-sys-kernel\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829930 kubelet[3139]: I0813 00:22:55.829248 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-cilium-cgroup\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829930 kubelet[3139]: I0813 00:22:55.829347 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c6fc8df-f199-45b5-9f40-e86709245fd3-clustermesh-secrets\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829930 kubelet[3139]: I0813 00:22:55.829439 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-lib-modules\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829930 kubelet[3139]: I0813 00:22:55.829547 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-cilium-run\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.829930 kubelet[3139]: I0813 00:22:55.829736 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c6fc8df-f199-45b5-9f40-e86709245fd3-hubble-tls\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.830482 kubelet[3139]: I0813 00:22:55.829861 3139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/4c6fc8df-f199-45b5-9f40-e86709245fd3-hostproc\") pod \"cilium-cd84f\" (UID: \"4c6fc8df-f199-45b5-9f40-e86709245fd3\") " pod="kube-system/cilium-cd84f" Aug 13 00:22:55.840970 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 00:22:55.901867 sshd[5377]: Connection closed by 147.75.109.163 port 49904 Aug 13 00:22:55.902682 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:55.926373 systemd[1]: sshd@26-147.75.71.157:22-147.75.109.163:49904.service: Deactivated successfully. Aug 13 00:22:55.930736 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 00:22:55.933093 systemd-logind[1805]: Session 29 logged out. Waiting for processes to exit. Aug 13 00:22:55.951245 systemd[1]: Started sshd@27-147.75.71.157:22-147.75.109.163:49906.service - OpenSSH per-connection server daemon (147.75.109.163:49906). Aug 13 00:22:55.967603 systemd-logind[1805]: Removed session 29. Aug 13 00:22:55.975437 kubelet[3139]: I0813 00:22:55.975393 3139 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d6792a-70aa-41d5-9193-3307acec6362" path="/var/lib/kubelet/pods/a2d6792a-70aa-41d5-9193-3307acec6362/volumes" Aug 13 00:22:55.975909 kubelet[3139]: I0813 00:22:55.975873 3139 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0534553-bcfe-41d4-a3c8-21477efb11c7" path="/var/lib/kubelet/pods/b0534553-bcfe-41d4-a3c8-21477efb11c7/volumes" Aug 13 00:22:55.989847 sshd[5386]: Accepted publickey for core from 147.75.109.163 port 49906 ssh2: RSA SHA256:6yFpbHwraljbTSoFRlXfE1ktNqnFdMRnmtlLPP+3+yY Aug 13 00:22:55.990771 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:55.994342 systemd-logind[1805]: New session 30 of user core. Aug 13 00:22:56.012704 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 00:22:56.080840 containerd[1824]: time="2025-08-13T00:22:56.080741568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cd84f,Uid:4c6fc8df-f199-45b5-9f40-e86709245fd3,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:56.091702 containerd[1824]: time="2025-08-13T00:22:56.091633549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:56.091702 containerd[1824]: time="2025-08-13T00:22:56.091657599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:56.091702 containerd[1824]: time="2025-08-13T00:22:56.091664299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:56.091813 containerd[1824]: time="2025-08-13T00:22:56.091702714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:56.118032 systemd[1]: Started cri-containerd-f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc.scope - libcontainer container f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc. 
Aug 13 00:22:56.169676 containerd[1824]: time="2025-08-13T00:22:56.169575019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cd84f,Uid:4c6fc8df-f199-45b5-9f40-e86709245fd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\"" Aug 13 00:22:56.174835 containerd[1824]: time="2025-08-13T00:22:56.174714549Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:22:56.182314 containerd[1824]: time="2025-08-13T00:22:56.182272517Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"012df072c5776951111f4b2fc60d0107e572564c37991c48a0f239ae0e79c416\"" Aug 13 00:22:56.182469 containerd[1824]: time="2025-08-13T00:22:56.182452496Z" level=info msg="StartContainer for \"012df072c5776951111f4b2fc60d0107e572564c37991c48a0f239ae0e79c416\"" Aug 13 00:22:56.202789 systemd[1]: Started cri-containerd-012df072c5776951111f4b2fc60d0107e572564c37991c48a0f239ae0e79c416.scope - libcontainer container 012df072c5776951111f4b2fc60d0107e572564c37991c48a0f239ae0e79c416. Aug 13 00:22:56.213972 containerd[1824]: time="2025-08-13T00:22:56.213922234Z" level=info msg="StartContainer for \"012df072c5776951111f4b2fc60d0107e572564c37991c48a0f239ae0e79c416\" returns successfully" Aug 13 00:22:56.218029 systemd[1]: cri-containerd-012df072c5776951111f4b2fc60d0107e572564c37991c48a0f239ae0e79c416.scope: Deactivated successfully. Aug 13 00:22:56.249915 containerd[1824]: time="2025-08-13T00:22:56.249860759Z" level=info msg="shim disconnected" id=012df072c5776951111f4b2fc60d0107e572564c37991c48a0f239ae0e79c416 namespace=k8s.io Aug 13 00:22:56.249915 containerd[1824]: time="2025-08-13T00:22:56.249892336Z" level=warning msg="cleaning up after shim disconnected" id=012df072c5776951111f4b2fc60d0107e572564c37991c48a0f239ae0e79c416 namespace=k8s.io Aug 13 00:22:56.249915 containerd[1824]: time="2025-08-13T00:22:56.249897234Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:57.142060 containerd[1824]: time="2025-08-13T00:22:57.141968170Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:22:57.149973 containerd[1824]: time="2025-08-13T00:22:57.149920111Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8\"" Aug 13 00:22:57.150351 containerd[1824]: time="2025-08-13T00:22:57.150319308Z" level=info msg="StartContainer for \"aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8\"" Aug 13 00:22:57.171749 systemd[1]: Started cri-containerd-aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8.scope - libcontainer container aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8. Aug 13 00:22:57.185846 systemd[1]: cri-containerd-aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8.scope: Deactivated successfully. 
Aug 13 00:22:57.193775 containerd[1824]: time="2025-08-13T00:22:57.193750186Z" level=info msg="StartContainer for \"aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8\" returns successfully" Aug 13 00:22:57.205234 containerd[1824]: time="2025-08-13T00:22:57.205177864Z" level=info msg="shim disconnected" id=aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8 namespace=k8s.io Aug 13 00:22:57.205234 containerd[1824]: time="2025-08-13T00:22:57.205207485Z" level=warning msg="cleaning up after shim disconnected" id=aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8 namespace=k8s.io Aug 13 00:22:57.205234 containerd[1824]: time="2025-08-13T00:22:57.205212759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:57.211094 containerd[1824]: time="2025-08-13T00:22:57.211045212Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:22:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:22:57.955930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa654e388a7f973bcf74bb48aad686cdc7ee0a05d165e93c6163a65efbaa93c8-rootfs.mount: Deactivated successfully. Aug 13 00:22:57.980919 containerd[1824]: time="2025-08-13T00:22:57.980872519Z" level=info msg="StopPodSandbox for \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\"" Aug 13 00:22:57.980969 containerd[1824]: time="2025-08-13T00:22:57.980918702Z" level=info msg="TearDown network for sandbox \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\" successfully" Aug 13 00:22:57.980969 containerd[1824]: time="2025-08-13T00:22:57.980929956Z" level=info msg="StopPodSandbox for \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\" returns successfully" Aug 13 00:22:57.981138 containerd[1824]: time="2025-08-13T00:22:57.981096324Z" level=info msg="RemovePodSandbox for \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\"" Aug 13 00:22:57.981138 containerd[1824]: time="2025-08-13T00:22:57.981110677Z" level=info msg="Forcibly stopping sandbox \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\"" Aug 13 00:22:57.981199 containerd[1824]: time="2025-08-13T00:22:57.981143718Z" level=info msg="TearDown network for sandbox \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\" successfully" Aug 13 00:22:57.982345 containerd[1824]: time="2025-08-13T00:22:57.982300923Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:22:57.982381 containerd[1824]: time="2025-08-13T00:22:57.982346219Z" level=info msg="RemovePodSandbox \"b8c3dff096a42a26b2d7143b8a72558afeb324ffb4e7ce38f80c9acffe05f182\" returns successfully" Aug 13 00:22:57.982553 containerd[1824]: time="2025-08-13T00:22:57.982479029Z" level=info msg="StopPodSandbox for \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\"" Aug 13 00:22:57.982603 containerd[1824]: time="2025-08-13T00:22:57.982573593Z" level=info msg="TearDown network for sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" successfully" Aug 13 00:22:57.982603 containerd[1824]: time="2025-08-13T00:22:57.982579702Z" level=info msg="StopPodSandbox for \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" returns successfully" Aug 13 00:22:57.982782 containerd[1824]: time="2025-08-13T00:22:57.982744752Z" level=info msg="RemovePodSandbox for \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\"" Aug 13 00:22:57.982782 containerd[1824]: time="2025-08-13T00:22:57.982776554Z" level=info msg="Forcibly stopping sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\"" Aug 13 00:22:57.982845 containerd[1824]: time="2025-08-13T00:22:57.982824767Z" level=info msg="TearDown network for sandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" successfully" Aug 13 00:22:57.983889 containerd[1824]: time="2025-08-13T00:22:57.983854238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:57.983889 containerd[1824]: time="2025-08-13T00:22:57.983885769Z" level=info msg="RemovePodSandbox \"4c7386dde259f722fae028b7dc2f8b39541f29c89d5fd37900ebe1d958e61bca\" returns successfully" Aug 13 00:22:58.123382 kubelet[3139]: E0813 00:22:58.123298 3139 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:22:58.148120 containerd[1824]: time="2025-08-13T00:22:58.147990175Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:22:58.160824 containerd[1824]: time="2025-08-13T00:22:58.160803129Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c\"" Aug 13 00:22:58.161169 containerd[1824]: time="2025-08-13T00:22:58.161108688Z" level=info msg="StartContainer for \"b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c\"" Aug 13 00:22:58.184730 systemd[1]: Started cri-containerd-b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c.scope - libcontainer container b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c. Aug 13 00:22:58.199785 containerd[1824]: time="2025-08-13T00:22:58.199761429Z" level=info msg="StartContainer for \"b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c\" returns successfully" Aug 13 00:22:58.200941 systemd[1]: cri-containerd-b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c.scope: Deactivated successfully. 
Aug 13 00:22:58.224035 containerd[1824]: time="2025-08-13T00:22:58.223931225Z" level=info msg="shim disconnected" id=b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c namespace=k8s.io Aug 13 00:22:58.224035 containerd[1824]: time="2025-08-13T00:22:58.223968071Z" level=warning msg="cleaning up after shim disconnected" id=b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c namespace=k8s.io Aug 13 00:22:58.224035 containerd[1824]: time="2025-08-13T00:22:58.223979455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:58.954270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b83270f9afa104373d2d08337e429cabcc18dc0a3a157a72019b2ebdf242c64c-rootfs.mount: Deactivated successfully. Aug 13 00:22:59.155059 containerd[1824]: time="2025-08-13T00:22:59.154981038Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:22:59.163700 containerd[1824]: time="2025-08-13T00:22:59.163657575Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0\"" Aug 13 00:22:59.164052 containerd[1824]: time="2025-08-13T00:22:59.164038727Z" level=info msg="StartContainer for \"54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0\"" Aug 13 00:22:59.186792 systemd[1]: Started cri-containerd-54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0.scope - libcontainer container 54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0. Aug 13 00:22:59.197937 systemd[1]: cri-containerd-54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0.scope: Deactivated successfully. Aug 13 00:22:59.198350 containerd[1824]: time="2025-08-13T00:22:59.198333880Z" level=info msg="StartContainer for \"54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0\" returns successfully" Aug 13 00:22:59.209622 containerd[1824]: time="2025-08-13T00:22:59.209529067Z" level=info msg="shim disconnected" id=54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0 namespace=k8s.io Aug 13 00:22:59.209622 containerd[1824]: time="2025-08-13T00:22:59.209558362Z" level=warning msg="cleaning up after shim disconnected" id=54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0 namespace=k8s.io Aug 13 00:22:59.209622 containerd[1824]: time="2025-08-13T00:22:59.209563595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:59.953715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54fd07d0decb538b766c3968c061553720c238323a63fed8c499764520052cf0-rootfs.mount: Deactivated successfully. 
Aug 13 00:23:00.162959 containerd[1824]: time="2025-08-13T00:23:00.162858696Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:23:00.173060 containerd[1824]: time="2025-08-13T00:23:00.173037066Z" level=info msg="CreateContainer within sandbox \"f9b376e01682d2d4fd977d81e2ca53f7c4a0f75ac275f21eee14d5da4dbd0dcc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a3401ac4e1e242b09e54ce7b4b14e4f1902cd454a8551afed0e2e2b04c9efd5d\"" Aug 13 00:23:00.173402 containerd[1824]: time="2025-08-13T00:23:00.173393109Z" level=info msg="StartContainer for \"a3401ac4e1e242b09e54ce7b4b14e4f1902cd454a8551afed0e2e2b04c9efd5d\"" Aug 13 00:23:00.207655 systemd[1]: Started cri-containerd-a3401ac4e1e242b09e54ce7b4b14e4f1902cd454a8551afed0e2e2b04c9efd5d.scope - libcontainer container a3401ac4e1e242b09e54ce7b4b14e4f1902cd454a8551afed0e2e2b04c9efd5d. Aug 13 00:23:00.222080 containerd[1824]: time="2025-08-13T00:23:00.222054293Z" level=info msg="StartContainer for \"a3401ac4e1e242b09e54ce7b4b14e4f1902cd454a8551afed0e2e2b04c9efd5d\" returns successfully" Aug 13 00:23:00.393541 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 00:23:03.877958 systemd-networkd[1732]: lxc_health: Link UP Aug 13 00:23:03.878195 systemd-networkd[1732]: lxc_health: Gained carrier Aug 13 00:23:04.091470 kubelet[3139]: I0813 00:23:04.091429 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cd84f" podStartSLOduration=9.09141661 podStartE2EDuration="9.09141661s" podCreationTimestamp="2025-08-13 00:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:23:01.183081973 +0000 UTC m=+423.258782237" watchObservedRunningTime="2025-08-13 00:23:04.09141661 +0000 UTC m=+426.167116871" Aug 13 00:23:05.286657 systemd-networkd[1732]: lxc_health: Gained IPv6LL Aug 13 00:23:10.659876 sshd[5391]: Connection closed by 147.75.109.163 port 49906 Aug 13 00:23:10.660077 sshd-session[5386]: pam_unix(sshd:session): session closed for user core Aug 13 00:23:10.661926 systemd[1]: sshd@27-147.75.71.157:22-147.75.109.163:49906.service: Deactivated successfully. Aug 13 00:23:10.663002 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 00:23:10.663850 systemd-logind[1805]: Session 30 logged out. Waiting for processes to exit. Aug 13 00:23:10.664554 systemd-logind[1805]: Removed session 30.