Jul 6 23:31:18.487373 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:53:45 -00 2025 Jul 6 23:31:18.487388 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762 Jul 6 23:31:18.487394 kernel: BIOS-provided physical RAM map: Jul 6 23:31:18.487400 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jul 6 23:31:18.487404 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jul 6 23:31:18.487408 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jul 6 23:31:18.487413 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jul 6 23:31:18.487417 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jul 6 23:31:18.487422 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b23fff] usable Jul 6 23:31:18.487426 kernel: BIOS-e820: [mem 0x0000000081b24000-0x0000000081b24fff] ACPI NVS Jul 6 23:31:18.487431 kernel: BIOS-e820: [mem 0x0000000081b25000-0x0000000081b25fff] reserved Jul 6 23:31:18.487435 kernel: BIOS-e820: [mem 0x0000000081b26000-0x000000008afccfff] usable Jul 6 23:31:18.487440 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jul 6 23:31:18.487445 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jul 6 23:31:18.487450 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jul 6 23:31:18.487455 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jul 6 23:31:18.487461 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jul 6 23:31:18.487465 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jul 6 23:31:18.487470 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 6 23:31:18.487475 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jul 6 23:31:18.487480 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jul 6 23:31:18.487484 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 6 23:31:18.487489 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jul 6 23:31:18.487494 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jul 6 23:31:18.487498 kernel: NX (Execute Disable) protection: active Jul 6 23:31:18.487503 kernel: APIC: Static calls initialized Jul 6 23:31:18.487508 kernel: SMBIOS 3.2.1 present. 
Jul 6 23:31:18.487513 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024 Jul 6 23:31:18.487519 kernel: tsc: Detected 3400.000 MHz processor Jul 6 23:31:18.487526 kernel: tsc: Detected 3399.906 MHz TSC Jul 6 23:31:18.487531 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 6 23:31:18.487537 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 6 23:31:18.487541 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jul 6 23:31:18.487547 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jul 6 23:31:18.487552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 6 23:31:18.487556 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jul 6 23:31:18.487561 kernel: Using GB pages for direct mapping Jul 6 23:31:18.487566 kernel: ACPI: Early table checksum verification disabled Jul 6 23:31:18.487572 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jul 6 23:31:18.487577 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jul 6 23:31:18.487584 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jul 6 23:31:18.487590 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jul 6 23:31:18.487595 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jul 6 23:31:18.487600 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jul 6 23:31:18.487606 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jul 6 23:31:18.487611 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jul 6 23:31:18.487617 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jul 6 23:31:18.487622 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jul 6 23:31:18.487627 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jul 6 23:31:18.487632 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jul 6 23:31:18.487637 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jul 6 23:31:18.487643 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 6 23:31:18.487649 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jul 6 23:31:18.487654 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jul 6 23:31:18.487659 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 6 23:31:18.487664 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 6 23:31:18.487669 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jul 6 23:31:18.487674 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jul 6 23:31:18.487680 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 6 23:31:18.487685 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jul 6 23:31:18.487691 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jul 6 23:31:18.487696 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jul 6 23:31:18.487701 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jul 6 23:31:18.487706 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jul 6 23:31:18.487712 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jul 6 23:31:18.487717 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jul 6 23:31:18.487722 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jul 6 23:31:18.487727 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jul 6 23:31:18.487733 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jul 6 23:31:18.487738 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jul 6 23:31:18.487744 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jul 6 23:31:18.487749 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jul 6 23:31:18.487754 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jul 6 23:31:18.487759 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jul 6 23:31:18.487764 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jul 6 23:31:18.487769 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jul 6 23:31:18.487775 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jul 6 23:31:18.487781 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jul 6 23:31:18.487786 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jul 6 23:31:18.487791 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jul 6 23:31:18.487796 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jul 6 23:31:18.487801 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jul 6 23:31:18.487806 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jul 6 23:31:18.487811 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jul 6 23:31:18.487816 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jul 6 23:31:18.487822 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jul 6 23:31:18.487828 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jul 6 23:31:18.487833 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jul 6 23:31:18.487838 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jul 6 23:31:18.487843 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jul 6 23:31:18.487848 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jul 6 23:31:18.487853 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jul 6 23:31:18.487858 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jul 6 23:31:18.487863 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jul 6 23:31:18.487869 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jul 6 23:31:18.487874 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jul 6 23:31:18.487880 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jul 6 23:31:18.487885 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jul 6 23:31:18.487890 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jul 6 23:31:18.487895 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jul 6 23:31:18.487900 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Jul 6 23:31:18.487905 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jul 6 23:31:18.487911 kernel: No NUMA configuration found Jul 6 23:31:18.487916 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jul 6 23:31:18.487921 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Jul 6 23:31:18.487927 kernel: Zone ranges: Jul 6 23:31:18.487933 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 6 23:31:18.487938 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 6 23:31:18.487943 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Jul 6 23:31:18.487948 kernel: Movable zone start for each node Jul 6 23:31:18.487953 kernel: Early memory node ranges Jul 6 23:31:18.487959 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jul 6 23:31:18.487964 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jul 6 23:31:18.487969 kernel: node 0: [mem 0x0000000040400000-0x0000000081b23fff] Jul 6 23:31:18.487975 kernel: node 0: [mem 0x0000000081b26000-0x000000008afccfff] Jul 6 23:31:18.487980 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jul 6 23:31:18.487985 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jul 6 23:31:18.487991 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jul 6 23:31:18.487999 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jul 6 23:31:18.488006 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:31:18.488011 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jul 6 23:31:18.488017 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 6 23:31:18.488023 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jul 6 23:31:18.488029 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jul 6 23:31:18.488034 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jul 6 23:31:18.488040 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jul 6 23:31:18.488045 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jul 6 23:31:18.488051 kernel: ACPI: PM-Timer IO Port: 0x1808 Jul 6 23:31:18.488056 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 6 23:31:18.488062 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 6 23:31:18.488067 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 6 23:31:18.488074 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 6 23:31:18.488079 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 6 23:31:18.488085 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 6 23:31:18.488091 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 6 23:31:18.488096 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 6 23:31:18.488102 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 6 23:31:18.488107 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 6 23:31:18.488112 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 6 23:31:18.488118 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 6 23:31:18.488123 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 6 23:31:18.488130 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 6 23:31:18.488135 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 6 23:31:18.488141 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 6 23:31:18.488146 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jul 6 23:31:18.488152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 6 23:31:18.488157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 6 23:31:18.488163 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 6 23:31:18.488168 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 6 23:31:18.488174 kernel: TSC deadline timer available Jul 6 23:31:18.488181 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jul 6 23:31:18.488186 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Jul 6 23:31:18.488192 
kernel: Booting paravirtualized kernel on bare hardware Jul 6 23:31:18.488197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 23:31:18.488203 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 6 23:31:18.488209 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 6 23:31:18.488214 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 6 23:31:18.488220 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 6 23:31:18.488226 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762 Jul 6 23:31:18.488233 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:31:18.488238 kernel: random: crng init done Jul 6 23:31:18.488244 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jul 6 23:31:18.488249 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jul 6 23:31:18.488255 kernel: Fallback order for Node 0: 0 Jul 6 23:31:18.488260 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jul 6 23:31:18.488266 kernel: Policy zone: Normal Jul 6 23:31:18.488271 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:31:18.488278 kernel: software IO TLB: area num 16. Jul 6 23:31:18.488284 kernel: Memory: 32718260K/33452980K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 734460K reserved, 0K cma-reserved) Jul 6 23:31:18.488289 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 6 23:31:18.488295 kernel: ftrace: allocating 37940 entries in 149 pages Jul 6 23:31:18.488300 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:31:18.488306 kernel: Dynamic Preempt: voluntary Jul 6 23:31:18.488311 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:31:18.488319 kernel: rcu: RCU event tracing is enabled. Jul 6 23:31:18.488325 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 6 23:31:18.488331 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:31:18.488337 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:31:18.488342 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:31:18.488348 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:31:18.488353 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 6 23:31:18.488359 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jul 6 23:31:18.488364 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:31:18.488370 kernel: Console: colour VGA+ 80x25 Jul 6 23:31:18.488375 kernel: printk: console [tty0] enabled Jul 6 23:31:18.488382 kernel: printk: console [ttyS1] enabled Jul 6 23:31:18.488387 kernel: ACPI: Core revision 20230628 Jul 6 23:31:18.488393 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Jul 6 23:31:18.488398 kernel: APIC: Switch to symmetric I/O mode setup Jul 6 23:31:18.488404 kernel: DMAR: Host address width 39 Jul 6 23:31:18.488409 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jul 6 23:31:18.488415 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jul 6 23:31:18.488420 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff Jul 6 23:31:18.488426 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jul 6 23:31:18.488432 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jul 6 23:31:18.488438 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jul 6 23:31:18.488444 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jul 6 23:31:18.488449 kernel: x2apic enabled Jul 6 23:31:18.488455 kernel: APIC: Switched APIC routing to: cluster x2apic Jul 6 23:31:18.488460 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jul 6 23:31:18.488466 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jul 6 23:31:18.488471 kernel: CPU0: Thermal monitoring enabled (TM1) Jul 6 23:31:18.488477 kernel: process: using mwait in idle threads Jul 6 23:31:18.488483 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 6 23:31:18.488489 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 6 23:31:18.488494 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:31:18.488500 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 6 23:31:18.488505 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 6 23:31:18.488511 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 6 23:31:18.488516 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 6 23:31:18.488521 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 6 23:31:18.488529 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 6 23:31:18.488534 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 6 23:31:18.488540 kernel: TAA: Mitigation: TSX disabled Jul 6 23:31:18.488546 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jul 6 23:31:18.488551 kernel: SRBDS: Mitigation: Microcode Jul 6 23:31:18.488557 kernel: GDS: Mitigation: Microcode Jul 6 23:31:18.488562 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 6 23:31:18.488568 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:31:18.488573 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:31:18.488579 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:31:18.488584 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 6 23:31:18.488590 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 6 23:31:18.488595 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:31:18.488600 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 6 23:31:18.488607 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 6 23:31:18.488612 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Jul 6 23:31:18.488618 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:31:18.488623 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:31:18.488629 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:31:18.488634 kernel: landlock: Up and running. Jul 6 23:31:18.488639 kernel: SELinux: Initializing. Jul 6 23:31:18.488645 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:31:18.488650 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:31:18.488656 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 6 23:31:18.488661 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 6 23:31:18.488667 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 6 23:31:18.488674 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 6 23:31:18.488679 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jul 6 23:31:18.488685 kernel: ... version: 4 Jul 6 23:31:18.488691 kernel: ... bit width: 48 Jul 6 23:31:18.488696 kernel: ... generic registers: 4 Jul 6 23:31:18.488702 kernel: ... value mask: 0000ffffffffffff Jul 6 23:31:18.488707 kernel: ... max period: 00007fffffffffff Jul 6 23:31:18.488713 kernel: ... fixed-purpose events: 3 Jul 6 23:31:18.488718 kernel: ... event mask: 000000070000000f Jul 6 23:31:18.488725 kernel: signal: max sigframe size: 2032 Jul 6 23:31:18.488731 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jul 6 23:31:18.488736 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:31:18.488742 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:31:18.488747 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jul 6 23:31:18.488753 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:31:18.488758 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:31:18.488764 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jul 6 23:31:18.488770 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 6 23:31:18.488776 kernel: smp: Brought up 1 node, 16 CPUs Jul 6 23:31:18.488782 kernel: smpboot: Max logical packages: 1 Jul 6 23:31:18.488787 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jul 6 23:31:18.488793 kernel: devtmpfs: initialized Jul 6 23:31:18.488798 kernel: x86/mm: Memory block size: 128MB Jul 6 23:31:18.488804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b24000-0x81b24fff] (4096 bytes) Jul 6 23:31:18.488810 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jul 6 23:31:18.488815 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:31:18.488822 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 6 23:31:18.488827 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:31:18.488833 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:31:18.488838 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:31:18.488844 kernel: audit: type=2000 audit(1751844672.041:1): state=initialized audit_enabled=0 res=1 Jul 6 23:31:18.488849 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:31:18.488855 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 6 23:31:18.488860 kernel: cpuidle: using governor menu Jul 6 23:31:18.488866 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:31:18.488872 kernel: dca service started, version 1.12.1 Jul 6 23:31:18.488878 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jul 6 23:31:18.488883 kernel: PCI: Using configuration type 1 for base access Jul 6 23:31:18.488889 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jul 6 23:31:18.488894 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 6 23:31:18.488900 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:31:18.488905 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:31:18.488911 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:31:18.488916 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:31:18.488923 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:31:18.488928 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:31:18.488934 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:31:18.488939 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jul 6 23:31:18.488945 kernel: ACPI: Dynamic OEM Table Load: Jul 6 23:31:18.488950 kernel: ACPI: SSDT 0xFFFF96E5C1E53C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jul 6 23:31:18.488956 kernel: ACPI: Dynamic OEM Table Load: Jul 6 23:31:18.488961 kernel: ACPI: SSDT 0xFFFF96E5C1E71000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jul 6 23:31:18.488967 kernel: ACPI: Dynamic OEM Table Load: Jul 6 23:31:18.488973 kernel: ACPI: SSDT 0xFFFF96E5C147B200 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jul 6 23:31:18.488979 kernel: ACPI: Dynamic OEM Table Load: Jul 6 23:31:18.488984 kernel: ACPI: SSDT 0xFFFF96E5C1145800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jul 6 23:31:18.488990 kernel: ACPI: Dynamic OEM Table Load: Jul 6 23:31:18.488995 kernel: ACPI: SSDT 0xFFFF96E5C114D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jul 6 23:31:18.489001 kernel: ACPI: Dynamic OEM Table Load: Jul 6 23:31:18.489006 kernel: ACPI: SSDT 0xFFFF96E5C1822000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jul 6 23:31:18.489012 kernel: ACPI: _OSC evaluated successfully for all CPUs Jul 6 23:31:18.489017 kernel: ACPI: Interpreter enabled Jul 6 23:31:18.489023 kernel: ACPI: PM: (supports S0 S5) Jul 6 23:31:18.489029 kernel: ACPI: Using IOAPIC for interrupt routing Jul 6 23:31:18.489035 kernel: HEST: Enabling Firmware First mode for corrected errors. Jul 6 23:31:18.489040 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jul 6 23:31:18.489046 kernel: HEST: Table parsing has been initialized. Jul 6 23:31:18.489051 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jul 6 23:31:18.489057 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 6 23:31:18.489063 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 6 23:31:18.489068 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jul 6 23:31:18.489074 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jul 6 23:31:18.489080 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jul 6 23:31:18.489086 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jul 6 23:31:18.489091 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jul 6 23:31:18.489097 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jul 6 23:31:18.489102 kernel: ACPI: \_TZ_.FN00: New power resource Jul 6 23:31:18.489108 kernel: ACPI: \_TZ_.FN01: New power resource Jul 6 23:31:18.489114 kernel: ACPI: \_TZ_.FN02: New power resource Jul 6 23:31:18.489119 kernel: ACPI: \_TZ_.FN03: New power resource Jul 6 23:31:18.489125 kernel: ACPI: \_TZ_.FN04: New power resource Jul 6 23:31:18.489131 kernel: ACPI: \PIN_: New power resource Jul 6 23:31:18.489137 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jul 6 23:31:18.489216 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:31:18.489270 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jul 6 23:31:18.489321 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jul 6 23:31:18.489329 kernel: PCI host bridge to bus 0000:00 Jul 6 23:31:18.489382 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 6 23:31:18.489431 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 6 23:31:18.489475 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 6 23:31:18.489520 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jul 6 23:31:18.489569 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jul 6 23:31:18.489613 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jul 6 23:31:18.489675 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jul 6 23:31:18.489740 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jul 6 23:31:18.489795 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jul 6 23:31:18.489850 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jul 6 23:31:18.489902 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jul 6 23:31:18.489957 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jul 6 23:31:18.490008 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jul 6 23:31:18.490066 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jul 6 23:31:18.490117 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jul 6 23:31:18.490168 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jul 6 23:31:18.490223 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jul 6 23:31:18.490275 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jul 6 23:31:18.490326 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jul 6 23:31:18.490380 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jul 6 23:31:18.490434 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jul 6 23:31:18.490492 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jul 6 23:31:18.490547 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 
64bit] Jul 6 23:31:18.490602 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jul 6 23:31:18.490653 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jul 6 23:31:18.490703 kernel: pci 0000:00:16.0: PME# supported from D3hot Jul 6 23:31:18.490761 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jul 6 23:31:18.490812 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jul 6 23:31:18.490871 kernel: pci 0000:00:16.1: PME# supported from D3hot Jul 6 23:31:18.490926 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jul 6 23:31:18.490977 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jul 6 23:31:18.491028 kernel: pci 0000:00:16.4: PME# supported from D3hot Jul 6 23:31:18.491083 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jul 6 23:31:18.491137 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jul 6 23:31:18.491186 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jul 6 23:31:18.491238 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jul 6 23:31:18.491287 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jul 6 23:31:18.491338 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jul 6 23:31:18.491389 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jul 6 23:31:18.491439 kernel: pci 0000:00:17.0: PME# supported from D3hot Jul 6 23:31:18.491497 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jul 6 23:31:18.491555 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jul 6 23:31:18.491615 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jul 6 23:31:18.491666 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jul 6 23:31:18.491726 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jul 6 23:31:18.491779 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jul 6 23:31:18.491834 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jul 6 23:31:18.491887 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jul 6 23:31:18.491942 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jul 6 23:31:18.491997 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jul 6 23:31:18.492052 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jul 6 23:31:18.492104 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jul 6 23:31:18.492158 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jul 6 23:31:18.492214 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jul 6 23:31:18.492265 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jul 6 23:31:18.492318 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jul 6 23:31:18.492376 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jul 6 23:31:18.492427 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jul 6 23:31:18.492485 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jul 6 23:31:18.492541 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jul 6 23:31:18.492594 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jul 6 23:31:18.492647 kernel: pci 0000:01:00.0: PME# supported from D3cold Jul 6 23:31:18.492702 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jul 6 23:31:18.492756 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jul 6 23:31:18.492813 kernel: pci 0000:01:00.1: [15b3:1015] type 
00 class 0x020000 Jul 6 23:31:18.492866 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jul 6 23:31:18.492918 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jul 6 23:31:18.492969 kernel: pci 0000:01:00.1: PME# supported from D3cold Jul 6 23:31:18.493022 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jul 6 23:31:18.493075 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jul 6 23:31:18.493128 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:31:18.493180 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 6 23:31:18.493231 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 6 23:31:18.493283 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 6 23:31:18.493341 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jul 6 23:31:18.493395 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jul 6 23:31:18.493449 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jul 6 23:31:18.493501 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jul 6 23:31:18.493556 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jul 6 23:31:18.493608 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 6 23:31:18.493660 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 6 23:31:18.493710 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 6 23:31:18.493762 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 6 23:31:18.493819 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 6 23:31:18.493874 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jul 6 23:31:18.493929 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jul 6 23:31:18.493980 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jul 6 23:31:18.494032 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jul 6 23:31:18.494083 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 6 23:31:18.494136 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 6 23:31:18.494188 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 6 23:31:18.494240 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 6 23:31:18.494293 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 6 23:31:18.494349 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jul 6 23:31:18.494402 kernel: pci 0000:06:00.0: enabling Extended Tags Jul 6 23:31:18.494455 kernel: pci 0000:06:00.0: supports D1 D2 Jul 6 23:31:18.494507 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:31:18.494570 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 6 23:31:18.494623 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 6 23:31:18.494676 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 6 23:31:18.494736 kernel: pci_bus 0000:07: extended config space not accessible Jul 6 23:31:18.494798 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jul 6 23:31:18.494854 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jul 6 23:31:18.494909 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jul 6 23:31:18.494964 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jul 6 23:31:18.495018 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 6 23:31:18.495075 kernel: pci 0000:07:00.0: supports D1 D2 Jul 6 
23:31:18.495129 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 6 23:31:18.495181 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 6 23:31:18.495233 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 6 23:31:18.495285 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 6 23:31:18.495294 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 6 23:31:18.495300 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 6 23:31:18.495306 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 6 23:31:18.495314 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 6 23:31:18.495320 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 6 23:31:18.495325 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 6 23:31:18.495331 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 6 23:31:18.495337 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 6 23:31:18.495343 kernel: iommu: Default domain type: Translated Jul 6 23:31:18.495349 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 6 23:31:18.495355 kernel: PCI: Using ACPI for IRQ routing Jul 6 23:31:18.495361 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 6 23:31:18.495368 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jul 6 23:31:18.495373 kernel: e820: reserve RAM buffer [mem 0x81b24000-0x83ffffff] Jul 6 23:31:18.495379 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jul 6 23:31:18.495385 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jul 6 23:31:18.495390 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 6 23:31:18.495396 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 6 23:31:18.495449 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jul 6 23:31:18.495502 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jul 6 23:31:18.495561 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 6 23:31:18.495572 kernel: vgaarb: loaded Jul 6 23:31:18.495578 kernel: clocksource: Switched to clocksource tsc-early Jul 6 23:31:18.495584 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:31:18.495590 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:31:18.495596 kernel: pnp: PnP ACPI init Jul 6 23:31:18.495647 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 6 23:31:18.495701 kernel: pnp 00:02: [dma 0 disabled] Jul 6 23:31:18.495755 kernel: pnp 00:03: [dma 0 disabled] Jul 6 23:31:18.495809 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 6 23:31:18.495857 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 6 23:31:18.495906 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Jul 6 23:31:18.495954 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Jul 6 23:31:18.495999 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Jul 6 23:31:18.496046 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Jul 6 23:31:18.496094 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 6 23:31:18.496141 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 6 23:31:18.496187 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 6 23:31:18.496233 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 6 23:31:18.496283 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Jul 6 
23:31:18.496330 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 6 23:31:18.496380 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 6 23:31:18.496426 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 6 23:31:18.496473 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 6 23:31:18.496519 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 6 23:31:18.496568 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Jul 6 23:31:18.496619 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Jul 6 23:31:18.496628 kernel: pnp: PnP ACPI: found 9 devices Jul 6 23:31:18.496634 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 6 23:31:18.496642 kernel: NET: Registered PF_INET protocol family Jul 6 23:31:18.496648 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:31:18.496654 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 6 23:31:18.496660 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:31:18.496666 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:31:18.496672 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 6 23:31:18.496677 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 6 23:31:18.496683 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 6 23:31:18.496690 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 6 23:31:18.496696 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:31:18.496702 kernel: NET: Registered PF_XDP protocol family Jul 6 23:31:18.496752 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jul 6 23:31:18.496804 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jul 6 23:31:18.496855 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jul 6 23:31:18.496908 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 6 23:31:18.496961 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 6 23:31:18.497014 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 6 23:31:18.497069 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 6 23:31:18.497121 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 6 23:31:18.497173 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 6 23:31:18.497223 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 6 23:31:18.497275 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 6 23:31:18.497330 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 6 23:31:18.497380 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 6 23:31:18.497433 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 6 23:31:18.497483 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 6 23:31:18.497537 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 6 23:31:18.497588 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 6 23:31:18.497640 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 6 23:31:18.497692 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 6 23:31:18.497747 kernel: pci 0000:06:00.0: bridge window [io 
0x3000-0x3fff] Jul 6 23:31:18.497800 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 6 23:31:18.497851 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 6 23:31:18.497902 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 6 23:31:18.497953 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 6 23:31:18.498000 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 6 23:31:18.498045 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 6 23:31:18.498090 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 6 23:31:18.498137 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 6 23:31:18.498183 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 6 23:31:18.498228 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 6 23:31:18.498278 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jul 6 23:31:18.498326 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 6 23:31:18.498376 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jul 6 23:31:18.498424 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jul 6 23:31:18.498477 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 6 23:31:18.498557 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jul 6 23:31:18.498611 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jul 6 23:31:18.498657 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jul 6 23:31:18.498707 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 6 23:31:18.498755 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 6 23:31:18.498766 kernel: PCI: CLS 64 bytes, default 64 Jul 6 23:31:18.498773 kernel: DMAR: No ATSR found Jul 6 23:31:18.498779 kernel: DMAR: No SATC found Jul 6 23:31:18.498785 kernel: DMAR: dmar0: Using Queued invalidation Jul 6 23:31:18.498837 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 6 23:31:18.498890 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 6 23:31:18.498941 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 6 23:31:18.498993 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 6 23:31:18.499044 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 6 23:31:18.499097 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 6 23:31:18.499149 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 6 23:31:18.499198 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 6 23:31:18.499250 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 6 23:31:18.499300 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 6 23:31:18.499352 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 6 23:31:18.499402 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 6 23:31:18.499454 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 6 23:31:18.499509 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 6 23:31:18.499563 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 6 23:31:18.499615 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 6 23:31:18.499666 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jul 6 23:31:18.499717 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 6 23:31:18.499768 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 6 23:31:18.499819 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 6 23:31:18.499870 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 6 23:31:18.499925 kernel: pci 0000:01:00.0: Adding 
to iommu group 1 Jul 6 23:31:18.499978 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jul 6 23:31:18.500030 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jul 6 23:31:18.500083 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 6 23:31:18.500134 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jul 6 23:31:18.500189 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 6 23:31:18.500198 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 6 23:31:18.500204 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 6 23:31:18.500210 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jul 6 23:31:18.500218 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 6 23:31:18.500224 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 6 23:31:18.500230 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 6 23:31:18.500236 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 6 23:31:18.500292 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 6 23:31:18.500301 kernel: Initialise system trusted keyrings Jul 6 23:31:18.500307 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 6 23:31:18.500313 kernel: Key type asymmetric registered Jul 6 23:31:18.500320 kernel: Asymmetric key parser 'x509' registered Jul 6 23:31:18.500326 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 6 23:31:18.500332 kernel: io scheduler mq-deadline registered Jul 6 23:31:18.500338 kernel: io scheduler kyber registered Jul 6 23:31:18.500344 kernel: io scheduler bfq registered Jul 6 23:31:18.500394 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 6 23:31:18.500446 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jul 6 23:31:18.500497 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jul 6 23:31:18.500554 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jul 6 23:31:18.500606 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jul 6 23:31:18.500658 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jul 6 23:31:18.500715 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 6 23:31:18.500724 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 6 23:31:18.500730 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 6 23:31:18.500736 kernel: pstore: Using crash dump compression: deflate Jul 6 23:31:18.500742 kernel: pstore: Registered erst as persistent store backend Jul 6 23:31:18.500750 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:31:18.500756 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:31:18.500762 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:31:18.500768 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 6 23:31:18.500774 kernel: hpet_acpi_add: no address or irqs in _CRS Jul 6 23:31:18.500826 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 6 23:31:18.500835 kernel: i8042: PNP: No PS/2 controller found. 
Jul 6 23:31:18.500882 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 6 23:31:18.500931 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 6 23:31:18.500979 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-07-06T23:31:17 UTC (1751844677) Jul 6 23:31:18.501026 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 6 23:31:18.501034 kernel: intel_pstate: Intel P-state driver initializing Jul 6 23:31:18.501040 kernel: intel_pstate: Disabling energy efficiency optimization Jul 6 23:31:18.501046 kernel: intel_pstate: HWP enabled Jul 6 23:31:18.501052 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:31:18.501058 kernel: Segment Routing with IPv6 Jul 6 23:31:18.501064 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:31:18.501071 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:31:18.501077 kernel: Key type dns_resolver registered Jul 6 23:31:18.501083 kernel: microcode: Current revision: 0x00000102 Jul 6 23:31:18.501089 kernel: microcode: Microcode Update Driver: v2.2. Jul 6 23:31:18.501095 kernel: IPI shorthand broadcast: enabled Jul 6 23:31:18.501101 kernel: sched_clock: Marking stable (2487000630, 1442144003)->(4497617755, -568473122) Jul 6 23:31:18.501107 kernel: registered taskstats version 1 Jul 6 23:31:18.501113 kernel: Loading compiled-in X.509 certificates Jul 6 23:31:18.501119 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: f74b958d282931d4f0d8d911dd18abd0ec707734' Jul 6 23:31:18.501126 kernel: Key type .fscrypt registered Jul 6 23:31:18.501131 kernel: Key type fscrypt-provisioning registered Jul 6 23:31:18.501138 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:31:18.501143 kernel: ima: No architecture policies found Jul 6 23:31:18.501149 kernel: clk: Disabling unused clocks Jul 6 23:31:18.501155 kernel: Freeing unused kernel image (initmem) memory: 43492K Jul 6 23:31:18.501161 kernel: Write protecting the kernel read-only data: 38912k Jul 6 23:31:18.501167 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jul 6 23:31:18.501174 kernel: Run /init as init process Jul 6 23:31:18.501180 kernel: with arguments: Jul 6 23:31:18.501186 kernel: /init Jul 6 23:31:18.501191 kernel: with environment: Jul 6 23:31:18.501197 kernel: HOME=/ Jul 6 23:31:18.501203 kernel: TERM=linux Jul 6 23:31:18.501209 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:31:18.501215 systemd[1]: Successfully made /usr/ read-only. Jul 6 23:31:18.501223 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:31:18.501231 systemd[1]: Detected architecture x86-64. Jul 6 23:31:18.501237 systemd[1]: Running in initrd. Jul 6 23:31:18.501243 systemd[1]: No hostname configured, using default hostname. Jul 6 23:31:18.501249 systemd[1]: Hostname set to . Jul 6 23:31:18.501255 systemd[1]: Initializing machine ID from random generator. Jul 6 23:31:18.501261 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:31:18.501267 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:31:18.501274 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 6 23:31:18.501281 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:31:18.501287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:31:18.501294 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:31:18.501300 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:31:18.501307 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:31:18.501313 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:31:18.501321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:31:18.501327 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:31:18.501333 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:31:18.501339 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:31:18.501345 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:31:18.501351 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:31:18.501357 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:31:18.501364 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:31:18.501371 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:31:18.501377 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 6 23:31:18.501384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:31:18.501390 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:31:18.501396 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jul 6 23:31:18.501402 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jul 6 23:31:18.501408 kernel: clocksource: Switched to clocksource tsc Jul 6 23:31:18.501414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:31:18.501420 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:31:18.501427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:31:18.501434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:31:18.501440 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:31:18.501446 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:31:18.501452 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:31:18.501469 systemd-journald[270]: Collecting audit messages is disabled. Jul 6 23:31:18.501486 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:31:18.501493 systemd-journald[270]: Journal started Jul 6 23:31:18.501511 systemd-journald[270]: Runtime Journal (/run/log/journal/beb66f1419f64da8a84afae107e590a1) is 8M, max 639.9M, 631.9M free. Jul 6 23:31:18.513205 systemd-modules-load[273]: Inserted module 'overlay' Jul 6 23:31:18.522700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:31:18.531530 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:31:18.531897 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jul 6 23:31:18.568739 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:31:18.568752 kernel: Bridge firewalling registered Jul 6 23:31:18.539035 systemd-modules-load[273]: Inserted module 'br_netfilter' Jul 6 23:31:18.568743 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:31:18.595183 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:31:18.611146 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:31:18.632143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:31:18.671784 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:31:18.683203 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:31:18.695380 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:31:18.696042 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:31:18.700512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:31:18.701590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:31:18.702225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:31:18.703368 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:31:18.704238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:31:18.706676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:31:18.719829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:31:18.724350 systemd-resolved[304]: Positive Trust Anchors: Jul 6 23:31:18.724357 systemd-resolved[304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:31:18.724383 systemd-resolved[304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:31:18.726083 systemd-resolved[304]: Defaulting to hostname 'linux'. Jul 6 23:31:18.741867 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:31:18.763966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:31:18.787705 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
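The "Negative trust anchors" list that systemd-resolved prints above disables DNSSEC validation for private and reverse-lookup zones. The small sketch below assumes coverage is plain label-wise suffix matching and shows how a query name is checked against those anchors; the anchor list here is just a subset of the one in the log.

```python
# Sketch: decide whether a DNS name falls under one of the negative trust
# anchors listed by systemd-resolved above. Assumes simple label-wise suffix
# matching; illustrative, not resolved's implementation.

NEGATIVE_ANCHORS = [
    "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
    "home.arpa", "local", "internal", "lan", "test",
]

def labels(name: str):
    return name.rstrip(".").lower().split(".")

def under_negative_anchor(name: str) -> bool:
    n = labels(name)
    for anchor in NEGATIVE_ANCHORS:
        a = labels(anchor)
        if len(n) >= len(a) and n[-len(a):] == a:
            return True
    return False

print(under_negative_anchor("4.3.2.10.in-addr.arpa"))   # True  (RFC1918 reverse zone)
print(under_negative_anchor("printer.lan"))             # True
print(under_negative_anchor("metadata.packet.net"))     # False (DNSSEC applies normally)
```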
Jul 6 23:31:18.894750 dracut-cmdline[311]: dracut-dracut-053 Jul 6 23:31:18.901833 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762 Jul 6 23:31:19.075558 kernel: SCSI subsystem initialized Jul 6 23:31:19.088566 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:31:19.101592 kernel: iscsi: registered transport (tcp) Jul 6 23:31:19.124275 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:31:19.124292 kernel: QLogic iSCSI HBA Driver Jul 6 23:31:19.146755 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:31:19.169823 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:31:19.206348 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:31:19.206368 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:31:19.215153 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:31:19.250596 kernel: raid6: avx2x4 gen() 47201 MB/s Jul 6 23:31:19.271560 kernel: raid6: avx2x2 gen() 53907 MB/s Jul 6 23:31:19.297668 kernel: raid6: avx2x1 gen() 45196 MB/s Jul 6 23:31:19.297690 kernel: raid6: using algorithm avx2x2 gen() 53907 MB/s Jul 6 23:31:19.324754 kernel: raid6: .... xor() 32478 MB/s, rmw enabled Jul 6 23:31:19.324773 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:31:19.345590 kernel: xor: automatically using best checksumming function avx Jul 6 23:31:19.444573 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:31:19.449812 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:31:19.458676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:31:19.495281 systemd-udevd[496]: Using default interface naming scheme 'v255'. Jul 6 23:31:19.498181 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:31:19.526858 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:31:19.563175 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jul 6 23:31:19.581038 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:31:19.608864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:31:19.706513 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:31:19.747508 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 6 23:31:19.747526 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 6 23:31:19.747535 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:31:19.747559 kernel: ACPI: bus type USB registered Jul 6 23:31:19.739483 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
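The "raid6: avx2x4/avx2x2/avx2x1 gen() ... MB/s" lines followed by "using algorithm avx2x2" show the kernel timing several parity implementations and keeping the fastest. The toy sketch below reproduces that selection pattern with interchangeable Python XOR routines; the functions and throughput numbers are purely illustrative, not the kernel's.

```python
# Toy sketch of the "benchmark the candidates, keep the fastest" pattern that
# produces the "raid6: using algorithm avx2x2" line above.
import time

def xor_bytes_loop(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xor_bytes_int(a: bytes, b: bytes) -> bytes:
    n = len(a)
    return (int.from_bytes(a, "little") ^ int.from_bytes(b, "little")).to_bytes(n, "little")

def pick_fastest(candidates, a, b, rounds=20):
    best_name, best_rate = None, 0.0
    for name, fn in candidates.items():
        start = time.perf_counter()
        for _ in range(rounds):
            fn(a, b)
        elapsed = time.perf_counter() - start
        rate = rounds * len(a) / elapsed / 1e6   # rough MB/s
        print(f"{name}: {rate:.0f} MB/s")
        if rate > best_rate:
            best_name, best_rate = name, rate
    return best_name

buf_a = bytes(1 << 20)                  # 1 MiB of zeros
buf_b = bytes(range(256)) * 4096        # 1 MiB pattern
print("using", pick_fastest({"loop": xor_bytes_loop, "bigint": xor_bytes_int}, buf_a, buf_b))
```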
Jul 6 23:31:19.764636 kernel: usbcore: registered new interface driver usbfs Jul 6 23:31:19.764649 kernel: usbcore: registered new interface driver hub Jul 6 23:31:19.764657 kernel: usbcore: registered new device driver usb Jul 6 23:31:19.770590 kernel: PTP clock support registered Jul 6 23:31:19.781532 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:31:19.781566 kernel: libata version 3.00 loaded. Jul 6 23:31:19.782527 kernel: AES CTR mode by8 optimization enabled Jul 6 23:31:19.799561 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 6 23:31:19.799598 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 6 23:31:19.800018 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:31:19.800092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:31:19.805534 kernel: ahci 0000:00:17.0: version 3.0 Jul 6 23:31:19.820878 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 6 23:31:19.820982 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jul 6 23:31:19.821054 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 6 23:31:19.821122 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 6 23:31:19.831533 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 6 23:31:19.831623 kernel: igb 0000:03:00.0: added PHC on eth0 Jul 6 23:31:19.831700 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 6 23:31:19.831772 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:7a Jul 6 23:31:19.831840 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jul 6 23:31:19.831907 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 6 23:31:19.861579 kernel: scsi host0: ahci Jul 6 23:31:19.861606 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 6 23:31:19.863540 kernel: igb 0000:04:00.0: added PHC on eth1 Jul 6 23:31:19.863632 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 6 23:31:19.863705 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d8:7b Jul 6 23:31:19.863777 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jul 6 23:31:19.863844 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 6 23:31:19.873203 kernel: scsi host1: ahci Jul 6 23:31:19.873230 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 6 23:31:19.873323 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jul 6 23:31:19.873398 kernel: hub 1-0:1.0: USB hub found Jul 6 23:31:19.885051 kernel: scsi host2: ahci Jul 6 23:31:19.885078 kernel: hub 1-0:1.0: 16 ports detected Jul 6 23:31:19.896188 kernel: scsi host3: ahci Jul 6 23:31:19.910213 kernel: hub 2-0:1.0: USB hub found Jul 6 23:31:19.910304 kernel: scsi host4: ahci Jul 6 23:31:19.910319 kernel: hub 2-0:1.0: 10 ports detected Jul 6 23:31:19.922932 kernel: scsi host5: ahci Jul 6 23:31:19.956674 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jul 6 23:31:19.956783 kernel: scsi host6: ahci Jul 6 23:31:19.995581 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Jul 6 23:31:20.002188 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 6 23:31:20.077654 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Jul 6 23:31:20.077666 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Jul 6 23:31:20.077674 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Jul 6 23:31:20.077682 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Jul 6 23:31:20.077692 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Jul 6 23:31:20.077700 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Jul 6 23:31:20.077707 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jul 6 23:31:20.065649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:31:20.065753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:31:20.114549 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:31:20.155643 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Jul 6 23:31:20.155816 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 6 23:31:20.155889 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 6 23:31:20.154726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:31:20.166079 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:31:20.180048 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:31:20.197215 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:31:20.197236 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:31:20.230655 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:31:20.255728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:31:20.301598 kernel: hub 1-14:1.0: USB hub found Jul 6 23:31:20.301776 kernel: hub 1-14:1.0: 4 ports detected Jul 6 23:31:20.271675 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:31:20.303598 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
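The renames above (igb eth0/eth1 becoming eno1/eno2, and later the mlx5 ports becoming enp1s0f0np0/enp1s0f1np1) come from udev's predictable interface naming: onboard ports get a firmware index name, PCI NICs get a path-based name. The sketch below assumes the common en + p<bus> + s<slot> + f<function> + np<port> composition and rebuilds such a name from a PCI address; the real logic is systemd-udevd's net_id builtin, not this code.

```python
# Sketch: compose a predictable "path-based" NIC name like enp1s0f1np1 from a
# PCI address and port number, matching the renames visible in this log
# (mlx5_core 0000:01:00.1 -> enp1s0f1np1). Note the real scheme omits the
# f<n> part for single-function devices; this illustration always emits it.

def pci_path_name(pci_address, port=None):
    # pci_address looks like "0000:01:00.1" -> domain:bus:slot.function
    _domain, bus, slot_func = pci_address.split(":")
    slot, func = slot_func.split(".")
    name = f"enp{int(bus, 16)}s{int(slot, 16)}f{int(func, 16)}"
    if port is not None:
        name += f"np{port}"
    return name

print(pci_path_name("0000:01:00.0", port=0))  # enp1s0f0np0
print(pci_path_name("0000:01:00.1", port=1))  # enp1s0f1np1
```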
Jul 6 23:31:20.425837 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 6 23:31:20.425853 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 6 23:31:20.425862 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 6 23:31:20.425869 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 6 23:31:20.425876 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 6 23:31:20.425884 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 6 23:31:20.425891 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 6 23:31:20.425898 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 6 23:31:20.425906 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 6 23:31:20.425913 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 6 23:31:20.425921 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 6 23:31:20.426012 kernel: ata2.00: Features: NCQ-prio Jul 6 23:31:20.426021 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jul 6 23:31:20.426091 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 6 23:31:20.426099 kernel: ata1.00: Features: NCQ-prio Jul 6 23:31:20.435588 kernel: ata2.00: configured for UDMA/133 Jul 6 23:31:20.435605 kernel: ata1.00: configured for UDMA/133 Jul 6 23:31:20.441588 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 6 23:31:20.450574 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 6 23:31:20.474042 kernel: ata1.00: Enabling discard_zeroes_data Jul 6 23:31:20.474060 kernel: ata2.00: Enabling discard_zeroes_data Jul 6 23:31:20.474068 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 6 23:31:20.478751 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 6 23:31:20.493759 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 6 23:31:20.493863 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 6 23:31:20.493928 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 6 23:31:20.493994 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 6 23:31:20.494057 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jul 6 23:31:20.498969 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jul 6 23:31:20.503795 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jul 6 23:31:20.504001 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:31:20.671646 kernel: ata1.00: Enabling discard_zeroes_data Jul 6 23:31:20.671664 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 6 23:31:20.671758 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:31:20.671768 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 6 23:31:20.671836 kernel: GPT:9289727 != 937703087 Jul 6 23:31:20.671844 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:31:20.671852 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jul 6 23:31:20.671915 kernel: GPT:9289727 != 937703087 Jul 6 23:31:20.671923 kernel: ata2.00: Enabling discard_zeroes_data Jul 6 23:31:20.671931 kernel: GPT: Use GNU Parted to correct GPT errors. 
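The "GPT:Primary header thinks Alt. header is not at the end of the disk" / "GPT:9289727 != 937703087" warnings above mean the primary GPT header records its backup at LBA 9289727 while the 937703088-sector disk actually ends at LBA 937703087, which is typical when an image built for a smaller disk has not yet been grown. A small sketch of that consistency check follows; repair tools fix it by relocating the backup header to the last LBA.

```python
# Sketch of the consistency check behind the kernel's "GPT: 9289727 !=
# 937703087" warning above: the primary GPT header stores the LBA of its
# backup header, which is expected to be the disk's last LBA.

def check_alternate_header(total_sectors: int, alternate_lba_in_header: int):
    last_lba = total_sectors - 1          # LBAs are 0-based
    if alternate_lba_in_header != last_lba:
        print(f"GPT: {alternate_lba_in_header} != {last_lba}")
        print("Alternate GPT header not at the end of the disk; "
              "move the backup header to the last LBA to repair.")
    else:
        print("Backup GPT header is where it should be.")

# Values from this log: a 937703088-sector disk whose image was built for a
# much smaller device, so the backup header still sits at LBA 9289727.
check_alternate_header(total_sectors=937703088, alternate_lba_in_header=9289727)
```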
Jul 6 23:31:20.671943 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jul 6 23:31:20.672016 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:31:20.672024 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jul 6 23:31:20.672148 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 6 23:31:20.672219 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 6 23:31:20.672293 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Jul 6 23:31:20.672363 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (546) Jul 6 23:31:20.672374 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 6 23:31:20.672441 kernel: BTRFS: device fsid 25bdfe43-d649-4808-8940-e1722efc7a2e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (552) Jul 6 23:31:20.638318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jul 6 23:31:20.719640 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 6 23:31:20.719651 kernel: usbcore: registered new interface driver usbhid Jul 6 23:31:20.719659 kernel: usbhid: USB HID core driver Jul 6 23:31:20.719669 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 6 23:31:20.704528 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Jul 6 23:31:20.743283 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jul 6 23:31:20.765109 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jul 6 23:31:20.837619 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 6 23:31:20.837713 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 6 23:31:20.837722 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 6 23:31:20.811573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jul 6 23:31:20.832628 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:31:20.881596 kernel: ata1.00: Enabling discard_zeroes_data Jul 6 23:31:20.881607 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:31:20.881664 disk-uuid[705]: Primary Header is updated. Jul 6 23:31:20.881664 disk-uuid[705]: Secondary Entries is updated. Jul 6 23:31:20.881664 disk-uuid[705]: Secondary Header is updated. Jul 6 23:31:20.941535 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 6 23:31:20.953855 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jul 6 23:31:21.192603 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 6 23:31:21.204736 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jul 6 23:31:21.221740 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jul 6 23:31:21.882867 kernel: ata1.00: Enabling discard_zeroes_data Jul 6 23:31:21.891317 disk-uuid[706]: The operation has completed successfully. Jul 6 23:31:21.899644 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 6 23:31:21.932398 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 6 23:31:21.932449 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:31:21.986758 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:31:22.012571 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 6 23:31:22.012627 sh[731]: Success Jul 6 23:31:22.044428 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:31:22.067599 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:31:22.076800 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:31:22.116963 kernel: BTRFS info (device dm-0): first mount of filesystem 25bdfe43-d649-4808-8940-e1722efc7a2e Jul 6 23:31:22.116981 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:31:22.127731 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:31:22.134742 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:31:22.140662 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:31:22.152562 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 6 23:31:22.153950 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:31:22.163849 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:31:22.174923 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:31:22.197800 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:31:22.229695 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:31:22.229713 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:31:22.230530 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:31:22.249180 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:31:22.249402 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:31:22.263530 kernel: BTRFS info (device sda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:31:22.267835 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:31:22.277761 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:31:22.318802 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:31:22.329407 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:31:22.382032 ignition[911]: Ignition 2.20.0 Jul 6 23:31:22.382037 ignition[911]: Stage: fetch-offline Jul 6 23:31:22.384563 unknown[911]: fetched base config from "system" Jul 6 23:31:22.382059 ignition[911]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:31:22.384570 unknown[911]: fetched user config from "system" Jul 6 23:31:22.382064 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:31:22.385624 systemd-networkd[912]: lo: Link UP Jul 6 23:31:22.382119 ignition[911]: parsed url from cmdline: "" Jul 6 23:31:22.385627 systemd-networkd[912]: lo: Gained carrier Jul 6 23:31:22.382121 ignition[911]: no config URL provided Jul 6 23:31:22.385678 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
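verity-setup above assembles /dev/mapper/usr so that every read of the /usr partition is checked against a sha256 hash tree whose root is pinned by the verity.usrhash= value on the kernel command line ("verity: sha256 using implementation sha256-avx2"). The sketch below is a heavily simplified version of that per-block verification idea: a real verity target uses a multi-level Merkle tree and a salt, while this flattens it to one list of per-block digests.

```python
# Highly simplified sketch of the dm-verity idea behind /dev/mapper/usr:
# each data block must hash to a value pinned ahead of time, ultimately
# rooted in the verity.usrhash= kernel parameter. Illustration only.
import hashlib

BLOCK_SIZE = 4096

def build_block_digests(image: bytes):
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(image), BLOCK_SIZE)]

def verified_read(image: bytes, digests, block_index: int) -> bytes:
    block = image[block_index * BLOCK_SIZE:(block_index + 1) * BLOCK_SIZE]
    if hashlib.sha256(block).digest() != digests[block_index]:
        raise IOError(f"verity: corruption detected in block {block_index}")
    return block

image = bytes(BLOCK_SIZE * 4)                 # pretend 4-block /usr image
digests = build_block_digests(image)          # computed at image build time
verified_read(image, digests, 2)              # clean block: read succeeds
tampered = b"\x01" + image[1:]                # flip a byte in block 0
try:
    verified_read(tampered, digests, 0)
except IOError as err:
    print(err)                                # verity: corruption detected in block 0
```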
Jul 6 23:31:22.382124 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:31:22.388206 systemd-networkd[912]: Enumeration completed Jul 6 23:31:22.382145 ignition[911]: parsing config with SHA512: 98d463c327e40af628b558f24e290d816c3da72084f27505c30e28a706ffe752843031ae8d2bd3f5e0037ff4dd91303cbfe9d1bb2bcdef383f2a124c06aedfea Jul 6 23:31:22.388270 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:31:22.384895 ignition[911]: fetch-offline: fetch-offline passed Jul 6 23:31:22.389019 systemd-networkd[912]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:31:22.384899 ignition[911]: POST message to Packet Timeline Jul 6 23:31:22.415118 systemd[1]: Reached target network.target - Network. Jul 6 23:31:22.384904 ignition[911]: POST Status error: resource requires networking Jul 6 23:31:22.418882 systemd-networkd[912]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:31:22.385071 ignition[911]: Ignition finished successfully Jul 6 23:31:22.429900 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:31:22.502822 ignition[924]: Ignition 2.20.0 Jul 6 23:31:22.445015 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:31:22.502827 ignition[924]: Stage: kargs Jul 6 23:31:22.631704 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 6 23:31:22.449913 systemd-networkd[912]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:31:22.502923 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:31:22.627578 systemd-networkd[912]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 6 23:31:22.502929 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:31:22.503415 ignition[924]: kargs: kargs passed Jul 6 23:31:22.503418 ignition[924]: POST message to Packet Timeline Jul 6 23:31:22.503430 ignition[924]: GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:31:22.503895 ignition[924]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39214->[::1]:53: read: connection refused Jul 6 23:31:22.704140 ignition[924]: GET https://metadata.packet.net/metadata: attempt #2 Jul 6 23:31:22.705178 ignition[924]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57020->[::1]:53: read: connection refused Jul 6 23:31:22.826648 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 6 23:31:22.827727 systemd-networkd[912]: eno1: Link UP Jul 6 23:31:22.827872 systemd-networkd[912]: eno2: Link UP Jul 6 23:31:22.827998 systemd-networkd[912]: enp1s0f0np0: Link UP Jul 6 23:31:22.828154 systemd-networkd[912]: enp1s0f0np0: Gained carrier Jul 6 23:31:22.837740 systemd-networkd[912]: enp1s0f1np1: Link UP Jul 6 23:31:22.871732 systemd-networkd[912]: enp1s0f0np0: DHCPv4 address 147.75.203.59/31, gateway 147.75.203.58 acquired from 145.40.83.140 Jul 6 23:31:23.105694 ignition[924]: GET https://metadata.packet.net/metadata: attempt #3 Jul 6 23:31:23.106868 ignition[924]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48620->[::1]:53: read: connection refused Jul 6 23:31:23.685321 systemd-networkd[912]: enp1s0f1np1: Gained carrier Jul 6 23:31:23.907178 ignition[924]: GET https://metadata.packet.net/metadata: attempt #4 Jul 6 23:31:23.908407 ignition[924]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51210->[::1]:53: read: connection refused Jul 6 23:31:24.005143 systemd-networkd[912]: enp1s0f0np0: Gained IPv6LL Jul 6 23:31:25.157134 systemd-networkd[912]: enp1s0f1np1: Gained IPv6LL Jul 6 23:31:25.509791 ignition[924]: GET https://metadata.packet.net/metadata: attempt #5 Jul 6 23:31:25.510993 ignition[924]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57409->[::1]:53: read: connection refused Jul 6 23:31:28.714521 ignition[924]: GET https://metadata.packet.net/metadata: attempt #6 Jul 6 23:31:29.898042 ignition[924]: GET result: OK Jul 6 23:31:30.301702 ignition[924]: Ignition finished successfully Jul 6 23:31:30.306944 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:31:30.333765 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:31:30.339984 ignition[941]: Ignition 2.20.0 Jul 6 23:31:30.339989 ignition[941]: Stage: disks Jul 6 23:31:30.340092 ignition[941]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:31:30.340098 ignition[941]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:31:30.340635 ignition[941]: disks: disks passed Jul 6 23:31:30.340638 ignition[941]: POST message to Packet Timeline Jul 6 23:31:30.340650 ignition[941]: GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:31:31.354551 ignition[941]: GET result: OK Jul 6 23:31:32.216015 ignition[941]: Ignition finished successfully Jul 6 23:31:32.220032 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
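The repeated "GET https://metadata.packet.net/metadata: attempt #N" / "GET error ... connection refused" lines above show the Ignition kargs stage retrying the Equinix Metal metadata endpoint until networkd brings the links up and DNS starts resolving, at which point attempt #6 returns OK. The sketch below shows the same retry-with-backoff pattern using Python's standard library; the URL is the one from the log, but the backoff schedule is illustrative rather than Ignition's actual policy.

```python
# Sketch of the retry loop visible above: keep re-requesting the metadata
# endpoint until the network is actually up. Backoff schedule is illustrative.
import time
import urllib.error
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"

def fetch_with_retries(url: str, max_attempts: int = 6, base_delay: float = 1.0) -> bytes:
    for attempt in range(1, max_attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)   # 1s, 2s, 4s, ...
            print(f"GET error: {err}; retrying in {delay:.0f}s")
            time.sleep(delay)

if __name__ == "__main__":
    body = fetch_with_retries(METADATA_URL)
    print(f"GET result: OK ({len(body)} bytes)")
```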
Jul 6 23:31:32.235314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:31:32.253822 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:31:32.274869 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:31:32.295880 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:31:32.315878 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:31:32.343781 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:31:32.379370 systemd-fsck[962]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 6 23:31:32.389970 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:31:32.425792 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:31:32.499582 kernel: EXT4-fs (sda9): mounted filesystem daab0c95-3783-44c0-bef8-9d61a5c53c14 r/w with ordered data mode. Quota mode: none. Jul 6 23:31:32.499898 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:31:32.510022 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:31:32.548863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:31:32.593686 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (971) Jul 6 23:31:32.593700 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:31:32.593708 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:31:32.593716 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:31:32.558078 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:31:32.614797 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:31:32.614819 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:31:32.626780 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 6 23:31:32.638029 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 6 23:31:32.659619 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:31:32.696750 coreos-metadata[988]: Jul 06 23:31:32.674 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 6 23:31:32.659647 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:31:32.734750 coreos-metadata[989]: Jul 06 23:31:32.682 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 6 23:31:32.679646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:31:32.704763 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:31:32.738935 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:31:32.788664 initrd-setup-root[1003]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:31:32.798599 initrd-setup-root[1010]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:31:32.808653 initrd-setup-root[1017]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:31:32.818651 initrd-setup-root[1024]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:31:32.827599 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:31:32.858717 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 6 23:31:32.886755 kernel: BTRFS info (device sda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:31:32.877047 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:31:32.895320 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:31:32.910685 ignition[1091]: INFO : Ignition 2.20.0 Jul 6 23:31:32.910685 ignition[1091]: INFO : Stage: mount Jul 6 23:31:32.910685 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:31:32.910685 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:31:32.910685 ignition[1091]: INFO : mount: mount passed Jul 6 23:31:32.910685 ignition[1091]: INFO : POST message to Packet Timeline Jul 6 23:31:32.910685 ignition[1091]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:31:32.913077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:31:33.795270 coreos-metadata[989]: Jul 06 23:31:33.795 INFO Fetch successful Jul 6 23:31:33.876559 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 6 23:31:33.876616 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jul 6 23:31:33.917091 ignition[1091]: INFO : GET result: OK Jul 6 23:31:34.168077 coreos-metadata[988]: Jul 06 23:31:34.168 INFO Fetch successful Jul 6 23:31:34.202995 coreos-metadata[988]: Jul 06 23:31:34.202 INFO wrote hostname ci-4230.2.1-a-901fa91dbf to /sysroot/etc/hostname Jul 6 23:31:34.204359 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:31:34.303550 ignition[1091]: INFO : Ignition finished successfully Jul 6 23:31:34.305816 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:31:34.333751 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:31:34.345068 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:31:34.403226 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (1116) Jul 6 23:31:34.403255 kernel: BTRFS info (device sda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5 Jul 6 23:31:34.403566 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:31:34.417213 kernel: BTRFS info (device sda6): using free space tree Jul 6 23:31:34.431892 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 6 23:31:34.431913 kernel: BTRFS info (device sda6): auto enabling async discard Jul 6 23:31:34.433884 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
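The "wrote hostname ci-4230.2.1-a-901fa91dbf to /sysroot/etc/hostname" line above is flatcar-metadata-hostname persisting the metadata-provided hostname into the target root while it is still mounted at /sysroot, so the real system boots with it already set. The sketch below shows one way to do such a write atomically via a temp file and rename; the paths mirror the log, but the helper itself is hypothetical, not that service's code.

```python
# Sketch: persist a metadata-provided hostname under the target root, as in
# the "wrote hostname ... to /sysroot/etc/hostname" line above. The atomic
# tmp-file + rename dance is a common pattern, not necessarily the exact
# implementation used by flatcar-metadata-hostname.
import os
import tempfile

def write_hostname(hostname: str, sysroot: str = "/sysroot") -> str:
    etc = os.path.join(sysroot, "etc")
    os.makedirs(etc, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(dir=etc, prefix=".hostname.")
    with os.fdopen(fd, "w") as tmp:
        tmp.write(hostname.strip() + "\n")
    target = os.path.join(etc, "hostname")
    os.rename(tmp_path, target)           # atomic on the same filesystem
    return target

if __name__ == "__main__":
    # Hostname value as reported in this log; a scratch dir stands in for /sysroot.
    path = write_hostname("ci-4230.2.1-a-901fa91dbf", sysroot=tempfile.mkdtemp())
    print("wrote hostname to", path)
```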
Jul 6 23:31:34.464037 ignition[1133]: INFO : Ignition 2.20.0 Jul 6 23:31:34.464037 ignition[1133]: INFO : Stage: files Jul 6 23:31:34.479771 ignition[1133]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:31:34.479771 ignition[1133]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:31:34.479771 ignition[1133]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:31:34.479771 ignition[1133]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:31:34.479771 ignition[1133]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:31:34.479771 ignition[1133]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:31:34.479771 ignition[1133]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:31:34.479771 ignition[1133]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:31:34.479771 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:31:34.479771 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 6 23:31:34.467478 unknown[1133]: wrote ssh authorized keys file for user: core Jul 6 23:31:34.613600 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:31:34.760755 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 6 23:31:34.760755 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:31:34.793862 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 6 23:31:35.326827 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:31:35.371991 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:31:35.388728 
ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:31:35.388728 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 6 23:31:35.975010 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:31:36.394202 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 6 23:31:36.394202 ignition[1133]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:31:36.425757 ignition[1133]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:31:36.425757 ignition[1133]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:31:36.425757 ignition[1133]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 6 23:31:36.425757 ignition[1133]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:31:36.425757 ignition[1133]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:31:36.425757 ignition[1133]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:31:36.425757 ignition[1133]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:31:36.425757 ignition[1133]: INFO : files: files passed Jul 6 23:31:36.425757 ignition[1133]: INFO : POST message to Packet Timeline Jul 6 23:31:36.425757 ignition[1133]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:31:37.458569 ignition[1133]: INFO : GET result: OK Jul 6 23:31:37.821481 ignition[1133]: INFO : Ignition finished successfully Jul 6 23:31:37.824803 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:31:37.851800 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:31:37.864826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:31:37.884931 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:31:37.890750 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:31:37.905161 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
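The files stage logged above does three kinds of work relative to /sysroot: writing regular files (update.conf, the helm and cilium archives), creating the /etc/extensions/kubernetes.raw symlink, and installing plus enabling prepare-helm.service. The compressed sketch below shows those operations under a target root; the paths come from the log, but the file contents and the wants-symlink used for "setting preset to enabled" are assumptions, and none of this is Ignition's actual code.

```python
# Compressed sketch of the three kinds of work the Ignition "files" stage logs
# above: write a file, create a symlink, and install+enable a systemd unit,
# all relative to the target root (/sysroot in the log, a temp dir here).
# File contents and the multi-user.target.wants location are placeholders.
import os
import tempfile

def write_file(root, rel_path, contents):
    path = os.path.join(root, rel_path.lstrip("/"))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(contents)

def write_link(root, rel_link, target):
    path = os.path.join(root, rel_link.lstrip("/"))
    os.makedirs(os.path.dirname(path), exist_ok=True)
    os.symlink(target, path)

def enable_unit(root, unit_name):
    # "setting preset to enabled" roughly corresponds to a wants-symlink;
    # the real install target comes from the unit's [Install] section.
    wants = os.path.join(root, "etc/systemd/system/multi-user.target.wants")
    os.makedirs(wants, exist_ok=True)
    os.symlink(f"/etc/systemd/system/{unit_name}", os.path.join(wants, unit_name))

root = tempfile.mkdtemp()
write_file(root, "/etc/flatcar/update.conf", "# placeholder contents\n")
write_link(root, "/etc/extensions/kubernetes.raw",
           "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw")
write_file(root, "/etc/systemd/system/prepare-helm.service",
           "[Unit]\nDescription=Unpack helm\n")
enable_unit(root, "prepare-helm.service")
print("populated", root)
```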
Jul 6 23:31:37.956819 initrd-setup-root-after-ignition[1172]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:31:37.956819 initrd-setup-root-after-ignition[1172]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:31:37.924860 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:31:38.003855 initrd-setup-root-after-ignition[1176]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:31:37.959998 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:31:38.019829 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:31:38.019879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:31:38.041955 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:31:38.060710 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:31:38.079777 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:31:38.089859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:31:38.177593 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:31:38.196069 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:31:38.255373 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:31:38.268160 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:31:38.289247 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:31:38.309204 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:31:38.309655 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:31:38.348012 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:31:38.358158 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:31:38.378160 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:31:38.396167 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:31:38.417156 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:31:38.438164 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:31:38.458265 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:31:38.479196 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:31:38.500279 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:31:38.520160 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:31:38.538057 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:31:38.538479 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:31:38.564244 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:31:38.584293 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:31:38.605026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:31:38.605495 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:31:38.628161 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jul 6 23:31:38.628601 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:31:38.660137 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:31:38.660634 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:31:38.680378 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:31:38.699020 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:31:38.699521 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:31:38.720161 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:31:38.739175 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:31:38.759240 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:31:38.759586 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:31:38.779302 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:31:38.779621 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:31:38.802398 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:31:38.802849 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:31:38.823258 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:31:38.933753 ignition[1197]: INFO : Ignition 2.20.0 Jul 6 23:31:38.933753 ignition[1197]: INFO : Stage: umount Jul 6 23:31:38.933753 ignition[1197]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:31:38.933753 ignition[1197]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 6 23:31:38.933753 ignition[1197]: INFO : umount: umount passed Jul 6 23:31:38.933753 ignition[1197]: INFO : POST message to Packet Timeline Jul 6 23:31:38.933753 ignition[1197]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 6 23:31:38.823686 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:31:38.841271 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 6 23:31:38.841727 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 6 23:31:38.873700 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:31:38.892611 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:31:38.892757 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:31:38.925925 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:31:38.941783 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:31:38.942215 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:31:38.960164 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:31:38.960554 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:31:39.004036 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:31:39.006715 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:31:39.006969 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:31:39.027903 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:31:39.028155 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 6 23:31:40.057802 ignition[1197]: INFO : GET result: OK Jul 6 23:31:40.869820 ignition[1197]: INFO : Ignition finished successfully Jul 6 23:31:40.872946 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:31:40.873242 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:31:40.889857 systemd[1]: Stopped target network.target - Network. Jul 6 23:31:40.904848 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:31:40.905066 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:31:40.922983 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:31:40.923162 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:31:40.940993 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:31:40.941170 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:31:40.958996 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:31:40.959176 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:31:40.976981 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:31:40.977166 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:31:40.995348 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:31:41.013070 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:31:41.031688 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:31:41.031969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:31:41.055382 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:31:41.055977 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:31:41.056245 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:31:41.073181 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:31:41.075607 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:31:41.075724 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:31:41.111713 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:31:41.120746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:31:41.120783 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:31:41.149929 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:31:41.150022 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:31:41.169364 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:31:41.169551 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:31:41.188934 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:31:41.189113 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:31:41.211166 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:31:41.234253 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:31:41.234471 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:31:41.235653 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 6 23:31:41.236021 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:31:41.262130 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:31:41.262157 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:31:41.279732 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:31:41.279759 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:31:41.299692 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:31:41.299747 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:31:41.614667 systemd-journald[270]: Received SIGTERM from PID 1 (systemd). Jul 6 23:31:41.329973 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:31:41.330111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:31:41.368710 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:31:41.368862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:31:41.411908 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:31:41.420810 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:31:41.420959 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:31:41.452157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:31:41.452293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:31:41.478173 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:31:41.478334 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:31:41.479422 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:31:41.479655 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:31:41.502750 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:31:41.503043 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:31:41.518800 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:31:41.559968 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:31:41.576273 systemd[1]: Switching root. Jul 6 23:31:41.751712 systemd-journald[270]: Journal stopped Jul 6 23:31:43.473403 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:31:43.473419 kernel: SELinux: policy capability open_perms=1 Jul 6 23:31:43.473426 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:31:43.473432 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:31:43.473438 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:31:43.473444 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:31:43.473450 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:31:43.473455 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:31:43.473461 kernel: audit: type=1403 audit(1751844701.848:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:31:43.473468 systemd[1]: Successfully loaded SELinux policy in 74.639ms. Jul 6 23:31:43.473476 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.204ms. 
Jul 6 23:31:43.473483 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:31:43.473489 systemd[1]: Detected architecture x86-64. Jul 6 23:31:43.473495 systemd[1]: Detected first boot. Jul 6 23:31:43.473501 systemd[1]: Hostname set to . Jul 6 23:31:43.473509 systemd[1]: Initializing machine ID from random generator. Jul 6 23:31:43.473516 zram_generator::config[1251]: No configuration found. Jul 6 23:31:43.473522 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:31:43.473533 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:31:43.473539 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:31:43.473563 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:31:43.473570 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:31:43.473590 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:31:43.473597 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:31:43.473603 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:31:43.473610 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:31:43.473618 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:31:43.473624 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:31:43.473631 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:31:43.473638 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:31:43.473644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:31:43.473651 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:31:43.473658 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:31:43.473664 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:31:43.473670 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:31:43.473677 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:31:43.473683 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jul 6 23:31:43.473691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:31:43.473697 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:31:43.473704 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:31:43.473712 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:31:43.473718 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:31:43.473725 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:31:43.473732 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
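The "Detected first boot" / "Initializing machine ID from random generator" lines above correspond to systemd minting a fresh /etc/machine-id: 128 random bits printed as 32 lowercase hex characters, documented as qualifying as a v4 UUID. A tiny sketch of producing a value in that format follows; it is an illustration of the format, not systemd's sd_id128 code.

```python
# Sketch: generate a first-boot machine ID in the /etc/machine-id format --
# 128 random bits as 32 lowercase hex characters. A random v4 UUID mirrors
# the documented behaviour of the randomized ID qualifying as a UUIDv4.
import uuid

def new_machine_id() -> str:
    return uuid.uuid4().hex          # 32 lowercase hex chars, no dashes

if __name__ == "__main__":
    print(new_machine_id())
    # The journal directory names in this log, e.g.
    # /run/log/journal/beb66f1419f64da8a84afae107e590a1, are IDs of this form.
```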
Jul 6 23:31:43.473738 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:31:43.473746 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:31:43.473752 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:31:43.473759 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:31:43.473765 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:31:43.473772 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:31:43.473780 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:31:43.473786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:31:43.473793 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:31:43.473800 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:31:43.473807 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:31:43.473813 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:31:43.473820 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:31:43.473826 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:31:43.473834 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:31:43.473841 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:31:43.473848 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:31:43.473855 systemd[1]: Reached target machines.target - Containers. Jul 6 23:31:43.473862 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:31:43.473868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:31:43.473875 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:31:43.473882 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:31:43.473889 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:31:43.473896 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:31:43.473904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:31:43.473911 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:31:43.473917 kernel: ACPI: bus type drm_connector registered Jul 6 23:31:43.473923 kernel: fuse: init (API version 7.39) Jul 6 23:31:43.473929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:31:43.473936 kernel: loop: module loaded Jul 6 23:31:43.473942 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:31:43.474017 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:31:43.474024 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:31:43.474030 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:31:43.474063 systemd[1]: Stopped systemd-fsck-usr.service. 
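The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are instances of the modprobe@.service template: the instance name after the "@" is the kernel module to load, and the kernel lines that follow ("fuse: init", "loop: module loaded") confirm the modules arriving. A minimal sketch of the same pattern, assuming systemctl is available and the caller may start units:

    import subprocess

    # Each modprobe@.service instance loads exactly one module, named after the "@";
    # this mirrors the jobs queued in the log above (requires root to take effect).
    for module in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
        subprocess.run(["systemctl", "start", f"modprobe@{module}.service"], check=False)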
Jul 6 23:31:43.474071 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:31:43.474092 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:31:43.474099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:31:43.474114 systemd-journald[1354]: Collecting audit messages is disabled. Jul 6 23:31:43.474130 systemd-journald[1354]: Journal started Jul 6 23:31:43.474145 systemd-journald[1354]: Runtime Journal (/run/log/journal/f418561f1fa9487b8570e90445b213ee) is 8M, max 639.9M, 631.9M free. Jul 6 23:31:42.287429 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:31:42.302311 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 6 23:31:42.302969 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:31:43.501584 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:31:43.532531 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:31:43.555569 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:31:43.575604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:31:43.596668 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:31:43.596693 systemd[1]: Stopped verity-setup.service. Jul 6 23:31:43.621571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:31:43.629546 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:31:43.638993 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:31:43.648683 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:31:43.658801 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:31:43.668807 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:31:43.678789 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:31:43.688778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:31:43.698864 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:31:43.709862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:31:43.720940 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:31:43.721109 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:31:43.732041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:31:43.732262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:31:43.741404 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:31:43.741792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:31:43.752428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:31:43.752915 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:31:43.764405 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:31:43.764962 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 6 23:31:43.775455 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:31:43.775936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:31:43.786474 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:31:43.798462 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:31:43.811554 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:31:43.824486 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:31:43.837449 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:31:43.873037 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:31:43.904746 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:31:43.916401 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:31:43.926703 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:31:43.926730 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:31:43.937648 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:31:43.961124 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:31:43.975801 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:31:43.986069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:31:43.989002 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:31:43.999122 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:31:44.009645 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:31:44.010262 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:31:44.014175 systemd-journald[1354]: Time spent on flushing to /var/log/journal/f418561f1fa9487b8570e90445b213ee is 13.622ms for 1369 entries. Jul 6 23:31:44.014175 systemd-journald[1354]: System Journal (/var/log/journal/f418561f1fa9487b8570e90445b213ee) is 8M, max 195.6M, 187.6M free. Jul 6 23:31:44.037796 systemd-journald[1354]: Received client request to flush runtime journal. Jul 6 23:31:44.028628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:31:44.029341 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:31:44.040273 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:31:44.053236 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:31:44.065578 kernel: loop0: detected capacity change from 0 to 138176 Jul 6 23:31:44.076982 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:31:44.091529 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:31:44.095839 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
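The journald lines above document the flush from the volatile journal in /run/log/journal to the persistent one under /var/log/journal (13.622ms for 1369 entries, prompted by the "Received client request to flush runtime journal" that systemd-journal-flush.service sends). A minimal sketch, assuming journalctl is on PATH and the caller is privileged, that issues the same request and then reports how much disk the journals use:

    import subprocess

    subprocess.run(["journalctl", "--flush"], check=False)       # ask journald to move /run -> /var
    subprocess.run(["journalctl", "--disk-usage"], check=False)  # size of active + archived journal files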
Jul 6 23:31:44.107656 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:31:44.119789 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:31:44.136941 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:31:44.137530 kernel: loop1: detected capacity change from 0 to 224512 Jul 6 23:31:44.147801 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:31:44.158733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:31:44.168712 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:31:44.181900 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:31:44.185584 kernel: loop2: detected capacity change from 0 to 8 Jul 6 23:31:44.207775 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:31:44.219316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:31:44.234530 kernel: loop3: detected capacity change from 0 to 147912 Jul 6 23:31:44.236380 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:31:44.237005 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:31:44.248723 udevadm[1396]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:31:44.251670 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Jul 6 23:31:44.251681 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Jul 6 23:31:44.254126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:31:44.288532 kernel: loop4: detected capacity change from 0 to 138176 Jul 6 23:31:44.312573 kernel: loop5: detected capacity change from 0 to 224512 Jul 6 23:31:44.336195 kernel: loop6: detected capacity change from 0 to 8 Jul 6 23:31:44.336233 kernel: loop7: detected capacity change from 0 to 147912 Jul 6 23:31:44.354964 (sd-merge)[1415]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 6 23:31:44.355220 (sd-merge)[1415]: Merged extensions into '/usr'. Jul 6 23:31:44.357825 systemd[1]: Reload requested from client PID 1392 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:31:44.357832 systemd[1]: Reloading... Jul 6 23:31:44.361038 ldconfig[1386]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:31:44.388602 zram_generator::config[1443]: No configuration found. Jul 6 23:31:44.462694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:31:44.516062 systemd[1]: Reloading finished in 157 ms. Jul 6 23:31:44.537744 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:31:44.547894 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:31:44.559873 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:31:44.582916 systemd[1]: Starting ensure-sysext.service... Jul 6 23:31:44.590877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
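The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-packet' extension images onto /usr, which is why loop4 through loop7 appear just beforehand and why PID 1 then reloads its units. A minimal sketch, assuming systemd-sysext is on PATH (refresh needs root), for inspecting and re-merging that overlay:

    import subprocess

    subprocess.run(["systemd-sysext", "status"], check=False)   # which hierarchies are extended, and by what
    subprocess.run(["systemd-sysext", "list"], check=False)     # extension images discovered on disk
    subprocess.run(["systemd-sysext", "refresh"], check=False)  # unmerge and merge again (root only)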
Jul 6 23:31:44.603520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:31:44.613864 systemd-tmpfiles[1501]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:31:44.614015 systemd-tmpfiles[1501]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:31:44.614473 systemd-tmpfiles[1501]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:31:44.614691 systemd-tmpfiles[1501]: ACLs are not supported, ignoring. Jul 6 23:31:44.614743 systemd-tmpfiles[1501]: ACLs are not supported, ignoring. Jul 6 23:31:44.616490 systemd-tmpfiles[1501]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:31:44.616494 systemd-tmpfiles[1501]: Skipping /boot Jul 6 23:31:44.618809 systemd[1]: Reload requested from client PID 1500 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:31:44.618834 systemd[1]: Reloading... Jul 6 23:31:44.622124 systemd-tmpfiles[1501]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:31:44.622128 systemd-tmpfiles[1501]: Skipping /boot Jul 6 23:31:44.630705 systemd-udevd[1502]: Using default interface naming scheme 'v255'. Jul 6 23:31:44.649536 zram_generator::config[1531]: No configuration found. Jul 6 23:31:44.682538 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1572) Jul 6 23:31:44.691537 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 6 23:31:44.698536 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:31:44.698586 kernel: ACPI: button: Sleep Button [SLPB] Jul 6 23:31:44.698602 kernel: IPMI message handler: version 39.2 Jul 6 23:31:44.706887 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 6 23:31:44.721538 kernel: ipmi device interface Jul 6 23:31:44.721604 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 6 23:31:44.721836 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:31:44.732535 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 6 23:31:44.754552 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jul 6 23:31:44.766003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:31:44.812531 kernel: iTCO_vendor_support: vendor-support=0 Jul 6 23:31:44.813534 kernel: ipmi_si: IPMI System Interface driver Jul 6 23:31:44.813584 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 6 23:31:44.813832 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jul 6 23:31:44.824465 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 6 23:31:44.849057 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 6 23:31:44.849124 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 6 23:31:44.852542 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 6 23:31:44.856261 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jul 6 23:31:44.856349 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. 
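systemd-tmpfiles warns above about duplicate lines for /root, /var/log/journal and /var/lib/systemd coming from different tmpfiles.d fragments; the duplicates are ignored rather than fatal. A minimal sketch, assuming this systemd generation's systemd-tmpfiles supports --cat-config, that dumps the merged configuration and prints every configuration line touching those paths so the overlapping fragments can be tracked down:

    import subprocess

    merged = subprocess.run(
        ["systemd-tmpfiles", "--cat-config"], capture_output=True, text=True, check=False
    ).stdout

    watched = {"/root", "/var/log/journal", "/var/lib/systemd"}
    for line in merged.splitlines():
        cols = line.split()
        # tmpfiles.d lines look like: <type> <path> <mode> <uid> <gid> <age> <argument>
        if len(cols) >= 2 and cols[1] in watched:
            print(line)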
Jul 6 23:31:44.867940 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jul 6 23:31:44.885767 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 6 23:31:44.886325 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 6 23:31:44.886376 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 6 23:31:44.904241 systemd[1]: Reloading finished in 285 ms. Jul 6 23:31:44.920168 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jul 6 23:31:44.920285 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jul 6 23:31:44.929234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:31:44.931529 kernel: intel_rapl_common: Found RAPL domain package Jul 6 23:31:44.931573 kernel: intel_rapl_common: Found RAPL domain core Jul 6 23:31:44.931588 kernel: intel_rapl_common: Found RAPL domain dram Jul 6 23:31:44.955528 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jul 6 23:31:44.984246 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:31:44.988588 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jul 6 23:31:45.009195 systemd[1]: Finished ensure-sysext.service. Jul 6 23:31:45.036106 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 6 23:31:45.045666 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:31:45.057623 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:31:45.066408 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:31:45.075878 augenrules[1705]: No rules Jul 6 23:31:45.080532 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 6 23:31:45.086733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:31:45.087437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:31:45.087550 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 6 23:31:45.097194 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:31:45.107156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:31:45.118163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:31:45.127697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:31:45.128228 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:31:45.140625 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:31:45.141251 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:31:45.153474 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:31:45.154402 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
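The ipmi_si lines above find a KCS-interface BMC at I/O port 0xca2 and report it as man_id 0x002a7c, prod_id 0x1b0f, dev_id 0x20; in the IPMI Get Device ID convention the manufacturer ID is the vendor's IANA enterprise number in hexadecimal. A small sketch that decodes those values from the log to decimal for lookup in the IANA PEN registry:

    # Values copied from the "Found new BMC" line above.
    man_id, prod_id, dev_id = 0x002A7C, 0x1B0F, 0x20

    print(f"manufacturer (IANA PEN): {man_id}")    # 10876
    print(f"product id:              {prod_id}")   # 6927
    print(f"device id:               {dev_id:#04x}")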
Jul 6 23:31:45.155342 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:31:45.181313 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:31:45.193420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:31:45.202630 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:31:45.203741 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:31:45.218112 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:31:45.218331 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:31:45.218610 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:31:45.218792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:31:45.218904 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:31:45.219062 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:31:45.219148 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:31:45.219287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:31:45.219368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:31:45.219504 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:31:45.219590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:31:45.219754 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:31:45.219922 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:31:45.224945 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:31:45.237655 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:31:45.237751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:31:45.237790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:31:45.238443 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:31:45.239302 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:31:45.239328 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:31:45.245730 lvm[1734]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:31:45.248839 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:31:45.266010 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:31:45.312883 systemd-resolved[1718]: Positive Trust Anchors: Jul 6 23:31:45.312889 systemd-resolved[1718]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:31:45.312913 systemd-resolved[1718]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:31:45.315671 systemd-resolved[1718]: Using system hostname 'ci-4230.2.1-a-901fa91dbf'. Jul 6 23:31:45.319374 systemd-networkd[1717]: lo: Link UP Jul 6 23:31:45.319377 systemd-networkd[1717]: lo: Gained carrier Jul 6 23:31:45.322153 systemd-networkd[1717]: bond0: netdev ready Jul 6 23:31:45.323190 systemd-networkd[1717]: Enumeration completed Jul 6 23:31:45.327750 systemd-networkd[1717]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:94.network. Jul 6 23:31:45.339887 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:31:45.350889 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:31:45.360656 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:31:45.370886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:31:45.381945 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:31:45.396835 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:31:45.406781 systemd[1]: Reached target network.target - Network. Jul 6 23:31:45.414765 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:31:45.425725 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:31:45.435924 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:31:45.446826 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:31:45.457766 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:31:45.468693 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:31:45.468795 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:31:45.476726 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:31:45.487159 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:31:45.496601 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:31:45.507595 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:31:45.516240 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:31:45.527491 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:31:45.537088 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:31:45.550104 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:31:45.561037 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
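The positive trust anchor listed by systemd-resolved above, ". IN DS 20326 8 2 e06d...", is the root zone's built-in DNSSEC trust anchor: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the 32-byte digest of the root KSK; the negative anchors are the usual private and reverse zones for which DNSSEC validation is not attempted. A minimal sketch that splits that DS record presentation string into its fields (the record text is taken from the log, nothing is fetched):

    # DS record exactly as logged by systemd-resolved above.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split(maxsplit=6)
    print({
        "owner": owner,
        "key_tag": int(key_tag),          # 20326
        "algorithm": int(algorithm),      # 8 = RSA/SHA-256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest_bytes": len(digest) // 2, # 32
    })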
Jul 6 23:31:45.586737 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:31:45.588989 lvm[1756]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:31:45.598422 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:31:45.610392 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:31:45.620658 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 6 23:31:45.633531 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jul 6 23:31:45.633567 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:31:45.640182 systemd-networkd[1717]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:fc:95.network. Jul 6 23:31:45.642861 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:31:45.654343 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:31:45.663666 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:31:45.671678 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:31:45.671703 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:31:45.672552 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:31:45.683381 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:31:45.694229 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:31:45.704228 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:31:45.706484 coreos-metadata[1761]: Jul 06 23:31:45.706 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 6 23:31:45.707420 coreos-metadata[1761]: Jul 06 23:31:45.707 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 6 23:31:45.714628 dbus-daemon[1762]: [system] SELinux support is enabled Jul 6 23:31:45.715268 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:31:45.717066 jq[1765]: false Jul 6 23:31:45.725731 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:31:45.726427 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
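coreos-metadata's first fetch of https://metadata.packet.net/metadata fails above because it races the bond coming up; the agent retries, and the fetch succeeds later in the log once bond0 has carrier. A minimal standard-library sketch of the same fetch-with-retry idea; the URL is the one in the log, while the attempt count, delay and timeout here are illustrative, not Flatcar's actual values:

    import time
    import urllib.request

    URL = "https://metadata.packet.net/metadata"  # endpoint named in the log above

    def fetch_metadata(attempts: int = 5, delay: float = 2.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read()
            except OSError as exc:  # URLError is an OSError; the network may simply not be up yet
                print(f"attempt {attempt} failed: {exc}")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")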
Jul 6 23:31:45.735336 extend-filesystems[1767]: Found loop4 Jul 6 23:31:45.735336 extend-filesystems[1767]: Found loop5 Jul 6 23:31:45.768691 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jul 6 23:31:45.768708 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1629) Jul 6 23:31:45.768718 extend-filesystems[1767]: Found loop6 Jul 6 23:31:45.768718 extend-filesystems[1767]: Found loop7 Jul 6 23:31:45.768718 extend-filesystems[1767]: Found sda Jul 6 23:31:45.768718 extend-filesystems[1767]: Found sda1 Jul 6 23:31:45.768718 extend-filesystems[1767]: Found sda2 Jul 6 23:31:45.768718 extend-filesystems[1767]: Found sda3 Jul 6 23:31:45.768718 extend-filesystems[1767]: Found usr Jul 6 23:31:45.768718 extend-filesystems[1767]: Found sda4 Jul 6 23:31:45.768718 extend-filesystems[1767]: Found sda6 Jul 6 23:31:45.768718 extend-filesystems[1767]: Found sda7 Jul 6 23:31:45.768718 extend-filesystems[1767]: Found sda9 Jul 6 23:31:45.768718 extend-filesystems[1767]: Checking size of /dev/sda9 Jul 6 23:31:45.768718 extend-filesystems[1767]: Resized partition /dev/sda9 Jul 6 23:31:45.928638 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 6 23:31:45.928783 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jul 6 23:31:45.928801 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 6 23:31:45.928814 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Jul 6 23:31:45.928828 kernel: bond0: active interface up! Jul 6 23:31:45.736332 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:31:45.928921 extend-filesystems[1778]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:31:45.782628 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:31:45.792047 systemd-networkd[1717]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 6 23:31:45.793289 systemd-networkd[1717]: enp1s0f0np0: Link UP Jul 6 23:31:45.793433 systemd-networkd[1717]: enp1s0f0np0: Gained carrier Jul 6 23:31:45.811623 systemd-networkd[1717]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:fc:94.network. Jul 6 23:31:45.950947 sshd_keygen[1789]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:31:45.811779 systemd-networkd[1717]: enp1s0f1np1: Link UP Jul 6 23:31:45.951075 update_engine[1792]: I20250706 23:31:45.949673 1792 main.cc:92] Flatcar Update Engine starting Jul 6 23:31:45.951075 update_engine[1792]: I20250706 23:31:45.950505 1792 update_check_scheduler.cc:74] Next update check in 3m38s Jul 6 23:31:45.811907 systemd-networkd[1717]: enp1s0f1np1: Gained carrier Jul 6 23:31:45.824648 systemd-networkd[1717]: bond0: Link UP Jul 6 23:31:45.824803 systemd-networkd[1717]: bond0: Gained carrier Jul 6 23:31:45.824944 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:31:45.825200 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:31:45.825348 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:31:45.825429 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:31:45.827618 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:31:45.857639 systemd[1]: Starting systemd-logind.service - User Login Management... 
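The systemd-networkd lines above assemble bond0 from the two mlx5 ports, matching each NIC by MAC address via 10-0c:42:a1:97:fc:94.network and 10-0c:42:a1:97:fc:95.network and configuring the bond itself via 05-bond0.network; the kernel's "No 802.3ad response from the link partner" warning typically just means the switch-side LACP handshake has not completed yet. A minimal sketch, assuming networkctl is available, for checking the resulting state, with the link names taken from the log:

    import subprocess

    # Per-link view (carrier, addresses, bond membership) as systemd-networkd sees it.
    for link in ("bond0", "enp1s0f0np0", "enp1s0f1np1"):
        subprocess.run(["networkctl", "status", link, "--no-pager"], check=False)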
Jul 6 23:31:45.892620 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jul 6 23:31:45.893782 systemd-logind[1790]: Watching system buttons on /dev/input/event3 (Power Button) Jul 6 23:31:45.893794 systemd-logind[1790]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 6 23:31:45.893805 systemd-logind[1790]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 6 23:31:45.894002 systemd-logind[1790]: New seat seat0. Jul 6 23:31:45.920877 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:31:45.926615 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:31:45.929171 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:31:45.951093 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:31:45.973124 jq[1798]: true Jul 6 23:31:45.973125 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:31:45.982870 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:31:46.005770 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:31:46.005880 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:31:46.006068 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:31:46.006174 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:31:46.016108 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:31:46.016214 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:31:46.032529 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 6 23:31:46.035781 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:31:46.048509 (ntainerd)[1805]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:31:46.049911 jq[1804]: true Jul 6 23:31:46.053113 tar[1802]: linux-amd64/LICENSE Jul 6 23:31:46.053286 tar[1802]: linux-amd64/helm Jul 6 23:31:46.053481 dbus-daemon[1762]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 6 23:31:46.057650 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 6 23:31:46.057766 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jul 6 23:31:46.068357 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:31:46.090685 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:31:46.098644 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:31:46.098749 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:31:46.109645 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:31:46.109728 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 6 23:31:46.118940 bash[1834]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:31:46.130674 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:31:46.143244 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:31:46.153897 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:31:46.154016 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:31:46.160066 locksmithd[1841]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:31:46.178727 systemd[1]: Starting sshkeys.service... Jul 6 23:31:46.186371 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:31:46.199084 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:31:46.210456 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:31:46.222959 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:31:46.227871 containerd[1805]: time="2025-07-06T23:31:46.227825723Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 6 23:31:46.234006 coreos-metadata[1855]: Jul 06 23:31:46.233 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 6 23:31:46.236301 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:31:46.245698 containerd[1805]: time="2025-07-06T23:31:46.245678003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246403 containerd[1805]: time="2025-07-06T23:31:46.246382338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246428 containerd[1805]: time="2025-07-06T23:31:46.246403348Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:31:46.246428 containerd[1805]: time="2025-07-06T23:31:46.246416432Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:31:46.246509 containerd[1805]: time="2025-07-06T23:31:46.246499618Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:31:46.246534 containerd[1805]: time="2025-07-06T23:31:46.246515152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246573 containerd[1805]: time="2025-07-06T23:31:46.246563719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246595 containerd[1805]: time="2025-07-06T23:31:46.246572611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246706 containerd[1805]: time="2025-07-06T23:31:46.246696334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246725 containerd[1805]: time="2025-07-06T23:31:46.246705676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246725 containerd[1805]: time="2025-07-06T23:31:46.246713095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246725 containerd[1805]: time="2025-07-06T23:31:46.246718186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246772 containerd[1805]: time="2025-07-06T23:31:46.246760822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246896 containerd[1805]: time="2025-07-06T23:31:46.246886473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246975 containerd[1805]: time="2025-07-06T23:31:46.246966601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:31:46.246998 containerd[1805]: time="2025-07-06T23:31:46.246975405Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:31:46.247026 containerd[1805]: time="2025-07-06T23:31:46.247019217Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:31:46.247055 containerd[1805]: time="2025-07-06T23:31:46.247048120Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:31:46.254803 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jul 6 23:31:46.257287 containerd[1805]: time="2025-07-06T23:31:46.257274557Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:31:46.257325 containerd[1805]: time="2025-07-06T23:31:46.257314534Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:31:46.257344 containerd[1805]: time="2025-07-06T23:31:46.257332422Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:31:46.257361 containerd[1805]: time="2025-07-06T23:31:46.257345773Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:31:46.257361 containerd[1805]: time="2025-07-06T23:31:46.257354461Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:31:46.257439 containerd[1805]: time="2025-07-06T23:31:46.257429993Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:31:46.257604 containerd[1805]: time="2025-07-06T23:31:46.257594785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:31:46.257667 containerd[1805]: time="2025-07-06T23:31:46.257658929Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jul 6 23:31:46.257691 containerd[1805]: time="2025-07-06T23:31:46.257670070Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:31:46.257691 containerd[1805]: time="2025-07-06T23:31:46.257678274Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:31:46.257691 containerd[1805]: time="2025-07-06T23:31:46.257685713Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:31:46.257734 containerd[1805]: time="2025-07-06T23:31:46.257692960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:31:46.257734 containerd[1805]: time="2025-07-06T23:31:46.257700369Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:31:46.257734 containerd[1805]: time="2025-07-06T23:31:46.257709902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:31:46.257734 containerd[1805]: time="2025-07-06T23:31:46.257724727Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:31:46.257791 containerd[1805]: time="2025-07-06T23:31:46.257737441Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:31:46.257791 containerd[1805]: time="2025-07-06T23:31:46.257745205Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:31:46.257791 containerd[1805]: time="2025-07-06T23:31:46.257751246Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:31:46.257791 containerd[1805]: time="2025-07-06T23:31:46.257762628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257791 containerd[1805]: time="2025-07-06T23:31:46.257772078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257791 containerd[1805]: time="2025-07-06T23:31:46.257778900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257788567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257808899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257819542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257826236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257833461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257840379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257848940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257855251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257862192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.257877 containerd[1805]: time="2025-07-06T23:31:46.257874185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257887490Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257899624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257907108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257912653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257944814Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257960397Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257968631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257975943Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257981324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257987917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.257993989Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:31:46.258020 containerd[1805]: time="2025-07-06T23:31:46.258002908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:31:46.258229 containerd[1805]: time="2025-07-06T23:31:46.258195010Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:31:46.258313 containerd[1805]: time="2025-07-06T23:31:46.258231867Z" level=info msg="Connect containerd service" Jul 6 23:31:46.258313 containerd[1805]: time="2025-07-06T23:31:46.258250473Z" level=info msg="using legacy CRI server" Jul 6 23:31:46.258313 containerd[1805]: time="2025-07-06T23:31:46.258254959Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:31:46.258358 containerd[1805]: time="2025-07-06T23:31:46.258320587Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:31:46.258671 containerd[1805]: time="2025-07-06T23:31:46.258659859Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:31:46.258779 
containerd[1805]: time="2025-07-06T23:31:46.258757704Z" level=info msg="Start subscribing containerd event" Jul 6 23:31:46.258799 containerd[1805]: time="2025-07-06T23:31:46.258792535Z" level=info msg="Start recovering state" Jul 6 23:31:46.258815 containerd[1805]: time="2025-07-06T23:31:46.258807248Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:31:46.258840 containerd[1805]: time="2025-07-06T23:31:46.258833624Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:31:46.258866 containerd[1805]: time="2025-07-06T23:31:46.258834360Z" level=info msg="Start event monitor" Jul 6 23:31:46.258866 containerd[1805]: time="2025-07-06T23:31:46.258848563Z" level=info msg="Start snapshots syncer" Jul 6 23:31:46.258866 containerd[1805]: time="2025-07-06T23:31:46.258855173Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:31:46.258866 containerd[1805]: time="2025-07-06T23:31:46.258861884Z" level=info msg="Start streaming server" Jul 6 23:31:46.258963 containerd[1805]: time="2025-07-06T23:31:46.258900292Z" level=info msg="containerd successfully booted in 0.031522s" Jul 6 23:31:46.265781 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:31:46.276529 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jul 6 23:31:46.279977 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:31:46.296619 extend-filesystems[1778]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 6 23:31:46.296619 extend-filesystems[1778]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 6 23:31:46.296619 extend-filesystems[1778]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jul 6 23:31:46.336615 extend-filesystems[1767]: Resized filesystem in /dev/sda9 Jul 6 23:31:46.336615 extend-filesystems[1767]: Found sdb Jul 6 23:31:46.297157 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:31:46.297273 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:31:46.357138 tar[1802]: linux-amd64/README.md Jul 6 23:31:46.371709 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:31:46.707756 coreos-metadata[1761]: Jul 06 23:31:46.707 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 6 23:31:47.108792 systemd-networkd[1717]: bond0: Gained IPv6LL Jul 6 23:31:47.109118 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:31:47.173770 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:31:47.174285 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:31:47.177877 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:31:47.192737 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:31:47.215707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:31:47.227360 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:31:47.247312 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:31:47.924361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
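The resize messages above grow the root filesystem on /dev/sda9 online from 553472 to 116605649 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 444.8 GiB (about 478 GB), consistent with the 480 GB Micron device named earlier in the log. A small sketch of that arithmetic using only the block counts printed in the log:

    BLOCK = 4096                           # ext4 block size, reported as "(4k)" above
    before, after = 553_472, 116_605_649   # block counts from the resize messages

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(before):.1f} GiB")
    print(f"after:  {gib(after):.1f} GiB")
    print(f"grown:  {gib(after - before):.1f} GiB")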
Jul 6 23:31:47.943709 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:31:48.356807 kubelet[1897]: E0706 23:31:48.356722 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:31:48.357887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:31:48.357973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:31:48.358145 systemd[1]: kubelet.service: Consumed 591ms CPU time, 273.8M memory peak. Jul 6 23:31:48.520208 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jul 6 23:31:48.520351 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jul 6 23:31:49.866155 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:31:49.881904 systemd[1]: Started sshd@0-147.75.203.59:22-139.178.89.65:44204.service - OpenSSH per-connection server daemon (139.178.89.65:44204). Jul 6 23:31:49.935055 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 44204 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:31:49.935957 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:31:49.943316 systemd-logind[1790]: New session 1 of user core. Jul 6 23:31:49.944175 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:31:49.959875 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:31:49.975253 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:31:50.009867 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:31:50.020592 (systemd)[1922]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:31:50.022570 systemd-logind[1790]: New session c1 of user core. Jul 6 23:31:50.127969 systemd[1922]: Queued start job for default target default.target. Jul 6 23:31:50.136218 systemd[1922]: Created slice app.slice - User Application Slice. Jul 6 23:31:50.136232 systemd[1922]: Reached target paths.target - Paths. Jul 6 23:31:50.136272 systemd[1922]: Reached target timers.target - Timers. Jul 6 23:31:50.136929 systemd[1922]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:31:50.142432 systemd[1922]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:31:50.142460 systemd[1922]: Reached target sockets.target - Sockets. Jul 6 23:31:50.142484 systemd[1922]: Reached target basic.target - Basic System. Jul 6 23:31:50.142505 systemd[1922]: Reached target default.target - Main User Target. Jul 6 23:31:50.142520 systemd[1922]: Startup finished in 115ms. Jul 6 23:31:50.142593 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:31:50.161836 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:31:50.233435 systemd[1]: Started sshd@1-147.75.203.59:22-139.178.89.65:43128.service - OpenSSH per-connection server daemon (139.178.89.65:43128). 
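The kubelet exit above is expected on first boot: the unit starts before any cluster bootstrap has run, finds no /var/lib/kubelet/config.yaml, and systemd lets it crash-loop until the file appears (hence the restart-counter entries later in this log). On a kubeadm-provisioned node that file is normally written by `kubeadm init` or `kubeadm join`, not by hand; the sketch below only illustrates the shape of what the kubelet is looking for, with a placeholder KubeletConfiguration whose single field mirrors the systemd cgroup driver this host's runtime is configured for. Treat the exact fields as assumptions, not as this node's real bootstrap procedure.

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// Path the failing kubelet unit is looking for, taken from the error above.
const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

// Placeholder only: a real config on this node would be generated during
// cluster bootstrap (e.g. by kubeadm) and carry many more fields than this.
const placeholderConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# matches the systemd cgroup driver the runtime on this host is set up for
cgroupDriver: systemd
`

func main() {
	if _, err := os.Stat(kubeletConfigPath); err == nil {
		fmt.Println("kubelet config already present, nothing to do")
		return
	} else if !os.IsNotExist(err) {
		log.Fatalf("stat %s: %v", kubeletConfigPath, err)
	}
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatalf("create directory: %v", err)
	}
	if err := os.WriteFile(kubeletConfigPath, []byte(placeholderConfig), 0o644); err != nil {
		log.Fatalf("write placeholder: %v", err)
	}
	fmt.Println("wrote placeholder", kubeletConfigPath)
}
```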
Jul 6 23:31:50.269948 sshd[1933]: Accepted publickey for core from 139.178.89.65 port 43128 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:31:50.270773 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:31:50.274098 systemd-logind[1790]: New session 2 of user core. Jul 6 23:31:50.284838 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:31:50.314389 coreos-metadata[1855]: Jul 06 23:31:50.314 INFO Fetch successful Jul 6 23:31:50.355652 sshd[1935]: Connection closed by 139.178.89.65 port 43128 Jul 6 23:31:50.356242 sshd-session[1933]: pam_unix(sshd:session): session closed for user core Jul 6 23:31:50.370591 systemd[1]: sshd@1-147.75.203.59:22-139.178.89.65:43128.service: Deactivated successfully. Jul 6 23:31:50.372486 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:31:50.373402 systemd-logind[1790]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:31:50.375350 systemd[1]: Started sshd@2-147.75.203.59:22-139.178.89.65:43140.service - OpenSSH per-connection server daemon (139.178.89.65:43140). Jul 6 23:31:50.387657 systemd-logind[1790]: Removed session 2. Jul 6 23:31:50.400461 unknown[1855]: wrote ssh authorized keys file for user: core Jul 6 23:31:50.412745 sshd[1940]: Accepted publickey for core from 139.178.89.65 port 43140 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:31:50.413403 sshd-session[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:31:50.415247 update-ssh-keys[1944]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:31:50.416251 systemd-logind[1790]: New session 3 of user core. Jul 6 23:31:50.416724 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:31:50.428469 systemd[1]: Finished sshkeys.service. Jul 6 23:31:50.447762 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:31:50.465285 coreos-metadata[1761]: Jul 06 23:31:50.465 INFO Fetch successful Jul 6 23:31:50.505646 sshd[1948]: Connection closed by 139.178.89.65 port 43140 Jul 6 23:31:50.505832 sshd-session[1940]: pam_unix(sshd:session): session closed for user core Jul 6 23:31:50.507546 systemd[1]: sshd@2-147.75.203.59:22-139.178.89.65:43140.service: Deactivated successfully. Jul 6 23:31:50.508543 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:31:50.509317 systemd-logind[1790]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:31:50.510040 systemd-logind[1790]: Removed session 3. Jul 6 23:31:50.527931 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:31:50.538824 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jul 6 23:31:50.960062 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 6 23:31:50.974109 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:31:50.984261 systemd[1]: Startup finished in 2.672s (kernel) + 24.007s (initrd) + 9.209s (userspace) = 35.889s. Jul 6 23:31:51.008942 login[1870]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:31:51.012460 systemd-logind[1790]: New session 4 of user core. Jul 6 23:31:51.012965 login[1869]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:31:51.013280 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:31:51.016072 systemd-logind[1790]: New session 5 of user core. 
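The coreos-metadata entries show the Equinix Metal (Packet) metadata service being fetched and the resulting SSH keys written to /home/core/.ssh/authorized_keys. The URL in the sketch below is copied verbatim from the log; the snippet only retrieves the document and reports its size, and the assumption that the response body is JSON is mine, not something the log states.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// URL copied from the coreos-metadata entries in this log.
	const metadataURL = "https://metadata.packet.net/metadata"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(metadataURL)
	if err != nil {
		log.Fatalf("fetch metadata: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read body: %v", err)
	}
	// coreos-metadata parses this document for hostname, addresses and SSH
	// keys; here we only report that the endpoint was reachable.
	fmt.Printf("%s -> %s, %d bytes\n", metadataURL, resp.Status, len(body))
}
```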
Jul 6 23:31:51.016645 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:31:52.096421 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:31:58.549489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:31:58.566798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:31:58.822070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:31:58.824207 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:31:58.866018 kubelet[1995]: E0706 23:31:58.865921 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:31:58.867969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:31:58.868050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:31:58.868225 systemd[1]: kubelet.service: Consumed 155ms CPU time, 121.7M memory peak. Jul 6 23:32:00.522658 systemd[1]: Started sshd@3-147.75.203.59:22-139.178.89.65:42596.service - OpenSSH per-connection server daemon (139.178.89.65:42596). Jul 6 23:32:00.555978 sshd[2013]: Accepted publickey for core from 139.178.89.65 port 42596 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:32:00.556583 sshd-session[2013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:00.559203 systemd-logind[1790]: New session 6 of user core. Jul 6 23:32:00.574726 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:32:00.629249 sshd[2015]: Connection closed by 139.178.89.65 port 42596 Jul 6 23:32:00.629439 sshd-session[2013]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:00.647696 systemd[1]: sshd@3-147.75.203.59:22-139.178.89.65:42596.service: Deactivated successfully. Jul 6 23:32:00.648619 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:32:00.649484 systemd-logind[1790]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:32:00.650236 systemd[1]: Started sshd@4-147.75.203.59:22-139.178.89.65:42612.service - OpenSSH per-connection server daemon (139.178.89.65:42612). Jul 6 23:32:00.650777 systemd-logind[1790]: Removed session 6. Jul 6 23:32:00.685087 sshd[2020]: Accepted publickey for core from 139.178.89.65 port 42612 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:32:00.685733 sshd-session[2020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:00.688320 systemd-logind[1790]: New session 7 of user core. Jul 6 23:32:00.704813 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:32:00.753417 sshd[2023]: Connection closed by 139.178.89.65 port 42612 Jul 6 23:32:00.753971 sshd-session[2020]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:00.776838 systemd[1]: sshd@4-147.75.203.59:22-139.178.89.65:42612.service: Deactivated successfully. Jul 6 23:32:00.780942 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:32:00.783208 systemd-logind[1790]: Session 7 logged out. Waiting for processes to exit. 
Jul 6 23:32:00.809849 systemd[1]: Started sshd@5-147.75.203.59:22-139.178.89.65:42626.service - OpenSSH per-connection server daemon (139.178.89.65:42626). Jul 6 23:32:00.810410 systemd-logind[1790]: Removed session 7. Jul 6 23:32:00.849645 sshd[2028]: Accepted publickey for core from 139.178.89.65 port 42626 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:32:00.850345 sshd-session[2028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:00.853426 systemd-logind[1790]: New session 8 of user core. Jul 6 23:32:00.865784 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:32:00.918599 sshd[2032]: Connection closed by 139.178.89.65 port 42626 Jul 6 23:32:00.918741 sshd-session[2028]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:00.931949 systemd[1]: sshd@5-147.75.203.59:22-139.178.89.65:42626.service: Deactivated successfully. Jul 6 23:32:00.932849 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:32:00.933395 systemd-logind[1790]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:32:00.934629 systemd[1]: Started sshd@6-147.75.203.59:22-139.178.89.65:42642.service - OpenSSH per-connection server daemon (139.178.89.65:42642). Jul 6 23:32:00.935255 systemd-logind[1790]: Removed session 8. Jul 6 23:32:00.980083 sshd[2037]: Accepted publickey for core from 139.178.89.65 port 42642 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:32:00.981013 sshd-session[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:00.984824 systemd-logind[1790]: New session 9 of user core. Jul 6 23:32:01.002961 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:32:01.063794 sudo[2041]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:32:01.063946 sudo[2041]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:32:01.081143 sudo[2041]: pam_unix(sudo:session): session closed for user root Jul 6 23:32:01.083975 sshd[2040]: Connection closed by 139.178.89.65 port 42642 Jul 6 23:32:01.084893 sshd-session[2037]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:01.104644 systemd[1]: sshd@6-147.75.203.59:22-139.178.89.65:42642.service: Deactivated successfully. Jul 6 23:32:01.108699 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:32:01.111001 systemd-logind[1790]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:32:01.129859 systemd[1]: Started sshd@7-147.75.203.59:22-139.178.89.65:42646.service - OpenSSH per-connection server daemon (139.178.89.65:42646). Jul 6 23:32:01.130486 systemd-logind[1790]: Removed session 9. Jul 6 23:32:01.161013 sshd[2046]: Accepted publickey for core from 139.178.89.65 port 42646 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:32:01.161619 sshd-session[2046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:01.164318 systemd-logind[1790]: New session 10 of user core. Jul 6 23:32:01.173804 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 6 23:32:01.227053 sudo[2051]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:32:01.227454 sudo[2051]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:32:01.232951 sudo[2051]: pam_unix(sudo:session): session closed for user root Jul 6 23:32:01.245891 sudo[2050]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:32:01.246826 sudo[2050]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:32:01.282387 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:32:01.357137 augenrules[2073]: No rules Jul 6 23:32:01.358681 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:32:01.359254 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:32:01.361416 sudo[2050]: pam_unix(sudo:session): session closed for user root Jul 6 23:32:01.364070 sshd[2049]: Connection closed by 139.178.89.65 port 42646 Jul 6 23:32:01.364879 sshd-session[2046]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:01.387046 systemd[1]: sshd@7-147.75.203.59:22-139.178.89.65:42646.service: Deactivated successfully. Jul 6 23:32:01.391099 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:32:01.394761 systemd-logind[1790]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:32:01.417485 systemd[1]: Started sshd@8-147.75.203.59:22-139.178.89.65:42658.service - OpenSSH per-connection server daemon (139.178.89.65:42658). Jul 6 23:32:01.420851 systemd-logind[1790]: Removed session 10. Jul 6 23:32:01.483960 sshd[2081]: Accepted publickey for core from 139.178.89.65 port 42658 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:32:01.484584 sshd-session[2081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:32:01.487356 systemd-logind[1790]: New session 11 of user core. Jul 6 23:32:01.497826 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:32:01.547641 sudo[2086]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:32:01.547848 sudo[2086]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:32:01.878849 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:32:01.878907 (dockerd)[2111]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:32:02.251184 dockerd[2111]: time="2025-07-06T23:32:02.251120731Z" level=info msg="Starting up" Jul 6 23:32:02.329968 dockerd[2111]: time="2025-07-06T23:32:02.329938441Z" level=info msg="Loading containers: start." Jul 6 23:32:02.443597 kernel: Initializing XFRM netlink socket Jul 6 23:32:02.458333 systemd-timesyncd[1719]: Network configuration changed, trying to establish connection. Jul 6 23:32:02.514469 systemd-networkd[1717]: docker0: Link UP Jul 6 23:32:02.556517 dockerd[2111]: time="2025-07-06T23:32:02.556469986Z" level=info msg="Loading containers: done." 
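dockerd is now loading containers; the next entries show it finish initialization and expose its API on /run/docker.sock (the unit-file note later in the log confirms the legacy /var/run path was rewritten to /run). A hedged sketch of pinging that socket with the Docker Engine Go SDK follows; the explicit WithHost option is only there to make the socket path visible, and should be equivalent to the SDK default of unix:///var/run/docker.sock on hosts where /var/run is the usual symlink to /run.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Socket path spelled out for clarity; it matches where this daemon
	// reports "API listen on /run/docker.sock".
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatalf("create client: %v", err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatalf("ping daemon: %v", err)
	}
	fmt.Printf("docker daemon up, API version %s, builder %s\n",
		ping.APIVersion, ping.BuilderVersion)
}
```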
Jul 6 23:32:02.563969 dockerd[2111]: time="2025-07-06T23:32:02.563924880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:32:02.564183 dockerd[2111]: time="2025-07-06T23:32:02.564045011Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:32:02.564228 dockerd[2111]: time="2025-07-06T23:32:02.564216169Z" level=info msg="Daemon has completed initialization" Jul 6 23:32:02.577237 dockerd[2111]: time="2025-07-06T23:32:02.577214192Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:32:02.577303 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:32:02.702643 systemd-timesyncd[1719]: Contacted time server [2607:f1c0:f04e:fd00::1]:123 (2.flatcar.pool.ntp.org). Jul 6 23:32:02.702681 systemd-timesyncd[1719]: Initial clock synchronization to Sun 2025-07-06 23:32:02.924619 UTC. Jul 6 23:32:03.652043 containerd[1805]: time="2025-07-06T23:32:03.651911525Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:32:04.313137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221822533.mount: Deactivated successfully. Jul 6 23:32:05.039045 containerd[1805]: time="2025-07-06T23:32:05.038989868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:05.039266 containerd[1805]: time="2025-07-06T23:32:05.039145779Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 6 23:32:05.039692 containerd[1805]: time="2025-07-06T23:32:05.039651306Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:05.041600 containerd[1805]: time="2025-07-06T23:32:05.041551716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:05.042084 containerd[1805]: time="2025-07-06T23:32:05.042052128Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.390066771s" Jul 6 23:32:05.042084 containerd[1805]: time="2025-07-06T23:32:05.042070866Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 6 23:32:05.042397 containerd[1805]: time="2025-07-06T23:32:05.042384024Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:32:06.081879 containerd[1805]: time="2025-07-06T23:32:06.081828525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:06.082098 containerd[1805]: time="2025-07-06T23:32:06.082062899Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active 
requests=0, bytes read=24783912" Jul 6 23:32:06.082431 containerd[1805]: time="2025-07-06T23:32:06.082395534Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:06.083987 containerd[1805]: time="2025-07-06T23:32:06.083951915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:06.084637 containerd[1805]: time="2025-07-06T23:32:06.084585113Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.04218431s" Jul 6 23:32:06.084637 containerd[1805]: time="2025-07-06T23:32:06.084600294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 6 23:32:06.084893 containerd[1805]: time="2025-07-06T23:32:06.084851321Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:32:07.018878 containerd[1805]: time="2025-07-06T23:32:07.018824890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:07.019087 containerd[1805]: time="2025-07-06T23:32:07.019034144Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 6 23:32:07.019414 containerd[1805]: time="2025-07-06T23:32:07.019375920Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:07.021024 containerd[1805]: time="2025-07-06T23:32:07.020985913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:07.022150 containerd[1805]: time="2025-07-06T23:32:07.022109596Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 937.243958ms" Jul 6 23:32:07.022150 containerd[1805]: time="2025-07-06T23:32:07.022124358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 6 23:32:07.022435 containerd[1805]: time="2025-07-06T23:32:07.022423676Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:32:07.848922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688988025.mount: Deactivated successfully. 
Jul 6 23:32:08.050178 containerd[1805]: time="2025-07-06T23:32:08.050153058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:08.050393 containerd[1805]: time="2025-07-06T23:32:08.050338447Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 6 23:32:08.050659 containerd[1805]: time="2025-07-06T23:32:08.050617569Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:08.051715 containerd[1805]: time="2025-07-06T23:32:08.051667631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:08.052097 containerd[1805]: time="2025-07-06T23:32:08.052053006Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.029614934s" Jul 6 23:32:08.052097 containerd[1805]: time="2025-07-06T23:32:08.052068137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 6 23:32:08.052349 containerd[1805]: time="2025-07-06T23:32:08.052336781Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:32:08.603805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275703834.mount: Deactivated successfully. Jul 6 23:32:09.047405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:32:09.055748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:09.194725 containerd[1805]: time="2025-07-06T23:32:09.194659625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:09.252963 containerd[1805]: time="2025-07-06T23:32:09.252822207Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:32:09.291004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:32:09.294788 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:32:09.296241 containerd[1805]: time="2025-07-06T23:32:09.296199040Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:09.298198 containerd[1805]: time="2025-07-06T23:32:09.298134925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:09.299018 containerd[1805]: time="2025-07-06T23:32:09.298980536Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.246626603s" Jul 6 23:32:09.299018 containerd[1805]: time="2025-07-06T23:32:09.299015244Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:32:09.299256 containerd[1805]: time="2025-07-06T23:32:09.299243478Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:32:09.316107 kubelet[2455]: E0706 23:32:09.316084 2455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:32:09.317194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:32:09.317274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:32:09.317485 systemd[1]: kubelet.service: Consumed 132ms CPU time, 120.7M memory peak. Jul 6 23:32:09.798840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571860228.mount: Deactivated successfully. 
Jul 6 23:32:09.800096 containerd[1805]: time="2025-07-06T23:32:09.800066713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:09.800337 containerd[1805]: time="2025-07-06T23:32:09.800299393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:32:09.800787 containerd[1805]: time="2025-07-06T23:32:09.800726255Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:09.801973 containerd[1805]: time="2025-07-06T23:32:09.801932204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:09.802474 containerd[1805]: time="2025-07-06T23:32:09.802437206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 503.178296ms" Jul 6 23:32:09.802474 containerd[1805]: time="2025-07-06T23:32:09.802470401Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:32:09.802918 containerd[1805]: time="2025-07-06T23:32:09.802886621Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:32:10.276243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3124303050.mount: Deactivated successfully. Jul 6 23:32:11.408603 containerd[1805]: time="2025-07-06T23:32:11.408545424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:11.408822 containerd[1805]: time="2025-07-06T23:32:11.408782423Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 6 23:32:11.409243 containerd[1805]: time="2025-07-06T23:32:11.409207668Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:11.411387 containerd[1805]: time="2025-07-06T23:32:11.411345006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:11.411998 containerd[1805]: time="2025-07-06T23:32:11.411951057Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.609033457s" Jul 6 23:32:11.411998 containerd[1805]: time="2025-07-06T23:32:11.411972159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 6 23:32:13.460435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
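The PullImage/Pulled sequence above covers the full control-plane image set for this node: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.32.6, coredns v1.11.3, pause 3.10 and etcd 3.5.16-0, each resolved by digest and unpacked into the overlayfs snapshotter. Those pulls were driven through containerd; the sketch below shows an equivalent pull done directly against the containerd Go client, again assuming the "k8s.io" namespace, and uses the small pause image so it is cheap to repeat.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// Same namespace assumption as before: CRI-managed images live in "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the images the log shows being pulled; pause is the smallest.
	const ref = "registry.k8s.io/pause:3.10"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull %s: %v", ref, err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatalf("image size: %v", err)
	}
	fmt.Printf("pulled %s, %d bytes of content\n", img.Name(), size)
}
```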
Jul 6 23:32:13.460621 systemd[1]: kubelet.service: Consumed 132ms CPU time, 120.7M memory peak. Jul 6 23:32:13.478882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:13.495495 systemd[1]: Reload requested from client PID 2584 ('systemctl') (unit session-11.scope)... Jul 6 23:32:13.495519 systemd[1]: Reloading... Jul 6 23:32:13.541683 zram_generator::config[2630]: No configuration found. Jul 6 23:32:13.616798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:32:13.701156 systemd[1]: Reloading finished in 205 ms. Jul 6 23:32:13.732540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:13.734491 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:13.734797 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:32:13.734907 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:13.734926 systemd[1]: kubelet.service: Consumed 57ms CPU time, 98.2M memory peak. Jul 6 23:32:13.735796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:13.994512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:13.996624 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:32:14.015967 kubelet[2700]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:32:14.015967 kubelet[2700]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:32:14.015967 kubelet[2700]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:32:14.016178 kubelet[2700]: I0706 23:32:14.015971 2700 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:32:14.266482 kubelet[2700]: I0706 23:32:14.266368 2700 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:32:14.266649 kubelet[2700]: I0706 23:32:14.266415 2700 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:32:14.266966 kubelet[2700]: I0706 23:32:14.266917 2700 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:32:14.290829 kubelet[2700]: E0706 23:32:14.290786 2700 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.203.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.203.59:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:32:14.291427 kubelet[2700]: I0706 23:32:14.291389 2700 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:32:14.297676 kubelet[2700]: E0706 23:32:14.297635 2700 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:32:14.297676 kubelet[2700]: I0706 23:32:14.297648 2700 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:32:14.306079 kubelet[2700]: I0706 23:32:14.306041 2700 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:32:14.307140 kubelet[2700]: I0706 23:32:14.307095 2700 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:32:14.307378 kubelet[2700]: I0706 23:32:14.307113 2700 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-901fa91dbf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:32:14.307486 kubelet[2700]: I0706 23:32:14.307382 2700 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:32:14.307486 kubelet[2700]: I0706 23:32:14.307390 2700 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:32:14.307486 kubelet[2700]: I0706 23:32:14.307470 2700 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:32:14.310485 kubelet[2700]: I0706 23:32:14.310445 2700 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:32:14.311871 kubelet[2700]: I0706 23:32:14.311827 2700 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:32:14.311871 kubelet[2700]: I0706 23:32:14.311846 2700 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:32:14.311871 kubelet[2700]: I0706 23:32:14.311853 2700 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:32:14.314048 kubelet[2700]: I0706 23:32:14.313939 2700 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:32:14.314420 kubelet[2700]: I0706 23:32:14.314413 2700 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:32:14.315176 kubelet[2700]: W0706 23:32:14.315168 2700 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 6 23:32:14.315205 kubelet[2700]: W0706 23:32:14.315179 2700 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.203.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-901fa91dbf&limit=500&resourceVersion=0": dial tcp 147.75.203.59:6443: connect: connection refused Jul 6 23:32:14.315243 kubelet[2700]: E0706 23:32:14.315208 2700 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.203.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-901fa91dbf&limit=500&resourceVersion=0\": dial tcp 147.75.203.59:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:32:14.316185 kubelet[2700]: W0706 23:32:14.316134 2700 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.203.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.203.59:6443: connect: connection refused Jul 6 23:32:14.316225 kubelet[2700]: E0706 23:32:14.316188 2700 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.203.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.203.59:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:32:14.317140 kubelet[2700]: I0706 23:32:14.317103 2700 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:32:14.317140 kubelet[2700]: I0706 23:32:14.317119 2700 server.go:1287] "Started kubelet" Jul 6 23:32:14.317244 kubelet[2700]: I0706 23:32:14.317207 2700 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:32:14.317336 kubelet[2700]: I0706 23:32:14.317311 2700 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:32:14.317470 kubelet[2700]: I0706 23:32:14.317461 2700 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:32:14.318052 kubelet[2700]: I0706 23:32:14.318015 2700 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:32:14.318052 kubelet[2700]: I0706 23:32:14.318033 2700 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:32:14.318129 kubelet[2700]: I0706 23:32:14.318068 2700 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:32:14.318129 kubelet[2700]: E0706 23:32:14.318079 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:14.318129 kubelet[2700]: I0706 23:32:14.318093 2700 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:32:14.318290 kubelet[2700]: E0706 23:32:14.318264 2700 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-901fa91dbf?timeout=10s\": dial tcp 147.75.203.59:6443: connect: connection refused" interval="200ms" Jul 6 23:32:14.318328 kubelet[2700]: I0706 23:32:14.318302 2700 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:32:14.318364 kubelet[2700]: I0706 23:32:14.318354 2700 reconciler.go:26] "Reconciler: start to sync state" Jul 6 
23:32:14.318392 kubelet[2700]: W0706 23:32:14.318337 2700 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.203.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.59:6443: connect: connection refused Jul 6 23:32:14.318392 kubelet[2700]: E0706 23:32:14.318381 2700 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.203.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.203.59:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:32:14.318475 kubelet[2700]: I0706 23:32:14.318465 2700 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:32:14.322194 kubelet[2700]: E0706 23:32:14.322162 2700 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:32:14.322636 kubelet[2700]: I0706 23:32:14.322627 2700 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:32:14.322636 kubelet[2700]: I0706 23:32:14.322635 2700 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:32:14.324013 kubelet[2700]: E0706 23:32:14.322854 2700 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.203.59:6443/api/v1/namespaces/default/events\": dial tcp 147.75.203.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.1-a-901fa91dbf.184fcd81129c20e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.1-a-901fa91dbf,UID:ci-4230.2.1-a-901fa91dbf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.1-a-901fa91dbf,},FirstTimestamp:2025-07-06 23:32:14.317109478 +0000 UTC m=+0.318657671,LastTimestamp:2025-07-06 23:32:14.317109478 +0000 UTC m=+0.318657671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.1-a-901fa91dbf,}" Jul 6 23:32:14.330399 kubelet[2700]: I0706 23:32:14.330388 2700 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:32:14.330399 kubelet[2700]: I0706 23:32:14.330397 2700 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:32:14.330473 kubelet[2700]: I0706 23:32:14.330405 2700 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:32:14.330663 kubelet[2700]: I0706 23:32:14.330650 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:32:14.331221 kubelet[2700]: I0706 23:32:14.331213 2700 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:32:14.331241 kubelet[2700]: I0706 23:32:14.331226 2700 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:32:14.331241 kubelet[2700]: I0706 23:32:14.331237 2700 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 6 23:32:14.331273 kubelet[2700]: I0706 23:32:14.331241 2700 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:32:14.331273 kubelet[2700]: E0706 23:32:14.331265 2700 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:32:14.331351 kubelet[2700]: I0706 23:32:14.331276 2700 policy_none.go:49] "None policy: Start" Jul 6 23:32:14.331351 kubelet[2700]: I0706 23:32:14.331290 2700 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:32:14.331351 kubelet[2700]: I0706 23:32:14.331300 2700 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:32:14.331518 kubelet[2700]: W0706 23:32:14.331505 2700 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.203.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.59:6443: connect: connection refused Jul 6 23:32:14.331550 kubelet[2700]: E0706 23:32:14.331539 2700 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.203.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.203.59:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:32:14.333699 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:32:14.346395 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:32:14.348377 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:32:14.360308 kubelet[2700]: I0706 23:32:14.360268 2700 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:32:14.360446 kubelet[2700]: I0706 23:32:14.360435 2700 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:32:14.360494 kubelet[2700]: I0706 23:32:14.360447 2700 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:32:14.360627 kubelet[2700]: I0706 23:32:14.360584 2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:32:14.361077 kubelet[2700]: E0706 23:32:14.361034 2700 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:32:14.361077 kubelet[2700]: E0706 23:32:14.361070 2700 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:14.453827 systemd[1]: Created slice kubepods-burstable-podd46e610cb0baef577224ecc0a267c9f9.slice - libcontainer container kubepods-burstable-podd46e610cb0baef577224ecc0a267c9f9.slice. 
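Every "connection refused" against https://147.75.203.59:6443 in the entries above is the kubelet probing an API server that does not exist yet; it is about to create the control-plane static pods from /etc/kubernetes/manifests, so these errors resolve themselves once those pods come up. The sketch below is a stand-alone health probe against the same endpoint: skipping TLS verification is a shortcut so the example stays self-contained, whereas a real check would trust the cluster CA at /etc/kubernetes/pki/ca.crt, and whether /healthz answers anonymous requests depends on the cluster's RBAC defaults.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the kubelet errors above.
	const healthz = "https://147.75.203.59:6443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Shortcut so the sketch is self-contained; a real probe would
			// load the cluster CA from /etc/kubernetes/pki/ca.crt instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(healthz)
	if err != nil {
		// The same "connection refused" the kubelet logs until the static
		// kube-apiserver pod is running.
		log.Fatalf("probe: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s (%s)\n", healthz, resp.Status, string(body))
}
```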
Jul 6 23:32:14.464329 kubelet[2700]: I0706 23:32:14.464267 2700 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.465090 kubelet[2700]: E0706 23:32:14.464991 2700 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.203.59:6443/api/v1/nodes\": dial tcp 147.75.203.59:6443: connect: connection refused" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.475571 kubelet[2700]: E0706 23:32:14.475481 2700 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.482674 systemd[1]: Created slice kubepods-burstable-pod5768667644f003ff894fe7fb54ad227c.slice - libcontainer container kubepods-burstable-pod5768667644f003ff894fe7fb54ad227c.slice. Jul 6 23:32:14.496561 kubelet[2700]: E0706 23:32:14.496476 2700 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.504233 systemd[1]: Created slice kubepods-burstable-pod110c0f8a1ed7c97d5d6b39d14047f7db.slice - libcontainer container kubepods-burstable-pod110c0f8a1ed7c97d5d6b39d14047f7db.slice. Jul 6 23:32:14.508964 kubelet[2700]: E0706 23:32:14.508878 2700 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.519890 kubelet[2700]: E0706 23:32:14.519690 2700 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-901fa91dbf?timeout=10s\": dial tcp 147.75.203.59:6443: connect: connection refused" interval="400ms" Jul 6 23:32:14.619246 kubelet[2700]: I0706 23:32:14.619121 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5768667644f003ff894fe7fb54ad227c-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-901fa91dbf\" (UID: \"5768667644f003ff894fe7fb54ad227c\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.619246 kubelet[2700]: I0706 23:32:14.619224 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/110c0f8a1ed7c97d5d6b39d14047f7db-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-901fa91dbf\" (UID: \"110c0f8a1ed7c97d5d6b39d14047f7db\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.619692 kubelet[2700]: I0706 23:32:14.619323 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.619692 kubelet[2700]: I0706 23:32:14.619417 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.619692 kubelet[2700]: I0706 23:32:14.619483 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.619692 kubelet[2700]: I0706 23:32:14.619575 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/110c0f8a1ed7c97d5d6b39d14047f7db-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-901fa91dbf\" (UID: \"110c0f8a1ed7c97d5d6b39d14047f7db\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.619692 kubelet[2700]: I0706 23:32:14.619629 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/110c0f8a1ed7c97d5d6b39d14047f7db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-901fa91dbf\" (UID: \"110c0f8a1ed7c97d5d6b39d14047f7db\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.620167 kubelet[2700]: I0706 23:32:14.619682 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.620167 kubelet[2700]: I0706 23:32:14.619733 2700 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.669812 kubelet[2700]: I0706 23:32:14.669709 2700 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.670541 kubelet[2700]: E0706 23:32:14.670410 2700 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.203.59:6443/api/v1/nodes\": dial tcp 147.75.203.59:6443: connect: connection refused" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:14.778845 containerd[1805]: time="2025-07-06T23:32:14.778587015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-901fa91dbf,Uid:d46e610cb0baef577224ecc0a267c9f9,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:14.798161 containerd[1805]: time="2025-07-06T23:32:14.798120616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-901fa91dbf,Uid:5768667644f003ff894fe7fb54ad227c,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:14.810948 containerd[1805]: time="2025-07-06T23:32:14.810831803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-901fa91dbf,Uid:110c0f8a1ed7c97d5d6b39d14047f7db,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:14.921108 kubelet[2700]: E0706 23:32:14.920980 2700 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://147.75.203.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.1-a-901fa91dbf?timeout=10s\": dial tcp 147.75.203.59:6443: connect: connection refused" interval="800ms" Jul 6 23:32:15.075822 kubelet[2700]: I0706 23:32:15.075712 2700 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:15.076696 kubelet[2700]: E0706 23:32:15.076400 2700 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.203.59:6443/api/v1/nodes\": dial tcp 147.75.203.59:6443: connect: connection refused" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:15.263749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844865837.mount: Deactivated successfully. Jul 6 23:32:15.265179 containerd[1805]: time="2025-07-06T23:32:15.265128862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:32:15.266198 containerd[1805]: time="2025-07-06T23:32:15.266163455Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:32:15.266461 containerd[1805]: time="2025-07-06T23:32:15.266424908Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:32:15.266926 containerd[1805]: time="2025-07-06T23:32:15.266880223Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:32:15.267199 containerd[1805]: time="2025-07-06T23:32:15.267154600Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:32:15.267796 containerd[1805]: time="2025-07-06T23:32:15.267752291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:32:15.267960 containerd[1805]: time="2025-07-06T23:32:15.267916023Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:32:15.269711 containerd[1805]: time="2025-07-06T23:32:15.269669883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.856423ms" Jul 6 23:32:15.270363 containerd[1805]: time="2025-07-06T23:32:15.270306593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:32:15.270872 containerd[1805]: time="2025-07-06T23:32:15.270828080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" 
in 472.660263ms" Jul 6 23:32:15.272298 containerd[1805]: time="2025-07-06T23:32:15.272285086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 461.279151ms" Jul 6 23:32:15.315010 kubelet[2700]: W0706 23:32:15.314950 2700 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.203.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-901fa91dbf&limit=500&resourceVersion=0": dial tcp 147.75.203.59:6443: connect: connection refused Jul 6 23:32:15.315010 kubelet[2700]: E0706 23:32:15.314992 2700 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.203.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.1-a-901fa91dbf&limit=500&resourceVersion=0\": dial tcp 147.75.203.59:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:32:15.342878 kubelet[2700]: W0706 23:32:15.342790 2700 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.203.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.59:6443: connect: connection refused Jul 6 23:32:15.342878 kubelet[2700]: E0706 23:32:15.342825 2700 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.203.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.203.59:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:32:15.373033 containerd[1805]: time="2025-07-06T23:32:15.372780721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:15.373033 containerd[1805]: time="2025-07-06T23:32:15.373016583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:15.373033 containerd[1805]: time="2025-07-06T23:32:15.372800005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:15.373033 containerd[1805]: time="2025-07-06T23:32:15.373025541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:15.373033 containerd[1805]: time="2025-07-06T23:32:15.373031510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:15.373239 containerd[1805]: time="2025-07-06T23:32:15.373042820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:15.373239 containerd[1805]: time="2025-07-06T23:32:15.373072387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:15.373239 containerd[1805]: time="2025-07-06T23:32:15.373136400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:15.373495 containerd[1805]: time="2025-07-06T23:32:15.373466531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:15.373517 containerd[1805]: time="2025-07-06T23:32:15.373493107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:15.373517 containerd[1805]: time="2025-07-06T23:32:15.373501530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:15.373703 containerd[1805]: time="2025-07-06T23:32:15.373687175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:15.391886 systemd[1]: Started cri-containerd-46719def2307447f7085fc56d822b329864ec52af4a5929e196c9da2764a4c3e.scope - libcontainer container 46719def2307447f7085fc56d822b329864ec52af4a5929e196c9da2764a4c3e. Jul 6 23:32:15.392792 systemd[1]: Started cri-containerd-a2412be119ae80e47023b7378c60b15befc97b17aecf8af736f1371cdd48fbdf.scope - libcontainer container a2412be119ae80e47023b7378c60b15befc97b17aecf8af736f1371cdd48fbdf. Jul 6 23:32:15.393577 systemd[1]: Started cri-containerd-fdc40af6816cd9caf65f8b72fe5992ce4ca870992dd56ee6e96b409dacae4499.scope - libcontainer container fdc40af6816cd9caf65f8b72fe5992ce4ca870992dd56ee6e96b409dacae4499. Jul 6 23:32:15.418153 containerd[1805]: time="2025-07-06T23:32:15.418118162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.1-a-901fa91dbf,Uid:5768667644f003ff894fe7fb54ad227c,Namespace:kube-system,Attempt:0,} returns sandbox id \"46719def2307447f7085fc56d822b329864ec52af4a5929e196c9da2764a4c3e\"" Jul 6 23:32:15.418333 containerd[1805]: time="2025-07-06T23:32:15.418312898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.1-a-901fa91dbf,Uid:d46e610cb0baef577224ecc0a267c9f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2412be119ae80e47023b7378c60b15befc97b17aecf8af736f1371cdd48fbdf\"" Jul 6 23:32:15.419907 containerd[1805]: time="2025-07-06T23:32:15.419890016Z" level=info msg="CreateContainer within sandbox \"a2412be119ae80e47023b7378c60b15befc97b17aecf8af736f1371cdd48fbdf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:32:15.419971 containerd[1805]: time="2025-07-06T23:32:15.419890618Z" level=info msg="CreateContainer within sandbox \"46719def2307447f7085fc56d822b329864ec52af4a5929e196c9da2764a4c3e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:32:15.420154 containerd[1805]: time="2025-07-06T23:32:15.420139691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.1-a-901fa91dbf,Uid:110c0f8a1ed7c97d5d6b39d14047f7db,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdc40af6816cd9caf65f8b72fe5992ce4ca870992dd56ee6e96b409dacae4499\"" Jul 6 23:32:15.421144 containerd[1805]: time="2025-07-06T23:32:15.421129349Z" level=info msg="CreateContainer within sandbox \"fdc40af6816cd9caf65f8b72fe5992ce4ca870992dd56ee6e96b409dacae4499\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:32:15.423433 kubelet[2700]: W0706 23:32:15.423408 2700 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://147.75.203.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.59:6443: connect: connection refused Jul 6 23:32:15.423466 kubelet[2700]: E0706 23:32:15.423444 2700 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.203.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.203.59:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:32:15.425879 containerd[1805]: time="2025-07-06T23:32:15.425867820Z" level=info msg="CreateContainer within sandbox \"46719def2307447f7085fc56d822b329864ec52af4a5929e196c9da2764a4c3e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9705e46eb65a2b76e176c25a4b5178432d407b250f4342856447733ee82e3a87\"" Jul 6 23:32:15.426087 containerd[1805]: time="2025-07-06T23:32:15.426074634Z" level=info msg="StartContainer for \"9705e46eb65a2b76e176c25a4b5178432d407b250f4342856447733ee82e3a87\"" Jul 6 23:32:15.427945 containerd[1805]: time="2025-07-06T23:32:15.427901602Z" level=info msg="CreateContainer within sandbox \"a2412be119ae80e47023b7378c60b15befc97b17aecf8af736f1371cdd48fbdf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a4ec76cb95740ffe48fc9a2cf4739ce23a2200f6f53ef07b62ddc4409fb2e33\"" Jul 6 23:32:15.428095 containerd[1805]: time="2025-07-06T23:32:15.428055356Z" level=info msg="StartContainer for \"7a4ec76cb95740ffe48fc9a2cf4739ce23a2200f6f53ef07b62ddc4409fb2e33\"" Jul 6 23:32:15.428880 containerd[1805]: time="2025-07-06T23:32:15.428834423Z" level=info msg="CreateContainer within sandbox \"fdc40af6816cd9caf65f8b72fe5992ce4ca870992dd56ee6e96b409dacae4499\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58959e29a9ad6b5cf16162078962e66565343b1841f840ead4f28906d6e4d4d9\"" Jul 6 23:32:15.429011 containerd[1805]: time="2025-07-06T23:32:15.428972747Z" level=info msg="StartContainer for \"58959e29a9ad6b5cf16162078962e66565343b1841f840ead4f28906d6e4d4d9\"" Jul 6 23:32:15.450728 systemd[1]: Started cri-containerd-9705e46eb65a2b76e176c25a4b5178432d407b250f4342856447733ee82e3a87.scope - libcontainer container 9705e46eb65a2b76e176c25a4b5178432d407b250f4342856447733ee82e3a87. Jul 6 23:32:15.452931 systemd[1]: Started cri-containerd-58959e29a9ad6b5cf16162078962e66565343b1841f840ead4f28906d6e4d4d9.scope - libcontainer container 58959e29a9ad6b5cf16162078962e66565343b1841f840ead4f28906d6e4d4d9. Jul 6 23:32:15.453539 systemd[1]: Started cri-containerd-7a4ec76cb95740ffe48fc9a2cf4739ce23a2200f6f53ef07b62ddc4409fb2e33.scope - libcontainer container 7a4ec76cb95740ffe48fc9a2cf4739ce23a2200f6f53ef07b62ddc4409fb2e33. 
Jul 6 23:32:15.474493 containerd[1805]: time="2025-07-06T23:32:15.474468019Z" level=info msg="StartContainer for \"9705e46eb65a2b76e176c25a4b5178432d407b250f4342856447733ee82e3a87\" returns successfully" Jul 6 23:32:15.475866 containerd[1805]: time="2025-07-06T23:32:15.475848239Z" level=info msg="StartContainer for \"58959e29a9ad6b5cf16162078962e66565343b1841f840ead4f28906d6e4d4d9\" returns successfully" Jul 6 23:32:15.476870 containerd[1805]: time="2025-07-06T23:32:15.476854480Z" level=info msg="StartContainer for \"7a4ec76cb95740ffe48fc9a2cf4739ce23a2200f6f53ef07b62ddc4409fb2e33\" returns successfully" Jul 6 23:32:15.878116 kubelet[2700]: I0706 23:32:15.878065 2700 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:16.214682 kubelet[2700]: E0706 23:32:16.214609 2700 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:16.327712 kubelet[2700]: I0706 23:32:16.327675 2700 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:16.327712 kubelet[2700]: E0706 23:32:16.327716 2700 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.1-a-901fa91dbf\": node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:16.337241 kubelet[2700]: E0706 23:32:16.337038 2700 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:16.337719 kubelet[2700]: E0706 23:32:16.337697 2700 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:16.339195 kubelet[2700]: E0706 23:32:16.339181 2700 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:16.346996 kubelet[2700]: E0706 23:32:16.346976 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:16.448012 kubelet[2700]: E0706 23:32:16.447932 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:16.549122 kubelet[2700]: E0706 23:32:16.549053 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:16.649891 kubelet[2700]: E0706 23:32:16.649761 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:16.750114 kubelet[2700]: E0706 23:32:16.750001 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:16.850412 kubelet[2700]: E0706 23:32:16.850156 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:16.950907 kubelet[2700]: E0706 23:32:16.950787 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.051149 kubelet[2700]: E0706 23:32:17.051066 2700 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.151878 kubelet[2700]: E0706 23:32:17.151667 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.252702 kubelet[2700]: E0706 23:32:17.252598 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.345014 kubelet[2700]: E0706 23:32:17.344937 2700 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:17.345270 kubelet[2700]: E0706 23:32:17.345093 2700 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:17.353347 kubelet[2700]: E0706 23:32:17.353254 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.454448 kubelet[2700]: E0706 23:32:17.454199 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.555096 kubelet[2700]: E0706 23:32:17.555036 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.655786 kubelet[2700]: E0706 23:32:17.655716 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.756843 kubelet[2700]: E0706 23:32:17.756660 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.857841 kubelet[2700]: E0706 23:32:17.857725 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:17.958073 kubelet[2700]: E0706 23:32:17.957952 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:18.058750 kubelet[2700]: E0706 23:32:18.058701 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:18.159655 kubelet[2700]: E0706 23:32:18.159563 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:18.260770 kubelet[2700]: E0706 23:32:18.260692 2700 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:18.315088 kubelet[2700]: I0706 23:32:18.314863 2700 apiserver.go:52] "Watching apiserver" Jul 6 23:32:18.318450 kubelet[2700]: I0706 23:32:18.318350 2700 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:18.318450 kubelet[2700]: I0706 23:32:18.318395 2700 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:32:18.332120 kubelet[2700]: W0706 23:32:18.332074 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:18.332330 kubelet[2700]: I0706 23:32:18.332295 2700 kubelet.go:3194] "Creating a 
mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:18.338647 kubelet[2700]: W0706 23:32:18.338578 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:18.338901 kubelet[2700]: I0706 23:32:18.338825 2700 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:18.342889 kubelet[2700]: I0706 23:32:18.342840 2700 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:18.347706 kubelet[2700]: W0706 23:32:18.347645 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:18.349347 kubelet[2700]: W0706 23:32:18.349303 2700 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:18.349502 kubelet[2700]: E0706 23:32:18.349411 2700 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-901fa91dbf\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:18.475658 systemd[1]: Reload requested from client PID 3014 ('systemctl') (unit session-11.scope)... Jul 6 23:32:18.475690 systemd[1]: Reloading... Jul 6 23:32:18.526610 zram_generator::config[3060]: No configuration found. Jul 6 23:32:18.595755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:32:18.688659 systemd[1]: Reloading finished in 212 ms. Jul 6 23:32:18.718321 kubelet[2700]: I0706 23:32:18.718202 2700 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:32:18.718341 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:18.732750 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:32:18.732932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:18.732973 systemd[1]: kubelet.service: Consumed 739ms CPU time, 144.3M memory peak. Jul 6 23:32:18.745859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:32:19.033831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:32:19.038059 (kubelet)[3124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:32:19.073561 kubelet[3124]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:32:19.073561 kubelet[3124]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:32:19.073561 kubelet[3124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:32:19.073893 kubelet[3124]: I0706 23:32:19.073610 3124 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:32:19.080230 kubelet[3124]: I0706 23:32:19.080179 3124 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:32:19.080230 kubelet[3124]: I0706 23:32:19.080201 3124 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:32:19.080463 kubelet[3124]: I0706 23:32:19.080431 3124 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:32:19.081664 kubelet[3124]: I0706 23:32:19.081623 3124 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:32:19.083697 kubelet[3124]: I0706 23:32:19.083656 3124 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:32:19.086175 kubelet[3124]: E0706 23:32:19.086127 3124 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:32:19.086175 kubelet[3124]: I0706 23:32:19.086149 3124 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:32:19.094836 kubelet[3124]: I0706 23:32:19.094796 3124 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:32:19.094951 kubelet[3124]: I0706 23:32:19.094909 3124 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:32:19.095048 kubelet[3124]: I0706 23:32:19.094928 3124 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.1-a-901fa91dbf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:32:19.095048 kubelet[3124]: I0706 23:32:19.095030 3124 topology_manager.go:138] "Creating 
topology manager with none policy" Jul 6 23:32:19.095048 kubelet[3124]: I0706 23:32:19.095036 3124 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:32:19.095135 kubelet[3124]: I0706 23:32:19.095065 3124 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:32:19.095199 kubelet[3124]: I0706 23:32:19.095167 3124 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:32:19.095199 kubelet[3124]: I0706 23:32:19.095178 3124 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:32:19.095199 kubelet[3124]: I0706 23:32:19.095188 3124 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:32:19.095199 kubelet[3124]: I0706 23:32:19.095194 3124 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:32:19.095686 kubelet[3124]: I0706 23:32:19.095677 3124 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:32:19.095942 kubelet[3124]: I0706 23:32:19.095936 3124 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:32:19.096164 kubelet[3124]: I0706 23:32:19.096159 3124 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:32:19.096183 kubelet[3124]: I0706 23:32:19.096174 3124 server.go:1287] "Started kubelet" Jul 6 23:32:19.096260 kubelet[3124]: I0706 23:32:19.096210 3124 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:32:19.096260 kubelet[3124]: I0706 23:32:19.096215 3124 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:32:19.096390 kubelet[3124]: I0706 23:32:19.096380 3124 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:32:19.096961 kubelet[3124]: I0706 23:32:19.096954 3124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:32:19.097012 kubelet[3124]: I0706 23:32:19.096979 3124 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:32:19.097012 kubelet[3124]: E0706 23:32:19.097000 3124 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.1-a-901fa91dbf\" not found" Jul 6 23:32:19.097076 kubelet[3124]: I0706 23:32:19.097013 3124 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:32:19.097076 kubelet[3124]: I0706 23:32:19.097046 3124 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:32:19.097135 kubelet[3124]: E0706 23:32:19.097102 3124 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:32:19.097164 kubelet[3124]: I0706 23:32:19.097139 3124 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:32:19.097250 kubelet[3124]: I0706 23:32:19.097239 3124 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:32:19.098778 kubelet[3124]: I0706 23:32:19.098344 3124 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:32:19.098778 kubelet[3124]: I0706 23:32:19.098360 3124 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:32:19.098778 kubelet[3124]: I0706 23:32:19.098431 3124 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:32:19.102327 kubelet[3124]: I0706 23:32:19.102215 3124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:32:19.102885 kubelet[3124]: I0706 23:32:19.102870 3124 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:32:19.102933 kubelet[3124]: I0706 23:32:19.102889 3124 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:32:19.102933 kubelet[3124]: I0706 23:32:19.102905 3124 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:32:19.102933 kubelet[3124]: I0706 23:32:19.102911 3124 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:32:19.102985 kubelet[3124]: E0706 23:32:19.102942 3124 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:32:19.112851 kubelet[3124]: I0706 23:32:19.112803 3124 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:32:19.112851 kubelet[3124]: I0706 23:32:19.112814 3124 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:32:19.112851 kubelet[3124]: I0706 23:32:19.112825 3124 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:32:19.112959 kubelet[3124]: I0706 23:32:19.112913 3124 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:32:19.112959 kubelet[3124]: I0706 23:32:19.112920 3124 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:32:19.112959 kubelet[3124]: I0706 23:32:19.112931 3124 policy_none.go:49] "None policy: Start" Jul 6 23:32:19.112959 kubelet[3124]: I0706 23:32:19.112937 3124 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:32:19.112959 kubelet[3124]: I0706 23:32:19.112942 3124 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:32:19.113047 kubelet[3124]: I0706 23:32:19.112999 3124 state_mem.go:75] "Updated machine memory state" Jul 6 23:32:19.114775 kubelet[3124]: I0706 23:32:19.114766 3124 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:32:19.114860 kubelet[3124]: I0706 23:32:19.114851 3124 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:32:19.114908 kubelet[3124]: I0706 23:32:19.114859 3124 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:32:19.115006 kubelet[3124]: I0706 23:32:19.114942 3124 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:32:19.115265 kubelet[3124]: E0706 23:32:19.115251 3124 
eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 6 23:32:19.204897 kubelet[3124]: I0706 23:32:19.204832 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.205231 kubelet[3124]: I0706 23:32:19.205049 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.205231 kubelet[3124]: I0706 23:32:19.205106 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.212919 kubelet[3124]: W0706 23:32:19.212856 3124 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:19.212919 kubelet[3124]: W0706 23:32:19.212905 3124 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:19.213361 kubelet[3124]: E0706 23:32:19.213009 3124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-901fa91dbf\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.213361 kubelet[3124]: E0706 23:32:19.213068 3124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.1-a-901fa91dbf\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.213361 kubelet[3124]: W0706 23:32:19.213336 3124 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:19.213741 kubelet[3124]: E0706 23:32:19.213442 3124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.222253 kubelet[3124]: I0706 23:32:19.222199 3124 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.237922 kubelet[3124]: I0706 23:32:19.237877 3124 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.238132 kubelet[3124]: I0706 23:32:19.238018 3124 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.398954 kubelet[3124]: I0706 23:32:19.398835 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.399278 kubelet[3124]: I0706 23:32:19.398986 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.399278 kubelet[3124]: I0706 23:32:19.399122 3124 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5768667644f003ff894fe7fb54ad227c-kubeconfig\") pod \"kube-scheduler-ci-4230.2.1-a-901fa91dbf\" (UID: \"5768667644f003ff894fe7fb54ad227c\") " pod="kube-system/kube-scheduler-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.399278 kubelet[3124]: I0706 23:32:19.399216 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/110c0f8a1ed7c97d5d6b39d14047f7db-ca-certs\") pod \"kube-apiserver-ci-4230.2.1-a-901fa91dbf\" (UID: \"110c0f8a1ed7c97d5d6b39d14047f7db\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.399278 kubelet[3124]: I0706 23:32:19.399272 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/110c0f8a1ed7c97d5d6b39d14047f7db-k8s-certs\") pod \"kube-apiserver-ci-4230.2.1-a-901fa91dbf\" (UID: \"110c0f8a1ed7c97d5d6b39d14047f7db\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.399732 kubelet[3124]: I0706 23:32:19.399325 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-ca-certs\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.399732 kubelet[3124]: I0706 23:32:19.399382 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/110c0f8a1ed7c97d5d6b39d14047f7db-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.1-a-901fa91dbf\" (UID: \"110c0f8a1ed7c97d5d6b39d14047f7db\") " pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.399732 kubelet[3124]: I0706 23:32:19.399437 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.399732 kubelet[3124]: I0706 23:32:19.399556 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d46e610cb0baef577224ecc0a267c9f9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" (UID: \"d46e610cb0baef577224ecc0a267c9f9\") " pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:19.468317 sudo[3168]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:32:19.469234 sudo[3168]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:32:19.848227 sudo[3168]: pam_unix(sudo:session): session closed for user root Jul 6 23:32:20.095608 kubelet[3124]: I0706 23:32:20.095591 3124 apiserver.go:52] "Watching apiserver" Jul 6 23:32:20.097847 kubelet[3124]: I0706 23:32:20.097837 3124 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:32:20.107029 kubelet[3124]: I0706 23:32:20.106940 3124 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:20.107029 kubelet[3124]: I0706 23:32:20.106984 3124 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:20.110907 kubelet[3124]: W0706 23:32:20.110899 3124 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:20.110958 kubelet[3124]: E0706 23:32:20.110942 3124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.1-a-901fa91dbf\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:20.110986 kubelet[3124]: W0706 23:32:20.110966 3124 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:32:20.111004 kubelet[3124]: E0706 23:32:20.110990 3124 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.1-a-901fa91dbf\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" Jul 6 23:32:20.117924 kubelet[3124]: I0706 23:32:20.117850 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.1-a-901fa91dbf" podStartSLOduration=2.117840166 podStartE2EDuration="2.117840166s" podCreationTimestamp="2025-07-06 23:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:20.117792187 +0000 UTC m=+1.076609421" watchObservedRunningTime="2025-07-06 23:32:20.117840166 +0000 UTC m=+1.076657399" Jul 6 23:32:20.121494 kubelet[3124]: I0706 23:32:20.121471 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.1-a-901fa91dbf" podStartSLOduration=2.121463331 podStartE2EDuration="2.121463331s" podCreationTimestamp="2025-07-06 23:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:20.12146099 +0000 UTC m=+1.080278229" watchObservedRunningTime="2025-07-06 23:32:20.121463331 +0000 UTC m=+1.080280563" Jul 6 23:32:20.129733 kubelet[3124]: I0706 23:32:20.129644 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.1-a-901fa91dbf" podStartSLOduration=2.129636248 podStartE2EDuration="2.129636248s" podCreationTimestamp="2025-07-06 23:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:20.125536271 +0000 UTC m=+1.084353511" watchObservedRunningTime="2025-07-06 23:32:20.129636248 +0000 UTC m=+1.088453481" Jul 6 23:32:21.045279 sudo[2086]: pam_unix(sudo:session): session closed for user root Jul 6 23:32:21.046162 sshd[2085]: Connection closed by 139.178.89.65 port 42658 Jul 6 23:32:21.046364 sshd-session[2081]: pam_unix(sshd:session): session closed for user core Jul 6 23:32:21.048262 systemd[1]: sshd@8-147.75.203.59:22-139.178.89.65:42658.service: Deactivated successfully. Jul 6 23:32:21.049496 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:32:21.049635 systemd[1]: session-11.scope: Consumed 3.397s CPU time, 267.6M memory peak. 
Jul 6 23:32:21.051018 systemd-logind[1790]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:32:21.051888 systemd-logind[1790]: Removed session 11. Jul 6 23:32:22.911339 kubelet[3124]: I0706 23:32:22.911267 3124 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:32:22.912350 containerd[1805]: time="2025-07-06T23:32:22.912016396Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:32:22.913063 kubelet[3124]: I0706 23:32:22.912412 3124 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:32:23.815684 systemd[1]: Created slice kubepods-besteffort-podd1cc9818_07a7_4f8a_b015_1086055ffe99.slice - libcontainer container kubepods-besteffort-podd1cc9818_07a7_4f8a_b015_1086055ffe99.slice. Jul 6 23:32:23.830425 kubelet[3124]: I0706 23:32:23.830376 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76t7\" (UniqueName: \"kubernetes.io/projected/d1cc9818-07a7-4f8a-b015-1086055ffe99-kube-api-access-t76t7\") pod \"kube-proxy-wh48j\" (UID: \"d1cc9818-07a7-4f8a-b015-1086055ffe99\") " pod="kube-system/kube-proxy-wh48j" Jul 6 23:32:23.830583 kubelet[3124]: I0706 23:32:23.830439 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-hostproc\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.830583 kubelet[3124]: I0706 23:32:23.830480 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cni-path\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.830583 kubelet[3124]: I0706 23:32:23.830515 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7rb8\" (UniqueName: \"kubernetes.io/projected/e1d12d00-bfab-465e-bb69-f2d25979c176-kube-api-access-b7rb8\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.830583 kubelet[3124]: I0706 23:32:23.830563 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1cc9818-07a7-4f8a-b015-1086055ffe99-kube-proxy\") pod \"kube-proxy-wh48j\" (UID: \"d1cc9818-07a7-4f8a-b015-1086055ffe99\") " pod="kube-system/kube-proxy-wh48j" Jul 6 23:32:23.830800 kubelet[3124]: I0706 23:32:23.830593 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1d12d00-bfab-465e-bb69-f2d25979c176-clustermesh-secrets\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.830800 kubelet[3124]: I0706 23:32:23.830649 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-config-path\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.830800 kubelet[3124]: 
I0706 23:32:23.830696 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-run\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.830800 kubelet[3124]: I0706 23:32:23.830731 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-etc-cni-netd\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.830800 kubelet[3124]: I0706 23:32:23.830762 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-xtables-lock\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.830800 kubelet[3124]: I0706 23:32:23.830791 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1cc9818-07a7-4f8a-b015-1086055ffe99-xtables-lock\") pod \"kube-proxy-wh48j\" (UID: \"d1cc9818-07a7-4f8a-b015-1086055ffe99\") " pod="kube-system/kube-proxy-wh48j" Jul 6 23:32:23.831067 kubelet[3124]: I0706 23:32:23.830825 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1cc9818-07a7-4f8a-b015-1086055ffe99-lib-modules\") pod \"kube-proxy-wh48j\" (UID: \"d1cc9818-07a7-4f8a-b015-1086055ffe99\") " pod="kube-system/kube-proxy-wh48j" Jul 6 23:32:23.831067 kubelet[3124]: I0706 23:32:23.830858 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-bpf-maps\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.831067 kubelet[3124]: I0706 23:32:23.830890 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-lib-modules\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.831067 kubelet[3124]: I0706 23:32:23.830924 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-host-proc-sys-kernel\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.831067 kubelet[3124]: I0706 23:32:23.830952 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1d12d00-bfab-465e-bb69-f2d25979c176-hubble-tls\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.831067 kubelet[3124]: I0706 23:32:23.830983 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-cgroup\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.831336 kubelet[3124]: I0706 23:32:23.831016 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-host-proc-sys-net\") pod \"cilium-gzqkk\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " pod="kube-system/cilium-gzqkk" Jul 6 23:32:23.835095 systemd[1]: Created slice kubepods-burstable-pode1d12d00_bfab_465e_bb69_f2d25979c176.slice - libcontainer container kubepods-burstable-pode1d12d00_bfab_465e_bb69_f2d25979c176.slice. Jul 6 23:32:23.995809 systemd[1]: Created slice kubepods-besteffort-pod3b0cc778_4007_44d2_9744_7444c8ab67da.slice - libcontainer container kubepods-besteffort-pod3b0cc778_4007_44d2_9744_7444c8ab67da.slice. Jul 6 23:32:24.033764 kubelet[3124]: I0706 23:32:24.033694 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b0cc778-4007-44d2-9744-7444c8ab67da-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-chg6t\" (UID: \"3b0cc778-4007-44d2-9744-7444c8ab67da\") " pod="kube-system/cilium-operator-6c4d7847fc-chg6t" Jul 6 23:32:24.034147 kubelet[3124]: I0706 23:32:24.033784 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmp9w\" (UniqueName: \"kubernetes.io/projected/3b0cc778-4007-44d2-9744-7444c8ab67da-kube-api-access-dmp9w\") pod \"cilium-operator-6c4d7847fc-chg6t\" (UID: \"3b0cc778-4007-44d2-9744-7444c8ab67da\") " pod="kube-system/cilium-operator-6c4d7847fc-chg6t" Jul 6 23:32:24.135422 containerd[1805]: time="2025-07-06T23:32:24.135186770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wh48j,Uid:d1cc9818-07a7-4f8a-b015-1086055ffe99,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:24.137770 containerd[1805]: time="2025-07-06T23:32:24.137698738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gzqkk,Uid:e1d12d00-bfab-465e-bb69-f2d25979c176,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:24.146890 containerd[1805]: time="2025-07-06T23:32:24.146853026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:24.146890 containerd[1805]: time="2025-07-06T23:32:24.146878602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:24.146890 containerd[1805]: time="2025-07-06T23:32:24.146885285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:24.146994 containerd[1805]: time="2025-07-06T23:32:24.146920594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:24.164703 systemd[1]: Started cri-containerd-a864ed77f243150979bbd24baf02d3029cc82b6e113654c7d9627751465b9713.scope - libcontainer container a864ed77f243150979bbd24baf02d3029cc82b6e113654c7d9627751465b9713. Jul 6 23:32:24.165914 containerd[1805]: time="2025-07-06T23:32:24.165873500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:24.165914 containerd[1805]: time="2025-07-06T23:32:24.165904320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:24.165914 containerd[1805]: time="2025-07-06T23:32:24.165910909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:24.166005 containerd[1805]: time="2025-07-06T23:32:24.165952307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:24.171427 systemd[1]: Started cri-containerd-dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97.scope - libcontainer container dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97. Jul 6 23:32:24.174965 containerd[1805]: time="2025-07-06T23:32:24.174945737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wh48j,Uid:d1cc9818-07a7-4f8a-b015-1086055ffe99,Namespace:kube-system,Attempt:0,} returns sandbox id \"a864ed77f243150979bbd24baf02d3029cc82b6e113654c7d9627751465b9713\"" Jul 6 23:32:24.176043 containerd[1805]: time="2025-07-06T23:32:24.176029435Z" level=info msg="CreateContainer within sandbox \"a864ed77f243150979bbd24baf02d3029cc82b6e113654c7d9627751465b9713\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:32:24.181066 containerd[1805]: time="2025-07-06T23:32:24.181046615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gzqkk,Uid:e1d12d00-bfab-465e-bb69-f2d25979c176,Namespace:kube-system,Attempt:0,} returns sandbox id \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\"" Jul 6 23:32:24.181629 containerd[1805]: time="2025-07-06T23:32:24.181615866Z" level=info msg="CreateContainer within sandbox \"a864ed77f243150979bbd24baf02d3029cc82b6e113654c7d9627751465b9713\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e585cbcdb5f443979ee12f07224fc47fb9b4b529ad1593dc1e311f540509d4e\"" Jul 6 23:32:24.181791 containerd[1805]: time="2025-07-06T23:32:24.181778464Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:32:24.181817 containerd[1805]: time="2025-07-06T23:32:24.181790511Z" level=info msg="StartContainer for \"0e585cbcdb5f443979ee12f07224fc47fb9b4b529ad1593dc1e311f540509d4e\"" Jul 6 23:32:24.204861 systemd[1]: Started cri-containerd-0e585cbcdb5f443979ee12f07224fc47fb9b4b529ad1593dc1e311f540509d4e.scope - libcontainer container 0e585cbcdb5f443979ee12f07224fc47fb9b4b529ad1593dc1e311f540509d4e. Jul 6 23:32:24.218747 containerd[1805]: time="2025-07-06T23:32:24.218721878Z" level=info msg="StartContainer for \"0e585cbcdb5f443979ee12f07224fc47fb9b4b529ad1593dc1e311f540509d4e\" returns successfully" Jul 6 23:32:24.298967 containerd[1805]: time="2025-07-06T23:32:24.298869048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-chg6t,Uid:3b0cc778-4007-44d2-9744-7444c8ab67da,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:24.425289 containerd[1805]: time="2025-07-06T23:32:24.425190631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:24.425289 containerd[1805]: time="2025-07-06T23:32:24.425215269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:24.425289 containerd[1805]: time="2025-07-06T23:32:24.425222347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:24.425289 containerd[1805]: time="2025-07-06T23:32:24.425260135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:24.450779 systemd[1]: Started cri-containerd-083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec.scope - libcontainer container 083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec. Jul 6 23:32:24.480516 containerd[1805]: time="2025-07-06T23:32:24.480480032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-chg6t,Uid:3b0cc778-4007-44d2-9744-7444c8ab67da,Namespace:kube-system,Attempt:0,} returns sandbox id \"083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec\"" Jul 6 23:32:25.142408 kubelet[3124]: I0706 23:32:25.142233 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wh48j" podStartSLOduration=2.142182278 podStartE2EDuration="2.142182278s" podCreationTimestamp="2025-07-06 23:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:25.142140715 +0000 UTC m=+6.100958045" watchObservedRunningTime="2025-07-06 23:32:25.142182278 +0000 UTC m=+6.100999569" Jul 6 23:32:27.433080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600469352.mount: Deactivated successfully. Jul 6 23:32:28.219045 containerd[1805]: time="2025-07-06T23:32:28.218993907Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:28.219239 containerd[1805]: time="2025-07-06T23:32:28.219204347Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:32:28.219535 containerd[1805]: time="2025-07-06T23:32:28.219518532Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:28.220492 containerd[1805]: time="2025-07-06T23:32:28.220451581Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.038654908s" Jul 6 23:32:28.220492 containerd[1805]: time="2025-07-06T23:32:28.220465884Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:32:28.221048 containerd[1805]: time="2025-07-06T23:32:28.221035709Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:32:28.221438 containerd[1805]: time="2025-07-06T23:32:28.221424322Z" 
level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:32:28.225832 containerd[1805]: time="2025-07-06T23:32:28.225784981Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\"" Jul 6 23:32:28.226063 containerd[1805]: time="2025-07-06T23:32:28.226024916Z" level=info msg="StartContainer for \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\"" Jul 6 23:32:28.226829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508852599.mount: Deactivated successfully. Jul 6 23:32:28.245684 systemd[1]: Started cri-containerd-e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3.scope - libcontainer container e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3. Jul 6 23:32:28.256846 containerd[1805]: time="2025-07-06T23:32:28.256827436Z" level=info msg="StartContainer for \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\" returns successfully" Jul 6 23:32:28.261842 systemd[1]: cri-containerd-e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3.scope: Deactivated successfully. Jul 6 23:32:29.229176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3-rootfs.mount: Deactivated successfully. Jul 6 23:32:29.449466 containerd[1805]: time="2025-07-06T23:32:29.449434457Z" level=info msg="shim disconnected" id=e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3 namespace=k8s.io Jul 6 23:32:29.449466 containerd[1805]: time="2025-07-06T23:32:29.449463507Z" level=warning msg="cleaning up after shim disconnected" id=e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3 namespace=k8s.io Jul 6 23:32:29.449466 containerd[1805]: time="2025-07-06T23:32:29.449469036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:32:29.859890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103131169.mount: Deactivated successfully. 
Jul 6 23:32:30.070804 containerd[1805]: time="2025-07-06T23:32:30.070756934Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:30.071034 containerd[1805]: time="2025-07-06T23:32:30.070983785Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:32:30.071388 containerd[1805]: time="2025-07-06T23:32:30.071337703Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:32:30.072077 containerd[1805]: time="2025-07-06T23:32:30.072035329Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.850984114s" Jul 6 23:32:30.072077 containerd[1805]: time="2025-07-06T23:32:30.072051415Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:32:30.073030 containerd[1805]: time="2025-07-06T23:32:30.073018526Z" level=info msg="CreateContainer within sandbox \"083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:32:30.077343 containerd[1805]: time="2025-07-06T23:32:30.077295766Z" level=info msg="CreateContainer within sandbox \"083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\"" Jul 6 23:32:30.077584 containerd[1805]: time="2025-07-06T23:32:30.077542053Z" level=info msg="StartContainer for \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\"" Jul 6 23:32:30.092679 systemd[1]: Started cri-containerd-92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5.scope - libcontainer container 92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5. 
Jul 6 23:32:30.103924 containerd[1805]: time="2025-07-06T23:32:30.103896511Z" level=info msg="StartContainer for \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\" returns successfully" Jul 6 23:32:30.132607 containerd[1805]: time="2025-07-06T23:32:30.132232585Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:32:30.135636 kubelet[3124]: I0706 23:32:30.135586 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-chg6t" podStartSLOduration=1.5442727870000001 podStartE2EDuration="7.135570235s" podCreationTimestamp="2025-07-06 23:32:23 +0000 UTC" firstStartedPulling="2025-07-06 23:32:24.48114365 +0000 UTC m=+5.439960893" lastFinishedPulling="2025-07-06 23:32:30.072441106 +0000 UTC m=+11.031258341" observedRunningTime="2025-07-06 23:32:30.135226822 +0000 UTC m=+11.094044056" watchObservedRunningTime="2025-07-06 23:32:30.135570235 +0000 UTC m=+11.094387467" Jul 6 23:32:30.141875 containerd[1805]: time="2025-07-06T23:32:30.141847463Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\"" Jul 6 23:32:30.142231 containerd[1805]: time="2025-07-06T23:32:30.142213813Z" level=info msg="StartContainer for \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\"" Jul 6 23:32:30.166778 systemd[1]: Started cri-containerd-ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9.scope - libcontainer container ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9. Jul 6 23:32:30.178354 containerd[1805]: time="2025-07-06T23:32:30.178330509Z" level=info msg="StartContainer for \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\" returns successfully" Jul 6 23:32:30.184901 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:32:30.185048 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:32:30.185151 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:32:30.199069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:32:30.199285 systemd[1]: cri-containerd-ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9.scope: Deactivated successfully. Jul 6 23:32:30.204966 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:32:30.365446 containerd[1805]: time="2025-07-06T23:32:30.365412363Z" level=info msg="shim disconnected" id=ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9 namespace=k8s.io Jul 6 23:32:30.365446 containerd[1805]: time="2025-07-06T23:32:30.365441122Z" level=warning msg="cleaning up after shim disconnected" id=ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9 namespace=k8s.io Jul 6 23:32:30.365446 containerd[1805]: time="2025-07-06T23:32:30.365445834Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:32:31.095806 update_engine[1792]: I20250706 23:32:31.095643 1792 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:32:31.133028 containerd[1805]: time="2025-07-06T23:32:31.133007460Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:32:31.140554 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3779) Jul 6 23:32:31.145348 containerd[1805]: time="2025-07-06T23:32:31.145329699Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\"" Jul 6 23:32:31.145687 containerd[1805]: time="2025-07-06T23:32:31.145672791Z" level=info msg="StartContainer for \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\"" Jul 6 23:32:31.169533 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (3779) Jul 6 23:32:31.204827 systemd[1]: Started cri-containerd-979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072.scope - libcontainer container 979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072. Jul 6 23:32:31.222629 containerd[1805]: time="2025-07-06T23:32:31.222604695Z" level=info msg="StartContainer for \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\" returns successfully" Jul 6 23:32:31.223580 systemd[1]: cri-containerd-979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072.scope: Deactivated successfully. Jul 6 23:32:31.233147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072-rootfs.mount: Deactivated successfully. Jul 6 23:32:31.235117 containerd[1805]: time="2025-07-06T23:32:31.235085160Z" level=info msg="shim disconnected" id=979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072 namespace=k8s.io Jul 6 23:32:31.235181 containerd[1805]: time="2025-07-06T23:32:31.235118140Z" level=warning msg="cleaning up after shim disconnected" id=979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072 namespace=k8s.io Jul 6 23:32:31.235181 containerd[1805]: time="2025-07-06T23:32:31.235126928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:32:32.135930 containerd[1805]: time="2025-07-06T23:32:32.135904119Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:32:32.140486 containerd[1805]: time="2025-07-06T23:32:32.140447054Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\"" Jul 6 23:32:32.140806 containerd[1805]: time="2025-07-06T23:32:32.140740449Z" level=info msg="StartContainer for \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\"" Jul 6 23:32:32.142074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180566098.mount: Deactivated successfully. Jul 6 23:32:32.170065 systemd[1]: Started cri-containerd-0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6.scope - libcontainer container 0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6. 
Jul 6 23:32:32.219184 containerd[1805]: time="2025-07-06T23:32:32.219140213Z" level=info msg="StartContainer for \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\" returns successfully" Jul 6 23:32:32.219194 systemd[1]: cri-containerd-0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6.scope: Deactivated successfully. Jul 6 23:32:32.238625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6-rootfs.mount: Deactivated successfully. Jul 6 23:32:32.238903 containerd[1805]: time="2025-07-06T23:32:32.238840426Z" level=info msg="shim disconnected" id=0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6 namespace=k8s.io Jul 6 23:32:32.238903 containerd[1805]: time="2025-07-06T23:32:32.238876366Z" level=warning msg="cleaning up after shim disconnected" id=0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6 namespace=k8s.io Jul 6 23:32:32.238903 containerd[1805]: time="2025-07-06T23:32:32.238882218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:32:33.149744 containerd[1805]: time="2025-07-06T23:32:33.149656161Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:32:33.162225 containerd[1805]: time="2025-07-06T23:32:33.162164378Z" level=info msg="CreateContainer within sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\"" Jul 6 23:32:33.162492 containerd[1805]: time="2025-07-06T23:32:33.162480877Z" level=info msg="StartContainer for \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\"" Jul 6 23:32:33.163370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395988129.mount: Deactivated successfully. Jul 6 23:32:33.182839 systemd[1]: Started cri-containerd-4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489.scope - libcontainer container 4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489. Jul 6 23:32:33.196266 containerd[1805]: time="2025-07-06T23:32:33.196238655Z" level=info msg="StartContainer for \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\" returns successfully" Jul 6 23:32:33.305664 kubelet[3124]: I0706 23:32:33.305620 3124 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:32:33.319060 systemd[1]: Created slice kubepods-burstable-pode32f3f85_2a0c_47f5_969f_4098afe68db6.slice - libcontainer container kubepods-burstable-pode32f3f85_2a0c_47f5_969f_4098afe68db6.slice. Jul 6 23:32:33.321516 systemd[1]: Created slice kubepods-burstable-pod60f39f66_1fe1_429f_8474_d51ca4192a9d.slice - libcontainer container kubepods-burstable-pod60f39f66_1fe1_429f_8474_d51ca4192a9d.slice. 
Jul 6 23:32:33.397363 kubelet[3124]: I0706 23:32:33.397310 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67k7\" (UniqueName: \"kubernetes.io/projected/60f39f66-1fe1-429f-8474-d51ca4192a9d-kube-api-access-s67k7\") pod \"coredns-668d6bf9bc-hwxvv\" (UID: \"60f39f66-1fe1-429f-8474-d51ca4192a9d\") " pod="kube-system/coredns-668d6bf9bc-hwxvv" Jul 6 23:32:33.397363 kubelet[3124]: I0706 23:32:33.397342 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gggqj\" (UniqueName: \"kubernetes.io/projected/e32f3f85-2a0c-47f5-969f-4098afe68db6-kube-api-access-gggqj\") pod \"coredns-668d6bf9bc-tjbmn\" (UID: \"e32f3f85-2a0c-47f5-969f-4098afe68db6\") " pod="kube-system/coredns-668d6bf9bc-tjbmn" Jul 6 23:32:33.397363 kubelet[3124]: I0706 23:32:33.397355 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f39f66-1fe1-429f-8474-d51ca4192a9d-config-volume\") pod \"coredns-668d6bf9bc-hwxvv\" (UID: \"60f39f66-1fe1-429f-8474-d51ca4192a9d\") " pod="kube-system/coredns-668d6bf9bc-hwxvv" Jul 6 23:32:33.397363 kubelet[3124]: I0706 23:32:33.397366 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e32f3f85-2a0c-47f5-969f-4098afe68db6-config-volume\") pod \"coredns-668d6bf9bc-tjbmn\" (UID: \"e32f3f85-2a0c-47f5-969f-4098afe68db6\") " pod="kube-system/coredns-668d6bf9bc-tjbmn" Jul 6 23:32:33.621507 containerd[1805]: time="2025-07-06T23:32:33.621375086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tjbmn,Uid:e32f3f85-2a0c-47f5-969f-4098afe68db6,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:33.624459 containerd[1805]: time="2025-07-06T23:32:33.624422199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hwxvv,Uid:60f39f66-1fe1-429f-8474-d51ca4192a9d,Namespace:kube-system,Attempt:0,}" Jul 6 23:32:34.154874 kubelet[3124]: I0706 23:32:34.154830 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gzqkk" podStartSLOduration=7.115522062 podStartE2EDuration="11.154814538s" podCreationTimestamp="2025-07-06 23:32:23 +0000 UTC" firstStartedPulling="2025-07-06 23:32:24.181591069 +0000 UTC m=+5.140408303" lastFinishedPulling="2025-07-06 23:32:28.220883544 +0000 UTC m=+9.179700779" observedRunningTime="2025-07-06 23:32:34.154787152 +0000 UTC m=+15.113604391" watchObservedRunningTime="2025-07-06 23:32:34.154814538 +0000 UTC m=+15.113631776" Jul 6 23:32:34.976379 systemd-networkd[1717]: cilium_host: Link UP Jul 6 23:32:34.976535 systemd-networkd[1717]: cilium_net: Link UP Jul 6 23:32:34.976682 systemd-networkd[1717]: cilium_net: Gained carrier Jul 6 23:32:34.976822 systemd-networkd[1717]: cilium_host: Gained carrier Jul 6 23:32:35.030131 systemd-networkd[1717]: cilium_vxlan: Link UP Jul 6 23:32:35.030135 systemd-networkd[1717]: cilium_vxlan: Gained carrier Jul 6 23:32:35.166534 kernel: NET: Registered PF_ALG protocol family Jul 6 23:32:35.453577 systemd-networkd[1717]: cilium_net: Gained IPv6LL Jul 6 23:32:35.571052 systemd-networkd[1717]: lxc_health: Link UP Jul 6 23:32:35.571272 systemd-networkd[1717]: lxc_health: Gained carrier Jul 6 23:32:35.684581 kernel: eth0: renamed from tmpf2e11 Jul 6 23:32:35.708589 kernel: eth0: renamed from tmpbff2b Jul 6 23:32:35.718216 
systemd-networkd[1717]: lxc322b02419d23: Link UP Jul 6 23:32:35.718352 systemd-networkd[1717]: lxc70af68a14a99: Link UP Jul 6 23:32:35.718670 systemd-networkd[1717]: lxc322b02419d23: Gained carrier Jul 6 23:32:35.718766 systemd-networkd[1717]: lxc70af68a14a99: Gained carrier Jul 6 23:32:35.940677 systemd-networkd[1717]: cilium_host: Gained IPv6LL Jul 6 23:32:36.453668 systemd-networkd[1717]: cilium_vxlan: Gained IPv6LL Jul 6 23:32:36.772660 systemd-networkd[1717]: lxc_health: Gained IPv6LL Jul 6 23:32:36.900658 systemd-networkd[1717]: lxc70af68a14a99: Gained IPv6LL Jul 6 23:32:36.964667 systemd-networkd[1717]: lxc322b02419d23: Gained IPv6LL Jul 6 23:32:37.150076 kubelet[3124]: I0706 23:32:37.150031 3124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:32:37.971500 containerd[1805]: time="2025-07-06T23:32:37.971420719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:37.971500 containerd[1805]: time="2025-07-06T23:32:37.971456632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:37.971500 containerd[1805]: time="2025-07-06T23:32:37.971464852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:37.971807 containerd[1805]: time="2025-07-06T23:32:37.971676832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:32:37.971807 containerd[1805]: time="2025-07-06T23:32:37.971704031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:32:37.971807 containerd[1805]: time="2025-07-06T23:32:37.971711515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:37.971807 containerd[1805]: time="2025-07-06T23:32:37.971749115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:37.971807 containerd[1805]: time="2025-07-06T23:32:37.971750147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:32:37.989844 systemd[1]: Started cri-containerd-bff2b8fae1d5b678b29340903e9072490d47b2c1f276006bfe5af3714766ba4f.scope - libcontainer container bff2b8fae1d5b678b29340903e9072490d47b2c1f276006bfe5af3714766ba4f. Jul 6 23:32:37.990732 systemd[1]: Started cri-containerd-f2e1137815674da0ec648bb5bd3f3310f4b7b6b5e9c6b708c69586e22b183d00.scope - libcontainer container f2e1137815674da0ec648bb5bd3f3310f4b7b6b5e9c6b708c69586e22b183d00. 
Jul 6 23:32:38.012101 containerd[1805]: time="2025-07-06T23:32:38.012070512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tjbmn,Uid:e32f3f85-2a0c-47f5-969f-4098afe68db6,Namespace:kube-system,Attempt:0,} returns sandbox id \"bff2b8fae1d5b678b29340903e9072490d47b2c1f276006bfe5af3714766ba4f\"" Jul 6 23:32:38.012182 containerd[1805]: time="2025-07-06T23:32:38.012151755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hwxvv,Uid:60f39f66-1fe1-429f-8474-d51ca4192a9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e1137815674da0ec648bb5bd3f3310f4b7b6b5e9c6b708c69586e22b183d00\"" Jul 6 23:32:38.013103 containerd[1805]: time="2025-07-06T23:32:38.013091233Z" level=info msg="CreateContainer within sandbox \"bff2b8fae1d5b678b29340903e9072490d47b2c1f276006bfe5af3714766ba4f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:32:38.013166 containerd[1805]: time="2025-07-06T23:32:38.013094830Z" level=info msg="CreateContainer within sandbox \"f2e1137815674da0ec648bb5bd3f3310f4b7b6b5e9c6b708c69586e22b183d00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:32:38.018071 containerd[1805]: time="2025-07-06T23:32:38.018048779Z" level=info msg="CreateContainer within sandbox \"bff2b8fae1d5b678b29340903e9072490d47b2c1f276006bfe5af3714766ba4f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44d4a7c90a73455162856fa852ff861ac7907a139a69f142921304a554107154\"" Jul 6 23:32:38.018281 containerd[1805]: time="2025-07-06T23:32:38.018268116Z" level=info msg="StartContainer for \"44d4a7c90a73455162856fa852ff861ac7907a139a69f142921304a554107154\"" Jul 6 23:32:38.018992 containerd[1805]: time="2025-07-06T23:32:38.018974398Z" level=info msg="CreateContainer within sandbox \"f2e1137815674da0ec648bb5bd3f3310f4b7b6b5e9c6b708c69586e22b183d00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18438c69f5e7749fc933205fae7984db8fbb43625bc633073911d0c2ae8c7aa9\"" Jul 6 23:32:38.019170 containerd[1805]: time="2025-07-06T23:32:38.019158534Z" level=info msg="StartContainer for \"18438c69f5e7749fc933205fae7984db8fbb43625bc633073911d0c2ae8c7aa9\"" Jul 6 23:32:38.042073 systemd[1]: Started cri-containerd-18438c69f5e7749fc933205fae7984db8fbb43625bc633073911d0c2ae8c7aa9.scope - libcontainer container 18438c69f5e7749fc933205fae7984db8fbb43625bc633073911d0c2ae8c7aa9. Jul 6 23:32:38.045399 systemd[1]: Started cri-containerd-44d4a7c90a73455162856fa852ff861ac7907a139a69f142921304a554107154.scope - libcontainer container 44d4a7c90a73455162856fa852ff861ac7907a139a69f142921304a554107154. 
Jul 6 23:32:38.081951 containerd[1805]: time="2025-07-06T23:32:38.081913639Z" level=info msg="StartContainer for \"18438c69f5e7749fc933205fae7984db8fbb43625bc633073911d0c2ae8c7aa9\" returns successfully" Jul 6 23:32:38.086198 containerd[1805]: time="2025-07-06T23:32:38.086167046Z" level=info msg="StartContainer for \"44d4a7c90a73455162856fa852ff861ac7907a139a69f142921304a554107154\" returns successfully" Jul 6 23:32:38.181224 kubelet[3124]: I0706 23:32:38.181184 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tjbmn" podStartSLOduration=15.181170564 podStartE2EDuration="15.181170564s" podCreationTimestamp="2025-07-06 23:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:38.180989349 +0000 UTC m=+19.139806596" watchObservedRunningTime="2025-07-06 23:32:38.181170564 +0000 UTC m=+19.139987803" Jul 6 23:32:38.181631 kubelet[3124]: I0706 23:32:38.181248 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hwxvv" podStartSLOduration=15.181244458 podStartE2EDuration="15.181244458s" podCreationTimestamp="2025-07-06 23:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:32:38.174343233 +0000 UTC m=+19.133160528" watchObservedRunningTime="2025-07-06 23:32:38.181244458 +0000 UTC m=+19.140061693" Jul 6 23:32:53.067843 kubelet[3124]: I0706 23:32:53.067711 3124 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:35:20.727074 systemd[1]: Started sshd@9-147.75.203.59:22-185.156.73.234:38986.service - OpenSSH per-connection server daemon (185.156.73.234:38986). Jul 6 23:35:22.700361 sshd[4719]: Invalid user telecomadmin from 185.156.73.234 port 38986 Jul 6 23:35:22.849260 sshd[4719]: Connection closed by invalid user telecomadmin 185.156.73.234 port 38986 [preauth] Jul 6 23:35:22.852750 systemd[1]: sshd@9-147.75.203.59:22-185.156.73.234:38986.service: Deactivated successfully. Jul 6 23:35:24.194815 update_engine[1792]: I20250706 23:35:24.194660 1792 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 6 23:35:24.194815 update_engine[1792]: I20250706 23:35:24.194761 1792 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 6 23:35:24.196089 update_engine[1792]: I20250706 23:35:24.195140 1792 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 6 23:35:24.196381 update_engine[1792]: I20250706 23:35:24.196338 1792 omaha_request_params.cc:62] Current group set to stable Jul 6 23:35:24.196457 update_engine[1792]: I20250706 23:35:24.196441 1792 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 6 23:35:24.196499 update_engine[1792]: I20250706 23:35:24.196453 1792 update_attempter.cc:643] Scheduling an action processor start. 
Jul 6 23:35:24.196499 update_engine[1792]: I20250706 23:35:24.196470 1792 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:35:24.196588 update_engine[1792]: I20250706 23:35:24.196501 1792 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 6 23:35:24.196669 update_engine[1792]: I20250706 23:35:24.196590 1792 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:35:24.196669 update_engine[1792]: I20250706 23:35:24.196604 1792 omaha_request_action.cc:272] Request: Jul 6 23:35:24.196669 update_engine[1792]: Jul 6 23:35:24.196669 update_engine[1792]: Jul 6 23:35:24.196669 update_engine[1792]: Jul 6 23:35:24.196669 update_engine[1792]: Jul 6 23:35:24.196669 update_engine[1792]: Jul 6 23:35:24.196669 update_engine[1792]: Jul 6 23:35:24.196669 update_engine[1792]: Jul 6 23:35:24.196669 update_engine[1792]: Jul 6 23:35:24.196669 update_engine[1792]: I20250706 23:35:24.196611 1792 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:35:24.197011 locksmithd[1841]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 6 23:35:24.198130 update_engine[1792]: I20250706 23:35:24.198081 1792 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:35:24.198484 update_engine[1792]: I20250706 23:35:24.198438 1792 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:35:24.198838 update_engine[1792]: E20250706 23:35:24.198781 1792 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:35:24.198914 update_engine[1792]: I20250706 23:35:24.198851 1792 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 6 23:35:34.095813 update_engine[1792]: I20250706 23:35:34.095657 1792 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:35:34.096851 update_engine[1792]: I20250706 23:35:34.096246 1792 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:35:34.096971 update_engine[1792]: I20250706 23:35:34.096863 1792 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:35:34.097367 update_engine[1792]: E20250706 23:35:34.097258 1792 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:35:34.097572 update_engine[1792]: I20250706 23:35:34.097427 1792 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 6 23:35:44.105676 update_engine[1792]: I20250706 23:35:44.105561 1792 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:35:44.106777 update_engine[1792]: I20250706 23:35:44.106096 1792 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:35:44.106777 update_engine[1792]: I20250706 23:35:44.106683 1792 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 6 23:35:44.107170 update_engine[1792]: E20250706 23:35:44.107097 1792 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:35:44.107294 update_engine[1792]: I20250706 23:35:44.107236 1792 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 6 23:35:54.099212 update_engine[1792]: I20250706 23:35:54.099058 1792 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:35:54.100282 update_engine[1792]: I20250706 23:35:54.099646 1792 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:35:54.100282 update_engine[1792]: I20250706 23:35:54.100240 1792 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 6 23:35:54.100994 update_engine[1792]: E20250706 23:35:54.100908 1792 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:35:54.101219 update_engine[1792]: I20250706 23:35:54.101027 1792 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 6 23:35:54.101219 update_engine[1792]: I20250706 23:35:54.101056 1792 omaha_request_action.cc:617] Omaha request response: Jul 6 23:35:54.101418 update_engine[1792]: E20250706 23:35:54.101219 1792 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 6 23:35:54.101418 update_engine[1792]: I20250706 23:35:54.101267 1792 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 6 23:35:54.101418 update_engine[1792]: I20250706 23:35:54.101285 1792 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:35:54.101418 update_engine[1792]: I20250706 23:35:54.101300 1792 update_attempter.cc:306] Processing Done. Jul 6 23:35:54.101418 update_engine[1792]: E20250706 23:35:54.101332 1792 update_attempter.cc:619] Update failed. Jul 6 23:35:54.101418 update_engine[1792]: I20250706 23:35:54.101349 1792 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 6 23:35:54.101418 update_engine[1792]: I20250706 23:35:54.101364 1792 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 6 23:35:54.101418 update_engine[1792]: I20250706 23:35:54.101388 1792 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 6 23:35:54.102141 update_engine[1792]: I20250706 23:35:54.101562 1792 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 6 23:35:54.102141 update_engine[1792]: I20250706 23:35:54.101626 1792 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 6 23:35:54.102141 update_engine[1792]: I20250706 23:35:54.101646 1792 omaha_request_action.cc:272] Request: Jul 6 23:35:54.102141 update_engine[1792]: Jul 6 23:35:54.102141 update_engine[1792]: Jul 6 23:35:54.102141 update_engine[1792]: Jul 6 23:35:54.102141 update_engine[1792]: Jul 6 23:35:54.102141 update_engine[1792]: Jul 6 23:35:54.102141 update_engine[1792]: Jul 6 23:35:54.102141 update_engine[1792]: I20250706 23:35:54.101662 1792 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 6 23:35:54.102141 update_engine[1792]: I20250706 23:35:54.102080 1792 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 6 23:35:54.103167 update_engine[1792]: I20250706 23:35:54.102591 1792 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 6 23:35:54.103167 update_engine[1792]: E20250706 23:35:54.102972 1792 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 6 23:35:54.103167 update_engine[1792]: I20250706 23:35:54.103096 1792 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 6 23:35:54.103167 update_engine[1792]: I20250706 23:35:54.103124 1792 omaha_request_action.cc:617] Omaha request response: Jul 6 23:35:54.103167 update_engine[1792]: I20250706 23:35:54.103143 1792 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:35:54.103167 update_engine[1792]: I20250706 23:35:54.103158 1792 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 6 23:35:54.103777 locksmithd[1841]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 6 23:35:54.104434 update_engine[1792]: I20250706 23:35:54.103173 1792 update_attempter.cc:306] Processing Done. Jul 6 23:35:54.104434 update_engine[1792]: I20250706 23:35:54.103190 1792 update_attempter.cc:310] Error event sent. Jul 6 23:35:54.104434 update_engine[1792]: I20250706 23:35:54.103216 1792 update_check_scheduler.cc:74] Next update check in 41m11s Jul 6 23:35:54.104748 locksmithd[1841]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 6 23:38:39.821395 systemd[1]: Started sshd@10-147.75.203.59:22-139.178.89.65:48014.service - OpenSSH per-connection server daemon (139.178.89.65:48014). Jul 6 23:38:39.883534 sshd[4750]: Accepted publickey for core from 139.178.89.65 port 48014 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:38:39.884833 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:38:39.888943 systemd-logind[1790]: New session 12 of user core. Jul 6 23:38:39.910019 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:38:40.040868 sshd[4752]: Connection closed by 139.178.89.65 port 48014 Jul 6 23:38:40.041036 sshd-session[4750]: pam_unix(sshd:session): session closed for user core Jul 6 23:38:40.043027 systemd[1]: sshd@10-147.75.203.59:22-139.178.89.65:48014.service: Deactivated successfully. Jul 6 23:38:40.043941 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:38:40.044373 systemd-logind[1790]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:38:40.045010 systemd-logind[1790]: Removed session 12. Jul 6 23:38:45.087063 systemd[1]: Started sshd@11-147.75.203.59:22-139.178.89.65:48028.service - OpenSSH per-connection server daemon (139.178.89.65:48028). Jul 6 23:38:45.118973 sshd[4778]: Accepted publickey for core from 139.178.89.65 port 48028 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:38:45.119625 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:38:45.122289 systemd-logind[1790]: New session 13 of user core. Jul 6 23:38:45.139104 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:38:45.238091 sshd[4780]: Connection closed by 139.178.89.65 port 48028 Jul 6 23:38:45.238282 sshd-session[4778]: pam_unix(sshd:session): session closed for user core Jul 6 23:38:45.239949 systemd[1]: sshd@11-147.75.203.59:22-139.178.89.65:48028.service: Deactivated successfully. Jul 6 23:38:45.240908 systemd[1]: session-13.scope: Deactivated successfully. 
Jul 6 23:38:45.241597 systemd-logind[1790]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:38:45.242076 systemd-logind[1790]: Removed session 13. Jul 6 23:38:50.253253 systemd[1]: Started sshd@12-147.75.203.59:22-139.178.89.65:55724.service - OpenSSH per-connection server daemon (139.178.89.65:55724). Jul 6 23:38:50.288548 sshd[4805]: Accepted publickey for core from 139.178.89.65 port 55724 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:38:50.289148 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:38:50.291735 systemd-logind[1790]: New session 14 of user core. Jul 6 23:38:50.313648 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:38:50.403153 sshd[4807]: Connection closed by 139.178.89.65 port 55724 Jul 6 23:38:50.403359 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Jul 6 23:38:50.405053 systemd[1]: sshd@12-147.75.203.59:22-139.178.89.65:55724.service: Deactivated successfully. Jul 6 23:38:50.406105 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:38:50.406976 systemd-logind[1790]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:38:50.407671 systemd-logind[1790]: Removed session 14. Jul 6 23:38:55.435842 systemd[1]: Started sshd@13-147.75.203.59:22-139.178.89.65:55732.service - OpenSSH per-connection server daemon (139.178.89.65:55732). Jul 6 23:38:55.475642 sshd[4834]: Accepted publickey for core from 139.178.89.65 port 55732 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:38:55.476476 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:38:55.479879 systemd-logind[1790]: New session 15 of user core. Jul 6 23:38:55.492779 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:38:55.583468 sshd[4836]: Connection closed by 139.178.89.65 port 55732 Jul 6 23:38:55.583732 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Jul 6 23:38:55.599170 systemd[1]: sshd@13-147.75.203.59:22-139.178.89.65:55732.service: Deactivated successfully. Jul 6 23:38:55.600726 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:38:55.602111 systemd-logind[1790]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:38:55.603450 systemd[1]: Started sshd@14-147.75.203.59:22-139.178.89.65:55746.service - OpenSSH per-connection server daemon (139.178.89.65:55746). Jul 6 23:38:55.604503 systemd-logind[1790]: Removed session 15. Jul 6 23:38:55.639985 sshd[4861]: Accepted publickey for core from 139.178.89.65 port 55746 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:38:55.640623 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:38:55.643295 systemd-logind[1790]: New session 16 of user core. Jul 6 23:38:55.658777 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:38:55.800284 sshd[4865]: Connection closed by 139.178.89.65 port 55746 Jul 6 23:38:55.800477 sshd-session[4861]: pam_unix(sshd:session): session closed for user core Jul 6 23:38:55.813764 systemd[1]: sshd@14-147.75.203.59:22-139.178.89.65:55746.service: Deactivated successfully. Jul 6 23:38:55.814627 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:38:55.815320 systemd-logind[1790]: Session 16 logged out. Waiting for processes to exit. 
Jul 6 23:38:55.816024 systemd[1]: Started sshd@15-147.75.203.59:22-139.178.89.65:55752.service - OpenSSH per-connection server daemon (139.178.89.65:55752). Jul 6 23:38:55.816426 systemd-logind[1790]: Removed session 16. Jul 6 23:38:55.850652 sshd[4887]: Accepted publickey for core from 139.178.89.65 port 55752 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:38:55.851296 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:38:55.854161 systemd-logind[1790]: New session 17 of user core. Jul 6 23:38:55.868827 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:38:55.954472 sshd[4890]: Connection closed by 139.178.89.65 port 55752 Jul 6 23:38:55.954696 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Jul 6 23:38:55.956386 systemd[1]: sshd@15-147.75.203.59:22-139.178.89.65:55752.service: Deactivated successfully. Jul 6 23:38:55.957360 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:38:55.958065 systemd-logind[1790]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:38:55.958526 systemd-logind[1790]: Removed session 17. Jul 6 23:39:00.986819 systemd[1]: Started sshd@16-147.75.203.59:22-139.178.89.65:42796.service - OpenSSH per-connection server daemon (139.178.89.65:42796). Jul 6 23:39:01.018399 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 42796 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:01.019013 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:01.021723 systemd-logind[1790]: New session 18 of user core. Jul 6 23:39:01.034016 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:39:01.128084 sshd[4917]: Connection closed by 139.178.89.65 port 42796 Jul 6 23:39:01.128281 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:01.150894 systemd[1]: sshd@16-147.75.203.59:22-139.178.89.65:42796.service: Deactivated successfully. Jul 6 23:39:01.155319 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:39:01.159065 systemd-logind[1790]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:39:01.182690 systemd[1]: Started sshd@17-147.75.203.59:22-139.178.89.65:42800.service - OpenSSH per-connection server daemon (139.178.89.65:42800). Jul 6 23:39:01.186035 systemd-logind[1790]: Removed session 18. Jul 6 23:39:01.249576 sshd[4941]: Accepted publickey for core from 139.178.89.65 port 42800 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:01.250294 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:01.253167 systemd-logind[1790]: New session 19 of user core. Jul 6 23:39:01.265994 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:39:01.451003 sshd[4944]: Connection closed by 139.178.89.65 port 42800 Jul 6 23:39:01.451275 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:01.469119 systemd[1]: sshd@17-147.75.203.59:22-139.178.89.65:42800.service: Deactivated successfully. Jul 6 23:39:01.470953 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:39:01.471448 systemd-logind[1790]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:39:01.472370 systemd[1]: Started sshd@18-147.75.203.59:22-139.178.89.65:42816.service - OpenSSH per-connection server daemon (139.178.89.65:42816). 
Jul 6 23:39:01.472937 systemd-logind[1790]: Removed session 19. Jul 6 23:39:01.506621 sshd[4966]: Accepted publickey for core from 139.178.89.65 port 42816 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:01.507277 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:01.510276 systemd-logind[1790]: New session 20 of user core. Jul 6 23:39:01.518722 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:39:02.386772 sshd[4971]: Connection closed by 139.178.89.65 port 42816 Jul 6 23:39:02.386983 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:02.402998 systemd[1]: sshd@18-147.75.203.59:22-139.178.89.65:42816.service: Deactivated successfully. Jul 6 23:39:02.404607 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:39:02.405854 systemd-logind[1790]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:39:02.422057 systemd[1]: Started sshd@19-147.75.203.59:22-139.178.89.65:42824.service - OpenSSH per-connection server daemon (139.178.89.65:42824). Jul 6 23:39:02.423617 systemd-logind[1790]: Removed session 20. Jul 6 23:39:02.487933 sshd[5002]: Accepted publickey for core from 139.178.89.65 port 42824 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:02.489133 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:02.493946 systemd-logind[1790]: New session 21 of user core. Jul 6 23:39:02.503789 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:39:02.644521 sshd[5005]: Connection closed by 139.178.89.65 port 42824 Jul 6 23:39:02.644685 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:02.670010 systemd[1]: sshd@19-147.75.203.59:22-139.178.89.65:42824.service: Deactivated successfully. Jul 6 23:39:02.674967 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:39:02.678724 systemd-logind[1790]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:39:02.702380 systemd[1]: Started sshd@20-147.75.203.59:22-139.178.89.65:42832.service - OpenSSH per-connection server daemon (139.178.89.65:42832). Jul 6 23:39:02.705275 systemd-logind[1790]: Removed session 21. Jul 6 23:39:02.772717 sshd[5027]: Accepted publickey for core from 139.178.89.65 port 42832 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:02.773537 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:02.776378 systemd-logind[1790]: New session 22 of user core. Jul 6 23:39:02.797740 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:39:02.928546 sshd[5030]: Connection closed by 139.178.89.65 port 42832 Jul 6 23:39:02.928688 sshd-session[5027]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:02.930345 systemd[1]: sshd@20-147.75.203.59:22-139.178.89.65:42832.service: Deactivated successfully. Jul 6 23:39:02.931305 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:39:02.931994 systemd-logind[1790]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:39:02.932456 systemd-logind[1790]: Removed session 22. Jul 6 23:39:07.945493 systemd[1]: Started sshd@21-147.75.203.59:22-139.178.89.65:42846.service - OpenSSH per-connection server daemon (139.178.89.65:42846). 
Jul 6 23:39:07.979368 sshd[5057]: Accepted publickey for core from 139.178.89.65 port 42846 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:07.980080 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:07.982992 systemd-logind[1790]: New session 23 of user core. Jul 6 23:39:07.994007 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:39:08.088746 sshd[5059]: Connection closed by 139.178.89.65 port 42846 Jul 6 23:39:08.088957 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:08.090606 systemd[1]: sshd@21-147.75.203.59:22-139.178.89.65:42846.service: Deactivated successfully. Jul 6 23:39:08.091695 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:39:08.092532 systemd-logind[1790]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:39:08.093321 systemd-logind[1790]: Removed session 23. Jul 6 23:39:13.120760 systemd[1]: Started sshd@22-147.75.203.59:22-139.178.89.65:57844.service - OpenSSH per-connection server daemon (139.178.89.65:57844). Jul 6 23:39:13.152984 sshd[5083]: Accepted publickey for core from 139.178.89.65 port 57844 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:13.153616 sshd-session[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:13.156182 systemd-logind[1790]: New session 24 of user core. Jul 6 23:39:13.172835 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:39:13.256312 sshd[5085]: Connection closed by 139.178.89.65 port 57844 Jul 6 23:39:13.256506 sshd-session[5083]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:13.258183 systemd[1]: sshd@22-147.75.203.59:22-139.178.89.65:57844.service: Deactivated successfully. Jul 6 23:39:13.259188 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:39:13.259994 systemd-logind[1790]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:39:13.260496 systemd-logind[1790]: Removed session 24. Jul 6 23:39:18.287283 systemd[1]: Started sshd@23-147.75.203.59:22-139.178.89.65:57850.service - OpenSSH per-connection server daemon (139.178.89.65:57850). Jul 6 23:39:18.322937 sshd[5110]: Accepted publickey for core from 139.178.89.65 port 57850 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:18.323586 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:18.326126 systemd-logind[1790]: New session 25 of user core. Jul 6 23:39:18.349027 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:39:18.446746 sshd[5112]: Connection closed by 139.178.89.65 port 57850 Jul 6 23:39:18.446942 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:18.471972 systemd[1]: sshd@23-147.75.203.59:22-139.178.89.65:57850.service: Deactivated successfully. Jul 6 23:39:18.476761 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:39:18.480521 systemd-logind[1790]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:39:18.497342 systemd[1]: Started sshd@24-147.75.203.59:22-139.178.89.65:57858.service - OpenSSH per-connection server daemon (139.178.89.65:57858). Jul 6 23:39:18.500681 systemd-logind[1790]: Removed session 25. 
Jul 6 23:39:18.561989 sshd[5135]: Accepted publickey for core from 139.178.89.65 port 57858 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:18.562705 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:18.565597 systemd-logind[1790]: New session 26 of user core. Jul 6 23:39:18.578823 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 6 23:39:19.925689 containerd[1805]: time="2025-07-06T23:39:19.925663078Z" level=info msg="StopContainer for \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\" with timeout 30 (s)" Jul 6 23:39:19.925927 containerd[1805]: time="2025-07-06T23:39:19.925874636Z" level=info msg="Stop container \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\" with signal terminated" Jul 6 23:39:19.930990 systemd[1]: cri-containerd-92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5.scope: Deactivated successfully. Jul 6 23:39:19.946156 containerd[1805]: time="2025-07-06T23:39:19.946110577Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:39:19.948938 containerd[1805]: time="2025-07-06T23:39:19.948921143Z" level=info msg="StopContainer for \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\" with timeout 2 (s)" Jul 6 23:39:19.949070 containerd[1805]: time="2025-07-06T23:39:19.949058623Z" level=info msg="Stop container \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\" with signal terminated" Jul 6 23:39:19.949089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5-rootfs.mount: Deactivated successfully. Jul 6 23:39:19.952583 systemd-networkd[1717]: lxc_health: Link DOWN Jul 6 23:39:19.952586 systemd-networkd[1717]: lxc_health: Lost carrier Jul 6 23:39:19.952873 containerd[1805]: time="2025-07-06T23:39:19.952594660Z" level=info msg="shim disconnected" id=92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5 namespace=k8s.io Jul 6 23:39:19.952873 containerd[1805]: time="2025-07-06T23:39:19.952627663Z" level=warning msg="cleaning up after shim disconnected" id=92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5 namespace=k8s.io Jul 6 23:39:19.952873 containerd[1805]: time="2025-07-06T23:39:19.952632797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:39:19.960293 containerd[1805]: time="2025-07-06T23:39:19.960251656Z" level=info msg="StopContainer for \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\" returns successfully" Jul 6 23:39:19.961086 containerd[1805]: time="2025-07-06T23:39:19.961042705Z" level=info msg="StopPodSandbox for \"083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec\"" Jul 6 23:39:19.961086 containerd[1805]: time="2025-07-06T23:39:19.961062774Z" level=info msg="Container to stop \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:39:19.962486 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec-shm.mount: Deactivated successfully. 
Jul 6 23:39:19.970892 systemd[1]: cri-containerd-4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489.scope: Deactivated successfully. Jul 6 23:39:19.971057 systemd[1]: cri-containerd-4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489.scope: Consumed 6.508s CPU time, 169.3M memory peak, 120K read from disk, 13.3M written to disk. Jul 6 23:39:19.979992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489-rootfs.mount: Deactivated successfully. Jul 6 23:39:19.981844 containerd[1805]: time="2025-07-06T23:39:19.981772687Z" level=info msg="shim disconnected" id=4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489 namespace=k8s.io Jul 6 23:39:19.981842 systemd[1]: cri-containerd-083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec.scope: Deactivated successfully. Jul 6 23:39:19.982042 containerd[1805]: time="2025-07-06T23:39:19.981845679Z" level=warning msg="cleaning up after shim disconnected" id=4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489 namespace=k8s.io Jul 6 23:39:19.982042 containerd[1805]: time="2025-07-06T23:39:19.981852774Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:39:19.989166 containerd[1805]: time="2025-07-06T23:39:19.989113478Z" level=info msg="StopContainer for \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\" returns successfully" Jul 6 23:39:19.989409 containerd[1805]: time="2025-07-06T23:39:19.989398951Z" level=info msg="StopPodSandbox for \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\"" Jul 6 23:39:19.989437 containerd[1805]: time="2025-07-06T23:39:19.989418022Z" level=info msg="Container to stop \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:39:19.989463 containerd[1805]: time="2025-07-06T23:39:19.989438304Z" level=info msg="Container to stop \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:39:19.989463 containerd[1805]: time="2025-07-06T23:39:19.989443613Z" level=info msg="Container to stop \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:39:19.989463 containerd[1805]: time="2025-07-06T23:39:19.989448175Z" level=info msg="Container to stop \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:39:19.989463 containerd[1805]: time="2025-07-06T23:39:19.989452814Z" level=info msg="Container to stop \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:39:19.990672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97-shm.mount: Deactivated successfully. Jul 6 23:39:20.016897 systemd[1]: cri-containerd-dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97.scope: Deactivated successfully. 
Jul 6 23:39:20.027145 containerd[1805]: time="2025-07-06T23:39:20.027106787Z" level=info msg="shim disconnected" id=dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97 namespace=k8s.io Jul 6 23:39:20.027145 containerd[1805]: time="2025-07-06T23:39:20.027140591Z" level=warning msg="cleaning up after shim disconnected" id=dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97 namespace=k8s.io Jul 6 23:39:20.027145 containerd[1805]: time="2025-07-06T23:39:20.027148132Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:39:20.032912 containerd[1805]: time="2025-07-06T23:39:20.032871904Z" level=info msg="shim disconnected" id=083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec namespace=k8s.io Jul 6 23:39:20.032912 containerd[1805]: time="2025-07-06T23:39:20.032912374Z" level=warning msg="cleaning up after shim disconnected" id=083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec namespace=k8s.io Jul 6 23:39:20.033016 containerd[1805]: time="2025-07-06T23:39:20.032921632Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:39:20.033716 containerd[1805]: time="2025-07-06T23:39:20.033701134Z" level=info msg="TearDown network for sandbox \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" successfully" Jul 6 23:39:20.033745 containerd[1805]: time="2025-07-06T23:39:20.033716362Z" level=info msg="StopPodSandbox for \"dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97\" returns successfully" Jul 6 23:39:20.040254 containerd[1805]: time="2025-07-06T23:39:20.040208364Z" level=info msg="TearDown network for sandbox \"083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec\" successfully" Jul 6 23:39:20.040254 containerd[1805]: time="2025-07-06T23:39:20.040226183Z" level=info msg="StopPodSandbox for \"083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec\" returns successfully" Jul 6 23:39:20.183176 kubelet[3124]: I0706 23:39:20.182910 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-etc-cni-netd\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.183176 kubelet[3124]: I0706 23:39:20.183027 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1d12d00-bfab-465e-bb69-f2d25979c176-clustermesh-secrets\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.183176 kubelet[3124]: I0706 23:39:20.183078 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-bpf-maps\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.183176 kubelet[3124]: I0706 23:39:20.183049 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.183176 kubelet[3124]: I0706 23:39:20.183135 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmp9w\" (UniqueName: \"kubernetes.io/projected/3b0cc778-4007-44d2-9744-7444c8ab67da-kube-api-access-dmp9w\") pod \"3b0cc778-4007-44d2-9744-7444c8ab67da\" (UID: \"3b0cc778-4007-44d2-9744-7444c8ab67da\") " Jul 6 23:39:20.183176 kubelet[3124]: I0706 23:39:20.183187 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-lib-modules\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.185415 kubelet[3124]: I0706 23:39:20.183234 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-cgroup\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.185415 kubelet[3124]: I0706 23:39:20.183287 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-hostproc\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.185415 kubelet[3124]: I0706 23:39:20.183282 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.185415 kubelet[3124]: I0706 23:39:20.183343 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-config-path\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.185415 kubelet[3124]: I0706 23:39:20.183328 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.185415 kubelet[3124]: I0706 23:39:20.183395 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b0cc778-4007-44d2-9744-7444c8ab67da-cilium-config-path\") pod \"3b0cc778-4007-44d2-9744-7444c8ab67da\" (UID: \"3b0cc778-4007-44d2-9744-7444c8ab67da\") " Jul 6 23:39:20.186512 kubelet[3124]: I0706 23:39:20.183388 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.186512 kubelet[3124]: I0706 23:39:20.183439 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-hostproc" (OuterVolumeSpecName: "hostproc") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.186512 kubelet[3124]: I0706 23:39:20.183444 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-host-proc-sys-net\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.186512 kubelet[3124]: I0706 23:39:20.183503 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.186512 kubelet[3124]: I0706 23:39:20.183644 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cni-path\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.187461 kubelet[3124]: I0706 23:39:20.183764 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1d12d00-bfab-465e-bb69-f2d25979c176-hubble-tls\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.187461 kubelet[3124]: I0706 23:39:20.183840 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cni-path" (OuterVolumeSpecName: "cni-path") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.187461 kubelet[3124]: I0706 23:39:20.183852 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-run\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.187461 kubelet[3124]: I0706 23:39:20.183922 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.187461 kubelet[3124]: I0706 23:39:20.183982 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-xtables-lock\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.187461 kubelet[3124]: I0706 23:39:20.184060 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7rb8\" (UniqueName: \"kubernetes.io/projected/e1d12d00-bfab-465e-bb69-f2d25979c176-kube-api-access-b7rb8\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.188498 kubelet[3124]: I0706 23:39:20.184112 3124 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-host-proc-sys-kernel\") pod \"e1d12d00-bfab-465e-bb69-f2d25979c176\" (UID: \"e1d12d00-bfab-465e-bb69-f2d25979c176\") " Jul 6 23:39:20.188498 kubelet[3124]: I0706 23:39:20.184163 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.188498 kubelet[3124]: I0706 23:39:20.184227 3124 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-etc-cni-netd\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.188498 kubelet[3124]: I0706 23:39:20.184279 3124 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-bpf-maps\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.188498 kubelet[3124]: I0706 23:39:20.184291 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 6 23:39:20.188498 kubelet[3124]: I0706 23:39:20.184329 3124 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-lib-modules\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.189578 kubelet[3124]: I0706 23:39:20.184359 3124 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-cgroup\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.189578 kubelet[3124]: I0706 23:39:20.184385 3124 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-hostproc\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.189578 kubelet[3124]: I0706 23:39:20.184410 3124 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cni-path\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.189578 kubelet[3124]: I0706 23:39:20.184436 3124 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-host-proc-sys-net\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.189578 kubelet[3124]: I0706 23:39:20.184463 3124 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-run\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.190406 kubelet[3124]: I0706 23:39:20.189769 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b0cc778-4007-44d2-9744-7444c8ab67da-kube-api-access-dmp9w" (OuterVolumeSpecName: "kube-api-access-dmp9w") pod "3b0cc778-4007-44d2-9744-7444c8ab67da" (UID: "3b0cc778-4007-44d2-9744-7444c8ab67da"). InnerVolumeSpecName "kube-api-access-dmp9w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:39:20.190406 kubelet[3124]: I0706 23:39:20.190073 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d12d00-bfab-465e-bb69-f2d25979c176-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:39:20.190406 kubelet[3124]: I0706 23:39:20.190185 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d12d00-bfab-465e-bb69-f2d25979c176-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:39:20.191063 kubelet[3124]: I0706 23:39:20.190655 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d12d00-bfab-465e-bb69-f2d25979c176-kube-api-access-b7rb8" (OuterVolumeSpecName: "kube-api-access-b7rb8") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "kube-api-access-b7rb8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:39:20.191452 kubelet[3124]: I0706 23:39:20.191382 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b0cc778-4007-44d2-9744-7444c8ab67da-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b0cc778-4007-44d2-9744-7444c8ab67da" (UID: "3b0cc778-4007-44d2-9744-7444c8ab67da"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:39:20.193147 kubelet[3124]: I0706 23:39:20.193076 3124 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e1d12d00-bfab-465e-bb69-f2d25979c176" (UID: "e1d12d00-bfab-465e-bb69-f2d25979c176"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:39:20.248514 kubelet[3124]: I0706 23:39:20.248438 3124 scope.go:117] "RemoveContainer" containerID="92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5" Jul 6 23:39:20.251175 containerd[1805]: time="2025-07-06T23:39:20.251103384Z" level=info msg="RemoveContainer for \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\"" Jul 6 23:39:20.253698 systemd[1]: Removed slice kubepods-besteffort-pod3b0cc778_4007_44d2_9744_7444c8ab67da.slice - libcontainer container kubepods-besteffort-pod3b0cc778_4007_44d2_9744_7444c8ab67da.slice. Jul 6 23:39:20.254415 containerd[1805]: time="2025-07-06T23:39:20.254398244Z" level=info msg="RemoveContainer for \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\" returns successfully" Jul 6 23:39:20.254510 kubelet[3124]: I0706 23:39:20.254500 3124 scope.go:117] "RemoveContainer" containerID="92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5" Jul 6 23:39:20.254615 containerd[1805]: time="2025-07-06T23:39:20.254596442Z" level=error msg="ContainerStatus for \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\": not found" Jul 6 23:39:20.254698 kubelet[3124]: E0706 23:39:20.254658 3124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\": not found" containerID="92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5" Jul 6 23:39:20.254725 kubelet[3124]: I0706 23:39:20.254674 3124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5"} err="failed to get container status \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"92fccfe7a7353ce7bb3bd2335c2cbc3b1d5c71fa7514087a21711765b70f74a5\": not found" Jul 6 23:39:20.254725 kubelet[3124]: I0706 23:39:20.254714 3124 scope.go:117] "RemoveContainer" containerID="4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489" Jul 6 23:39:20.255145 containerd[1805]: time="2025-07-06T23:39:20.255133770Z" level=info msg="RemoveContainer for \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\"" Jul 6 23:39:20.255410 systemd[1]: Removed slice 
kubepods-burstable-pode1d12d00_bfab_465e_bb69_f2d25979c176.slice - libcontainer container kubepods-burstable-pode1d12d00_bfab_465e_bb69_f2d25979c176.slice. Jul 6 23:39:20.255467 systemd[1]: kubepods-burstable-pode1d12d00_bfab_465e_bb69_f2d25979c176.slice: Consumed 6.568s CPU time, 169.8M memory peak, 120K read from disk, 13.3M written to disk. Jul 6 23:39:20.256561 containerd[1805]: time="2025-07-06T23:39:20.256538311Z" level=info msg="RemoveContainer for \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\" returns successfully" Jul 6 23:39:20.256656 kubelet[3124]: I0706 23:39:20.256637 3124 scope.go:117] "RemoveContainer" containerID="0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6" Jul 6 23:39:20.257337 containerd[1805]: time="2025-07-06T23:39:20.257327221Z" level=info msg="RemoveContainer for \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\"" Jul 6 23:39:20.258448 containerd[1805]: time="2025-07-06T23:39:20.258437099Z" level=info msg="RemoveContainer for \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\" returns successfully" Jul 6 23:39:20.258540 kubelet[3124]: I0706 23:39:20.258518 3124 scope.go:117] "RemoveContainer" containerID="979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072" Jul 6 23:39:20.258967 containerd[1805]: time="2025-07-06T23:39:20.258953378Z" level=info msg="RemoveContainer for \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\"" Jul 6 23:39:20.260092 containerd[1805]: time="2025-07-06T23:39:20.260078476Z" level=info msg="RemoveContainer for \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\" returns successfully" Jul 6 23:39:20.260169 kubelet[3124]: I0706 23:39:20.260154 3124 scope.go:117] "RemoveContainer" containerID="ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9" Jul 6 23:39:20.260652 containerd[1805]: time="2025-07-06T23:39:20.260642025Z" level=info msg="RemoveContainer for \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\"" Jul 6 23:39:20.261811 containerd[1805]: time="2025-07-06T23:39:20.261801739Z" level=info msg="RemoveContainer for \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\" returns successfully" Jul 6 23:39:20.261897 kubelet[3124]: I0706 23:39:20.261888 3124 scope.go:117] "RemoveContainer" containerID="e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3" Jul 6 23:39:20.262339 containerd[1805]: time="2025-07-06T23:39:20.262328124Z" level=info msg="RemoveContainer for \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\"" Jul 6 23:39:20.263438 containerd[1805]: time="2025-07-06T23:39:20.263427923Z" level=info msg="RemoveContainer for \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\" returns successfully" Jul 6 23:39:20.263490 kubelet[3124]: I0706 23:39:20.263480 3124 scope.go:117] "RemoveContainer" containerID="4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489" Jul 6 23:39:20.263599 containerd[1805]: time="2025-07-06T23:39:20.263585100Z" level=error msg="ContainerStatus for \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\": not found" Jul 6 23:39:20.263680 kubelet[3124]: E0706 23:39:20.263669 3124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\": not found" containerID="4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489" Jul 6 23:39:20.263710 kubelet[3124]: I0706 23:39:20.263685 3124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489"} err="failed to get container status \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ef98b88294e70151c819d379ff773dec20a15c8e2732be392c4bb8adbb44489\": not found" Jul 6 23:39:20.263710 kubelet[3124]: I0706 23:39:20.263695 3124 scope.go:117] "RemoveContainer" containerID="0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6" Jul 6 23:39:20.263800 containerd[1805]: time="2025-07-06T23:39:20.263784953Z" level=error msg="ContainerStatus for \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\": not found" Jul 6 23:39:20.263861 kubelet[3124]: E0706 23:39:20.263850 3124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\": not found" containerID="0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6" Jul 6 23:39:20.263886 kubelet[3124]: I0706 23:39:20.263864 3124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6"} err="failed to get container status \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\": rpc error: code = NotFound desc = an error occurred when try to find container \"0872efaeed14c766860a71f1c5b3fb6e65d5d0e1726e14e8bd2a04579ede1ba6\": not found" Jul 6 23:39:20.263886 kubelet[3124]: I0706 23:39:20.263875 3124 scope.go:117] "RemoveContainer" containerID="979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072" Jul 6 23:39:20.263980 containerd[1805]: time="2025-07-06T23:39:20.263961905Z" level=error msg="ContainerStatus for \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\": not found" Jul 6 23:39:20.264034 kubelet[3124]: E0706 23:39:20.264025 3124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\": not found" containerID="979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072" Jul 6 23:39:20.264058 kubelet[3124]: I0706 23:39:20.264038 3124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072"} err="failed to get container status \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\": rpc error: code = NotFound desc = an error occurred when try to find container \"979d754844113be205570c5e04960a97ae9876aa75934312fa741f37a40f0072\": not found" Jul 6 23:39:20.264058 kubelet[3124]: I0706 23:39:20.264052 3124 scope.go:117] "RemoveContainer" 
containerID="ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9" Jul 6 23:39:20.264136 containerd[1805]: time="2025-07-06T23:39:20.264121711Z" level=error msg="ContainerStatus for \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\": not found" Jul 6 23:39:20.264190 kubelet[3124]: E0706 23:39:20.264169 3124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\": not found" containerID="ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9" Jul 6 23:39:20.264190 kubelet[3124]: I0706 23:39:20.264180 3124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9"} err="failed to get container status \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef3e1b6a473a91e4cfcad386adc786b6d7d18a59524d4be9d95adffd735a33b9\": not found" Jul 6 23:39:20.264190 kubelet[3124]: I0706 23:39:20.264189 3124 scope.go:117] "RemoveContainer" containerID="e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3" Jul 6 23:39:20.264264 containerd[1805]: time="2025-07-06T23:39:20.264252626Z" level=error msg="ContainerStatus for \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\": not found" Jul 6 23:39:20.264307 kubelet[3124]: E0706 23:39:20.264299 3124 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\": not found" containerID="e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3" Jul 6 23:39:20.264330 kubelet[3124]: I0706 23:39:20.264311 3124 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3"} err="failed to get container status \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4185cafba6c865a1c0c921fe37b10899031507b625f9b2e0fe2e7927002a9e3\": not found" Jul 6 23:39:20.285052 kubelet[3124]: I0706 23:39:20.285010 3124 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-host-proc-sys-kernel\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.285052 kubelet[3124]: I0706 23:39:20.285021 3124 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1d12d00-bfab-465e-bb69-f2d25979c176-clustermesh-secrets\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.285052 kubelet[3124]: I0706 23:39:20.285028 3124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmp9w\" (UniqueName: \"kubernetes.io/projected/3b0cc778-4007-44d2-9744-7444c8ab67da-kube-api-access-dmp9w\") on node 
\"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.285052 kubelet[3124]: I0706 23:39:20.285039 3124 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1d12d00-bfab-465e-bb69-f2d25979c176-cilium-config-path\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.285052 kubelet[3124]: I0706 23:39:20.285044 3124 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b0cc778-4007-44d2-9744-7444c8ab67da-cilium-config-path\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.285052 kubelet[3124]: I0706 23:39:20.285049 3124 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1d12d00-bfab-465e-bb69-f2d25979c176-xtables-lock\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.285052 kubelet[3124]: I0706 23:39:20.285055 3124 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b7rb8\" (UniqueName: \"kubernetes.io/projected/e1d12d00-bfab-465e-bb69-f2d25979c176-kube-api-access-b7rb8\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.285190 kubelet[3124]: I0706 23:39:20.285061 3124 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1d12d00-bfab-465e-bb69-f2d25979c176-hubble-tls\") on node \"ci-4230.2.1-a-901fa91dbf\" DevicePath \"\"" Jul 6 23:39:20.934756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-083394c4238eb4aaa099271d7a6676f6b14b4e3087089b8c13690f97114f84ec-rootfs.mount: Deactivated successfully. Jul 6 23:39:20.934813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dda55fad8857629e1f0efc8ab3382b7883605b28e3d03df7cbd2128e3e4c0e97-rootfs.mount: Deactivated successfully. Jul 6 23:39:20.934851 systemd[1]: var-lib-kubelet-pods-3b0cc778\x2d4007\x2d44d2\x2d9744\x2d7444c8ab67da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmp9w.mount: Deactivated successfully. Jul 6 23:39:20.934890 systemd[1]: var-lib-kubelet-pods-e1d12d00\x2dbfab\x2d465e\x2dbb69\x2df2d25979c176-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7rb8.mount: Deactivated successfully. Jul 6 23:39:20.934927 systemd[1]: var-lib-kubelet-pods-e1d12d00\x2dbfab\x2d465e\x2dbb69\x2df2d25979c176-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:39:20.934965 systemd[1]: var-lib-kubelet-pods-e1d12d00\x2dbfab\x2d465e\x2dbb69\x2df2d25979c176-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:39:21.109710 kubelet[3124]: I0706 23:39:21.109582 3124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b0cc778-4007-44d2-9744-7444c8ab67da" path="/var/lib/kubelet/pods/3b0cc778-4007-44d2-9744-7444c8ab67da/volumes" Jul 6 23:39:21.110944 kubelet[3124]: I0706 23:39:21.110862 3124 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d12d00-bfab-465e-bb69-f2d25979c176" path="/var/lib/kubelet/pods/e1d12d00-bfab-465e-bb69-f2d25979c176/volumes" Jul 6 23:39:21.890110 sshd[5138]: Connection closed by 139.178.89.65 port 57858 Jul 6 23:39:21.890971 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:21.917529 systemd[1]: sshd@24-147.75.203.59:22-139.178.89.65:57858.service: Deactivated successfully. Jul 6 23:39:21.918484 systemd[1]: session-26.scope: Deactivated successfully. 
Jul 6 23:39:21.919170 systemd-logind[1790]: Session 26 logged out. Waiting for processes to exit. Jul 6 23:39:21.935866 systemd[1]: Started sshd@25-147.75.203.59:22-139.178.89.65:38506.service - OpenSSH per-connection server daemon (139.178.89.65:38506). Jul 6 23:39:21.936679 systemd-logind[1790]: Removed session 26. Jul 6 23:39:21.967968 sshd[5310]: Accepted publickey for core from 139.178.89.65 port 38506 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:21.968605 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:21.971392 systemd-logind[1790]: New session 27 of user core. Jul 6 23:39:21.990772 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 6 23:39:22.327443 sshd[5313]: Connection closed by 139.178.89.65 port 38506 Jul 6 23:39:22.327642 sshd-session[5310]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:22.332130 kubelet[3124]: I0706 23:39:22.332111 3124 memory_manager.go:355] "RemoveStaleState removing state" podUID="e1d12d00-bfab-465e-bb69-f2d25979c176" containerName="cilium-agent" Jul 6 23:39:22.332130 kubelet[3124]: I0706 23:39:22.332126 3124 memory_manager.go:355] "RemoveStaleState removing state" podUID="3b0cc778-4007-44d2-9744-7444c8ab67da" containerName="cilium-operator" Jul 6 23:39:22.336384 systemd[1]: sshd@25-147.75.203.59:22-139.178.89.65:38506.service: Deactivated successfully. Jul 6 23:39:22.337465 systemd[1]: session-27.scope: Deactivated successfully. Jul 6 23:39:22.338260 systemd-logind[1790]: Session 27 logged out. Waiting for processes to exit. Jul 6 23:39:22.339321 systemd[1]: Started sshd@26-147.75.203.59:22-139.178.89.65:38522.service - OpenSSH per-connection server daemon (139.178.89.65:38522). Jul 6 23:39:22.340423 systemd-logind[1790]: Removed session 27. Jul 6 23:39:22.342838 systemd[1]: Created slice kubepods-burstable-pod687e86c4_26d6_4895_8efd_5c44ef96f101.slice - libcontainer container kubepods-burstable-pod687e86c4_26d6_4895_8efd_5c44ef96f101.slice. Jul 6 23:39:22.373994 sshd[5335]: Accepted publickey for core from 139.178.89.65 port 38522 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:22.374658 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:22.377372 systemd-logind[1790]: New session 28 of user core. Jul 6 23:39:22.394827 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 6 23:39:22.401528 kubelet[3124]: I0706 23:39:22.401464 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/687e86c4-26d6-4895-8efd-5c44ef96f101-hubble-tls\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401528 kubelet[3124]: I0706 23:39:22.401488 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-cilium-cgroup\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401528 kubelet[3124]: I0706 23:39:22.401502 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-host-proc-sys-kernel\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401528 kubelet[3124]: I0706 23:39:22.401512 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-host-proc-sys-net\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401635 kubelet[3124]: I0706 23:39:22.401537 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/687e86c4-26d6-4895-8efd-5c44ef96f101-cilium-ipsec-secrets\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401635 kubelet[3124]: I0706 23:39:22.401558 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-cilium-run\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401635 kubelet[3124]: I0706 23:39:22.401571 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-hostproc\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401635 kubelet[3124]: I0706 23:39:22.401585 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-etc-cni-netd\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401635 kubelet[3124]: I0706 23:39:22.401595 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/687e86c4-26d6-4895-8efd-5c44ef96f101-cilium-config-path\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401734 kubelet[3124]: I0706 23:39:22.401640 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-bpf-maps\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401734 kubelet[3124]: I0706 23:39:22.401657 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-lib-modules\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401734 kubelet[3124]: I0706 23:39:22.401670 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-xtables-lock\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401734 kubelet[3124]: I0706 23:39:22.401680 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/687e86c4-26d6-4895-8efd-5c44ef96f101-cni-path\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401734 kubelet[3124]: I0706 23:39:22.401690 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w22f6\" (UniqueName: \"kubernetes.io/projected/687e86c4-26d6-4895-8efd-5c44ef96f101-kube-api-access-w22f6\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.401734 kubelet[3124]: I0706 23:39:22.401702 3124 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/687e86c4-26d6-4895-8efd-5c44ef96f101-clustermesh-secrets\") pod \"cilium-8shpg\" (UID: \"687e86c4-26d6-4895-8efd-5c44ef96f101\") " pod="kube-system/cilium-8shpg" Jul 6 23:39:22.443082 sshd[5339]: Connection closed by 139.178.89.65 port 38522 Jul 6 23:39:22.443509 sshd-session[5335]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:22.467002 systemd[1]: sshd@26-147.75.203.59:22-139.178.89.65:38522.service: Deactivated successfully. Jul 6 23:39:22.471291 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:39:22.473542 systemd-logind[1790]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:39:22.493387 systemd[1]: Started sshd@27-147.75.203.59:22-139.178.89.65:38534.service - OpenSSH per-connection server daemon (139.178.89.65:38534). Jul 6 23:39:22.496235 systemd-logind[1790]: Removed session 28. Jul 6 23:39:22.538015 sshd[5345]: Accepted publickey for core from 139.178.89.65 port 38534 ssh2: RSA SHA256:xm5lCXeiJfraDc9zsTFlg1rz9X9COHhjINb5RvTltOM Jul 6 23:39:22.538711 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:39:22.541404 systemd-logind[1790]: New session 29 of user core. Jul 6 23:39:22.556837 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 6 23:39:22.644884 containerd[1805]: time="2025-07-06T23:39:22.644767865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8shpg,Uid:687e86c4-26d6-4895-8efd-5c44ef96f101,Namespace:kube-system,Attempt:0,}" Jul 6 23:39:22.653629 containerd[1805]: time="2025-07-06T23:39:22.653431860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:39:22.653629 containerd[1805]: time="2025-07-06T23:39:22.653591593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:39:22.653629 containerd[1805]: time="2025-07-06T23:39:22.653601313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:39:22.653744 containerd[1805]: time="2025-07-06T23:39:22.653644701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:39:22.677835 systemd[1]: Started cri-containerd-bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1.scope - libcontainer container bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1. Jul 6 23:39:22.688443 containerd[1805]: time="2025-07-06T23:39:22.688421509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8shpg,Uid:687e86c4-26d6-4895-8efd-5c44ef96f101,Namespace:kube-system,Attempt:0,} returns sandbox id \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\"" Jul 6 23:39:22.689669 containerd[1805]: time="2025-07-06T23:39:22.689625478Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:39:22.706665 containerd[1805]: time="2025-07-06T23:39:22.706620621Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aa575d7a24a589797a54718d16a5a14d5c1e8dceed91f0dd5900f2004f1d29a6\"" Jul 6 23:39:22.706876 containerd[1805]: time="2025-07-06T23:39:22.706842995Z" level=info msg="StartContainer for \"aa575d7a24a589797a54718d16a5a14d5c1e8dceed91f0dd5900f2004f1d29a6\"" Jul 6 23:39:22.728785 systemd[1]: Started cri-containerd-aa575d7a24a589797a54718d16a5a14d5c1e8dceed91f0dd5900f2004f1d29a6.scope - libcontainer container aa575d7a24a589797a54718d16a5a14d5c1e8dceed91f0dd5900f2004f1d29a6. Jul 6 23:39:22.743534 containerd[1805]: time="2025-07-06T23:39:22.743499323Z" level=info msg="StartContainer for \"aa575d7a24a589797a54718d16a5a14d5c1e8dceed91f0dd5900f2004f1d29a6\" returns successfully" Jul 6 23:39:22.750107 systemd[1]: cri-containerd-aa575d7a24a589797a54718d16a5a14d5c1e8dceed91f0dd5900f2004f1d29a6.scope: Deactivated successfully. 
Jul 6 23:39:22.773652 containerd[1805]: time="2025-07-06T23:39:22.773585890Z" level=info msg="shim disconnected" id=aa575d7a24a589797a54718d16a5a14d5c1e8dceed91f0dd5900f2004f1d29a6 namespace=k8s.io Jul 6 23:39:22.773652 containerd[1805]: time="2025-07-06T23:39:22.773621209Z" level=warning msg="cleaning up after shim disconnected" id=aa575d7a24a589797a54718d16a5a14d5c1e8dceed91f0dd5900f2004f1d29a6 namespace=k8s.io Jul 6 23:39:22.773652 containerd[1805]: time="2025-07-06T23:39:22.773626642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:39:23.262901 containerd[1805]: time="2025-07-06T23:39:23.262872080Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:39:23.267173 containerd[1805]: time="2025-07-06T23:39:23.267156761Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8db1c314f5b6be2d506ab38555bd034c9f73ca427e8be95f18c7b3655aac39fb\"" Jul 6 23:39:23.267460 containerd[1805]: time="2025-07-06T23:39:23.267447211Z" level=info msg="StartContainer for \"8db1c314f5b6be2d506ab38555bd034c9f73ca427e8be95f18c7b3655aac39fb\"" Jul 6 23:39:23.297675 systemd[1]: Started cri-containerd-8db1c314f5b6be2d506ab38555bd034c9f73ca427e8be95f18c7b3655aac39fb.scope - libcontainer container 8db1c314f5b6be2d506ab38555bd034c9f73ca427e8be95f18c7b3655aac39fb. Jul 6 23:39:23.314988 containerd[1805]: time="2025-07-06T23:39:23.314951576Z" level=info msg="StartContainer for \"8db1c314f5b6be2d506ab38555bd034c9f73ca427e8be95f18c7b3655aac39fb\" returns successfully" Jul 6 23:39:23.321617 systemd[1]: cri-containerd-8db1c314f5b6be2d506ab38555bd034c9f73ca427e8be95f18c7b3655aac39fb.scope: Deactivated successfully. 
Jul 6 23:39:23.355890 containerd[1805]: time="2025-07-06T23:39:23.355845847Z" level=info msg="shim disconnected" id=8db1c314f5b6be2d506ab38555bd034c9f73ca427e8be95f18c7b3655aac39fb namespace=k8s.io Jul 6 23:39:23.355890 containerd[1805]: time="2025-07-06T23:39:23.355885138Z" level=warning msg="cleaning up after shim disconnected" id=8db1c314f5b6be2d506ab38555bd034c9f73ca427e8be95f18c7b3655aac39fb namespace=k8s.io Jul 6 23:39:23.355890 containerd[1805]: time="2025-07-06T23:39:23.355892024Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:39:24.234790 kubelet[3124]: E0706 23:39:24.234738 3124 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:39:24.266080 containerd[1805]: time="2025-07-06T23:39:24.266045370Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:39:24.272851 containerd[1805]: time="2025-07-06T23:39:24.272791153Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19\"" Jul 6 23:39:24.273241 containerd[1805]: time="2025-07-06T23:39:24.273191241Z" level=info msg="StartContainer for \"20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19\"" Jul 6 23:39:24.297735 systemd[1]: Started cri-containerd-20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19.scope - libcontainer container 20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19. Jul 6 23:39:24.312241 containerd[1805]: time="2025-07-06T23:39:24.312217280Z" level=info msg="StartContainer for \"20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19\" returns successfully" Jul 6 23:39:24.313463 systemd[1]: cri-containerd-20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19.scope: Deactivated successfully. Jul 6 23:39:24.337969 containerd[1805]: time="2025-07-06T23:39:24.337917320Z" level=info msg="shim disconnected" id=20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19 namespace=k8s.io Jul 6 23:39:24.337969 containerd[1805]: time="2025-07-06T23:39:24.337965327Z" level=warning msg="cleaning up after shim disconnected" id=20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19 namespace=k8s.io Jul 6 23:39:24.337969 containerd[1805]: time="2025-07-06T23:39:24.337973483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:39:24.513601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20789a8bbcc2e021cfb61685f43d56c0e5ac5d753b54a3b9744ae99874dcef19-rootfs.mount: Deactivated successfully. 
Jul 6 23:39:25.276938 containerd[1805]: time="2025-07-06T23:39:25.276855871Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:39:25.284241 containerd[1805]: time="2025-07-06T23:39:25.284195494Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226\"" Jul 6 23:39:25.284534 containerd[1805]: time="2025-07-06T23:39:25.284515582Z" level=info msg="StartContainer for \"27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226\"" Jul 6 23:39:25.315750 systemd[1]: Started cri-containerd-27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226.scope - libcontainer container 27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226. Jul 6 23:39:25.331710 systemd[1]: cri-containerd-27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226.scope: Deactivated successfully. Jul 6 23:39:25.332502 containerd[1805]: time="2025-07-06T23:39:25.332479094Z" level=info msg="StartContainer for \"27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226\" returns successfully" Jul 6 23:39:25.344520 containerd[1805]: time="2025-07-06T23:39:25.344489509Z" level=info msg="shim disconnected" id=27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226 namespace=k8s.io Jul 6 23:39:25.344520 containerd[1805]: time="2025-07-06T23:39:25.344518392Z" level=warning msg="cleaning up after shim disconnected" id=27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226 namespace=k8s.io Jul 6 23:39:25.344690 containerd[1805]: time="2025-07-06T23:39:25.344528849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:39:25.514007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27264840b49db0a56fcabd89a6869561d43c734ecc86f6f12ad6a6c9d6b35226-rootfs.mount: Deactivated successfully. Jul 6 23:39:26.277500 containerd[1805]: time="2025-07-06T23:39:26.277463992Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:39:26.284631 containerd[1805]: time="2025-07-06T23:39:26.284567033Z" level=info msg="CreateContainer within sandbox \"bedaac4fdaff4f83fa8a2fd710ad9d4c018488f1c04da557e7d75701ad603eb1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2690ce8d567999f22f02ed3f15da252d268cc6d2e6029daf42cd98dbeae23912\"" Jul 6 23:39:26.285053 containerd[1805]: time="2025-07-06T23:39:26.285024737Z" level=info msg="StartContainer for \"2690ce8d567999f22f02ed3f15da252d268cc6d2e6029daf42cd98dbeae23912\"" Jul 6 23:39:26.310676 systemd[1]: Started cri-containerd-2690ce8d567999f22f02ed3f15da252d268cc6d2e6029daf42cd98dbeae23912.scope - libcontainer container 2690ce8d567999f22f02ed3f15da252d268cc6d2e6029daf42cd98dbeae23912. 
Jul 6 23:39:26.326371 containerd[1805]: time="2025-07-06T23:39:26.326340591Z" level=info msg="StartContainer for \"2690ce8d567999f22f02ed3f15da252d268cc6d2e6029daf42cd98dbeae23912\" returns successfully" Jul 6 23:39:26.515534 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 6 23:39:26.931752 kubelet[3124]: I0706 23:39:26.931710 3124 setters.go:602] "Node became not ready" node="ci-4230.2.1-a-901fa91dbf" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:39:26Z","lastTransitionTime":"2025-07-06T23:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:39:27.317134 kubelet[3124]: I0706 23:39:27.317046 3124 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8shpg" podStartSLOduration=5.317013036 podStartE2EDuration="5.317013036s" podCreationTimestamp="2025-07-06 23:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:39:27.317014418 +0000 UTC m=+428.275831725" watchObservedRunningTime="2025-07-06 23:39:27.317013036 +0000 UTC m=+428.275830299" Jul 6 23:39:29.854032 systemd-networkd[1717]: lxc_health: Link UP Jul 6 23:39:29.854406 systemd-networkd[1717]: lxc_health: Gained carrier Jul 6 23:39:31.364675 systemd-networkd[1717]: lxc_health: Gained IPv6LL Jul 6 23:39:35.069648 sshd[5352]: Connection closed by 139.178.89.65 port 38534 Jul 6 23:39:35.069818 sshd-session[5345]: pam_unix(sshd:session): session closed for user core Jul 6 23:39:35.071364 systemd[1]: sshd@27-147.75.203.59:22-139.178.89.65:38534.service: Deactivated successfully. Jul 6 23:39:35.072424 systemd[1]: session-29.scope: Deactivated successfully. Jul 6 23:39:35.073294 systemd-logind[1790]: Session 29 logged out. Waiting for processes to exit. Jul 6 23:39:35.073886 systemd-logind[1790]: Removed session 29.