Jul 7 00:42:20.904467 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025 Jul 7 00:42:20.904482 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:42:20.904489 kernel: BIOS-provided physical RAM map: Jul 7 00:42:20.904493 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jul 7 00:42:20.904497 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jul 7 00:42:20.904501 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jul 7 00:42:20.904506 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jul 7 00:42:20.904510 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jul 7 00:42:20.904514 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000825bafff] usable Jul 7 00:42:20.904519 kernel: BIOS-e820: [mem 0x00000000825bb000-0x00000000825bbfff] ACPI NVS Jul 7 00:42:20.904523 kernel: BIOS-e820: [mem 0x00000000825bc000-0x00000000825bcfff] reserved Jul 7 00:42:20.904527 kernel: BIOS-e820: [mem 0x00000000825bd000-0x000000008afcdfff] usable Jul 7 00:42:20.904531 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved Jul 7 00:42:20.904536 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable Jul 7 00:42:20.904541 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS Jul 7 00:42:20.904547 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved Jul 7 00:42:20.904551 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jul 7 00:42:20.904556 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jul 7 00:42:20.904560 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 7 00:42:20.904565 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jul 7 00:42:20.904570 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jul 7 00:42:20.904574 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 7 00:42:20.904579 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jul 7 00:42:20.904584 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jul 7 00:42:20.904588 kernel: NX (Execute Disable) protection: active Jul 7 00:42:20.904593 kernel: APIC: Static calls initialized Jul 7 00:42:20.904599 kernel: SMBIOS 3.2.1 present. 
Jul 7 00:42:20.904603 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024 Jul 7 00:42:20.904608 kernel: DMI: Memory slots populated: 2/4 Jul 7 00:42:20.904613 kernel: tsc: Detected 3400.000 MHz processor Jul 7 00:42:20.904617 kernel: tsc: Detected 3399.906 MHz TSC Jul 7 00:42:20.904622 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 00:42:20.904627 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 00:42:20.904632 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jul 7 00:42:20.904637 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jul 7 00:42:20.904641 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 00:42:20.904647 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jul 7 00:42:20.904652 kernel: Using GB pages for direct mapping Jul 7 00:42:20.904657 kernel: ACPI: Early table checksum verification disabled Jul 7 00:42:20.904662 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jul 7 00:42:20.904668 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jul 7 00:42:20.904673 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013) Jul 7 00:42:20.904678 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jul 7 00:42:20.904684 kernel: ACPI: FACS 0x000000008C66DF80 000040 Jul 7 00:42:20.904689 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013) Jul 7 00:42:20.904694 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013) Jul 7 00:42:20.904699 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jul 7 00:42:20.904704 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jul 7 00:42:20.904709 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jul 7 00:42:20.904714 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jul 7 00:42:20.904720 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jul 7 00:42:20.904725 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jul 7 00:42:20.904730 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:42:20.904735 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jul 7 00:42:20.904740 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jul 7 00:42:20.904745 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:42:20.904750 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:42:20.904756 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jul 7 00:42:20.904761 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jul 7 00:42:20.904767 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:42:20.904772 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jul 7 00:42:20.904777 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jul 7 00:42:20.904782 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013) Jul 7 00:42:20.904787 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jul 7 00:42:20.904792 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jul 7 00:42:20.904797 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jul 7 00:42:20.904802 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013) Jul 7 00:42:20.904808 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jul 7 00:42:20.904813 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jul 7 00:42:20.904818 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jul 7 00:42:20.904823 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jul 7 00:42:20.904828 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jul 7 00:42:20.904833 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703] Jul 7 00:42:20.904838 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed] Jul 7 00:42:20.904843 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] Jul 7 00:42:20.904848 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833] Jul 7 00:42:20.904854 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b] Jul 7 00:42:20.904859 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b] Jul 7 00:42:20.904864 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b] Jul 7 00:42:20.904869 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0] Jul 7 00:42:20.904874 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3] Jul 7 00:42:20.904879 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d] Jul 7 00:42:20.904884 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba] Jul 7 00:42:20.904889 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7] Jul 7 00:42:20.904894 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5] Jul 7 00:42:20.904899 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e] Jul 7 00:42:20.904904 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1] Jul 7 00:42:20.904909 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b] Jul 7 00:42:20.904914 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d] Jul 7 00:42:20.904919 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041] Jul 7 00:42:20.904924 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b] Jul 7 00:42:20.904929 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3] Jul 7 00:42:20.904934 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e] Jul 7 00:42:20.904939 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf] Jul 7 00:42:20.904945 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3] Jul 7 00:42:20.904950 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b] Jul 7 00:42:20.904955 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe] Jul 7 00:42:20.904960 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7] Jul 7 00:42:20.904965 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17] Jul 7 00:42:20.904970 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47] Jul 7 00:42:20.904975 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77] Jul 7 00:42:20.904980 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3] Jul 7 00:42:20.904984 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359] Jul 7 00:42:20.904990 kernel: No NUMA configuration found Jul 7 00:42:20.904995 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jul 7 00:42:20.905000 kernel: NODE_DATA(0) allocated [mem 0x86eff8dc0-0x86effffff] Jul 7 00:42:20.905005 kernel: Zone ranges: Jul 7 00:42:20.905010 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 00:42:20.905015 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 7 00:42:20.905020 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Jul 7 00:42:20.905025 kernel: Device empty Jul 7 00:42:20.905030 kernel: Movable zone start for each node Jul 7 00:42:20.905035 kernel: Early memory node ranges Jul 7 00:42:20.905041 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jul 7 00:42:20.905046 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jul 7 00:42:20.905051 kernel: node 0: [mem 0x0000000040400000-0x00000000825bafff] Jul 7 00:42:20.905056 kernel: node 0: [mem 0x00000000825bd000-0x000000008afcdfff] Jul 7 00:42:20.905064 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] Jul 7 00:42:20.905072 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jul 7 00:42:20.905102 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jul 7 00:42:20.905108 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jul 7 00:42:20.905113 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 00:42:20.905135 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jul 7 00:42:20.905141 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 7 00:42:20.905146 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jul 7 00:42:20.905151 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jul 7 00:42:20.905156 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges Jul 7 00:42:20.905162 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jul 7 00:42:20.905167 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jul 7 00:42:20.905172 kernel: ACPI: PM-Timer IO Port: 0x1808 Jul 7 00:42:20.905179 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 7 00:42:20.905184 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 7 00:42:20.905189 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 7 00:42:20.905194 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 7 00:42:20.905200 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 7 00:42:20.905205 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 7 00:42:20.905210 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 7 00:42:20.905215 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 7 00:42:20.905221 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 7 00:42:20.905227 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 7 00:42:20.905232 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 7 00:42:20.905237 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 7 00:42:20.905242 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 7 00:42:20.905248 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 7 00:42:20.905253 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 7 00:42:20.905258 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 7 00:42:20.905263 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jul 7 00:42:20.905269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 7 00:42:20.905275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 00:42:20.905281 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 00:42:20.905286 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 00:42:20.905291 kernel: TSC deadline timer available Jul 7 00:42:20.905296 kernel: CPU topo: Max. logical packages: 1 Jul 7 00:42:20.905302 kernel: CPU topo: Max. 
logical dies: 1 Jul 7 00:42:20.905307 kernel: CPU topo: Max. dies per package: 1 Jul 7 00:42:20.905312 kernel: CPU topo: Max. threads per core: 2 Jul 7 00:42:20.905317 kernel: CPU topo: Num. cores per package: 8 Jul 7 00:42:20.905323 kernel: CPU topo: Num. threads per package: 16 Jul 7 00:42:20.905329 kernel: CPU topo: Allowing 16 present CPUs plus 0 hotplug CPUs Jul 7 00:42:20.905334 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Jul 7 00:42:20.905340 kernel: Booting paravirtualized kernel on bare hardware Jul 7 00:42:20.905345 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 00:42:20.905351 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 7 00:42:20.905356 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jul 7 00:42:20.905361 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jul 7 00:42:20.905366 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 7 00:42:20.905372 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:42:20.905379 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 00:42:20.905384 kernel: random: crng init done Jul 7 00:42:20.905390 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jul 7 00:42:20.905395 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jul 7 00:42:20.905400 kernel: Fallback order for Node 0: 0 Jul 7 00:42:20.905405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8363246 Jul 7 00:42:20.905411 kernel: Policy zone: Normal Jul 7 00:42:20.905416 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:42:20.905422 kernel: software IO TLB: area num 16. Jul 7 00:42:20.905428 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 7 00:42:20.905433 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 00:42:20.905438 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 00:42:20.905443 kernel: Dynamic Preempt: voluntary Jul 7 00:42:20.905449 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 00:42:20.905455 kernel: rcu: RCU event tracing is enabled. Jul 7 00:42:20.905460 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 7 00:42:20.905466 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:42:20.905472 kernel: Rude variant of Tasks RCU enabled. Jul 7 00:42:20.905477 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:42:20.905482 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 00:42:20.905488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 7 00:42:20.905493 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 00:42:20.905498 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 00:42:20.905504 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jul 7 00:42:20.905509 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jul 7 00:42:20.905515 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 00:42:20.905521 kernel: Console: colour VGA+ 80x25 Jul 7 00:42:20.905526 kernel: printk: legacy console [tty0] enabled Jul 7 00:42:20.905531 kernel: printk: legacy console [ttyS1] enabled Jul 7 00:42:20.905537 kernel: ACPI: Core revision 20240827 Jul 7 00:42:20.905542 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Jul 7 00:42:20.905548 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 00:42:20.905553 kernel: DMAR: Host address width 39 Jul 7 00:42:20.905558 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jul 7 00:42:20.905564 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jul 7 00:42:20.905570 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff Jul 7 00:42:20.905575 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jul 7 00:42:20.905581 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jul 7 00:42:20.905586 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jul 7 00:42:20.905591 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jul 7 00:42:20.905596 kernel: x2apic enabled Jul 7 00:42:20.905602 kernel: APIC: Switched APIC routing to: cluster x2apic Jul 7 00:42:20.905607 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 7 00:42:20.905613 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jul 7 00:42:20.905619 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jul 7 00:42:20.905624 kernel: CPU0: Thermal monitoring enabled (TM1) Jul 7 00:42:20.905630 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 7 00:42:20.905635 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 7 00:42:20.905640 kernel: process: using mwait in idle threads Jul 7 00:42:20.905646 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 00:42:20.905651 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 7 00:42:20.905657 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 7 00:42:20.905662 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 7 00:42:20.905668 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 7 00:42:20.905674 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 00:42:20.905679 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 00:42:20.905684 kernel: TAA: Mitigation: TSX disabled Jul 7 00:42:20.905690 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jul 7 00:42:20.905695 kernel: SRBDS: Mitigation: Microcode Jul 7 00:42:20.905700 kernel: GDS: Mitigation: Microcode Jul 7 00:42:20.905706 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 00:42:20.905711 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 00:42:20.905717 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 00:42:20.905722 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 00:42:20.905728 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 7 00:42:20.905733 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 7 
00:42:20.905738 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 00:42:20.905744 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 7 00:42:20.905749 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 7 00:42:20.905754 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jul 7 00:42:20.905760 kernel: Freeing SMP alternatives memory: 32K Jul 7 00:42:20.905766 kernel: pid_max: default: 32768 minimum: 301 Jul 7 00:42:20.905771 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 00:42:20.905777 kernel: landlock: Up and running. Jul 7 00:42:20.905782 kernel: SELinux: Initializing. Jul 7 00:42:20.905788 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 00:42:20.905793 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 00:42:20.905798 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 7 00:42:20.905804 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jul 7 00:42:20.905809 kernel: ... version: 4 Jul 7 00:42:20.905815 kernel: ... bit width: 48 Jul 7 00:42:20.905821 kernel: ... generic registers: 4 Jul 7 00:42:20.905826 kernel: ... value mask: 0000ffffffffffff Jul 7 00:42:20.905831 kernel: ... max period: 00007fffffffffff Jul 7 00:42:20.905837 kernel: ... fixed-purpose events: 3 Jul 7 00:42:20.905842 kernel: ... event mask: 000000070000000f Jul 7 00:42:20.905847 kernel: signal: max sigframe size: 2032 Jul 7 00:42:20.905853 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jul 7 00:42:20.905858 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:42:20.905864 kernel: rcu: Max phase no-delay instances is 400. Jul 7 00:42:20.905870 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jul 7 00:42:20.905875 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jul 7 00:42:20.905880 kernel: smp: Bringing up secondary CPUs ... Jul 7 00:42:20.905886 kernel: smpboot: x86: Booting SMP configuration: Jul 7 00:42:20.905891 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jul 7 00:42:20.905897 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 7 00:42:20.905902 kernel: smp: Brought up 1 node, 16 CPUs Jul 7 00:42:20.905908 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jul 7 00:42:20.905914 kernel: Memory: 32695180K/33452984K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 732528K reserved, 0K cma-reserved) Jul 7 00:42:20.905919 kernel: devtmpfs: initialized Jul 7 00:42:20.905924 kernel: x86/mm: Memory block size: 128MB Jul 7 00:42:20.905930 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x825bb000-0x825bbfff] (4096 bytes) Jul 7 00:42:20.905935 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Jul 7 00:42:20.905941 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:42:20.905946 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 7 00:42:20.905952 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:42:20.905958 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:42:20.905963 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:42:20.905969 kernel: audit: type=2000 audit(1751848932.158:1): state=initialized audit_enabled=0 res=1 Jul 7 00:42:20.905974 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:42:20.905979 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 00:42:20.905985 kernel: cpuidle: using governor menu Jul 7 00:42:20.905990 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:42:20.905995 kernel: dca service started, version 1.12.1 Jul 7 00:42:20.906001 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 7 00:42:20.906007 kernel: PCI: Using configuration type 1 for base access Jul 7 00:42:20.906013 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 00:42:20.906018 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:42:20.906024 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:42:20.906029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:42:20.906034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:42:20.906040 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:42:20.906045 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:42:20.906050 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:42:20.906057 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jul 7 00:42:20.906064 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:42:20.906069 kernel: ACPI: SSDT 0xFFFF9A1882104C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jul 7 00:42:20.906074 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:42:20.906102 kernel: ACPI: SSDT 0xFFFF9A18820FD000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jul 7 00:42:20.906108 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:42:20.906113 kernel: ACPI: SSDT 0xFFFF9A188162AF00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jul 7 00:42:20.906133 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:42:20.906138 kernel: ACPI: SSDT 0xFFFF9A1880F9C800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jul 7 00:42:20.906143 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:42:20.906150 kernel: ACPI: SSDT 0xFFFF9A1880FA5000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jul 7 00:42:20.906155 kernel: ACPI: Dynamic OEM Table Load: Jul 7 00:42:20.906161 kernel: ACPI: SSDT 0xFFFF9A1881801400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jul 7 00:42:20.906166 kernel: ACPI: Interpreter enabled Jul 7 00:42:20.906171 kernel: ACPI: PM: (supports S0 S5) Jul 7 00:42:20.906177 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 00:42:20.906182 kernel: HEST: Enabling Firmware First mode for corrected errors. Jul 7 00:42:20.906187 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jul 7 00:42:20.906193 kernel: HEST: Table parsing has been initialized. Jul 7 00:42:20.906199 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jul 7 00:42:20.906204 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 00:42:20.906210 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 7 00:42:20.906215 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jul 7 00:42:20.906221 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jul 7 00:42:20.906226 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jul 7 00:42:20.906231 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jul 7 00:42:20.906237 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jul 7 00:42:20.906242 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jul 7 00:42:20.906248 kernel: ACPI: \_TZ_.FN00: New power resource Jul 7 00:42:20.906254 kernel: ACPI: \_TZ_.FN01: New power resource Jul 7 00:42:20.906259 kernel: ACPI: \_TZ_.FN02: New power resource Jul 7 00:42:20.906264 kernel: ACPI: \_TZ_.FN03: New power resource Jul 7 00:42:20.906270 kernel: ACPI: \_TZ_.FN04: New power resource Jul 7 00:42:20.906275 kernel: ACPI: \PIN_: New power resource Jul 7 00:42:20.906280 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jul 7 00:42:20.906355 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:42:20.906409 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jul 7 00:42:20.906456 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jul 7 00:42:20.906464 kernel: PCI host bridge to bus 0000:00 Jul 7 00:42:20.906514 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 00:42:20.906558 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 00:42:20.906600 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 00:42:20.906642 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jul 7 00:42:20.906686 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jul 7 00:42:20.906728 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jul 7 00:42:20.906785 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:42:20.906845 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 PCIe Root Port Jul 7 00:42:20.906896 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 00:42:20.906945 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jul 7 00:42:20.907001 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 PCIe Root Port Jul 7 00:42:20.907051 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jul 7 00:42:20.907142 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Jul 7 00:42:20.907191 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 00:42:20.907239 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Jul 7 00:42:20.907291 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 conventional PCI endpoint Jul 7 00:42:20.907339 kernel: pci 0000:00:08.0: BAR 0 [mem 0x9551f000-0x9551ffff 64bit] Jul 7 00:42:20.907393 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 conventional PCI endpoint Jul 7 00:42:20.907441 kernel: pci 0000:00:12.0: BAR 0 [mem 0x9551e000-0x9551efff 64bit] Jul 7 00:42:20.907493 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 conventional PCI endpoint Jul 7 00:42:20.907542 kernel: pci 0000:00:14.0: BAR 0 [mem 0x95500000-0x9550ffff 64bit] Jul 7 00:42:20.907591 kernel: pci 0000:00:14.0: PME# supported from 
D3hot D3cold Jul 7 00:42:20.907648 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 conventional PCI endpoint Jul 7 00:42:20.907700 kernel: pci 0000:00:14.2: BAR 0 [mem 0x95512000-0x95513fff 64bit] Jul 7 00:42:20.907748 kernel: pci 0000:00:14.2: BAR 2 [mem 0x9551d000-0x9551dfff 64bit] Jul 7 00:42:20.907803 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 conventional PCI endpoint Jul 7 00:42:20.907852 kernel: pci 0000:00:15.0: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 7 00:42:20.907905 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 conventional PCI endpoint Jul 7 00:42:20.907954 kernel: pci 0000:00:15.1: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 7 00:42:20.908008 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 conventional PCI endpoint Jul 7 00:42:20.908056 kernel: pci 0000:00:16.0: BAR 0 [mem 0x9551a000-0x9551afff 64bit] Jul 7 00:42:20.908143 kernel: pci 0000:00:16.0: PME# supported from D3hot Jul 7 00:42:20.908195 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 conventional PCI endpoint Jul 7 00:42:20.908244 kernel: pci 0000:00:16.1: BAR 0 [mem 0x95519000-0x95519fff 64bit] Jul 7 00:42:20.908293 kernel: pci 0000:00:16.1: PME# supported from D3hot Jul 7 00:42:20.908346 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 conventional PCI endpoint Jul 7 00:42:20.908393 kernel: pci 0000:00:16.4: BAR 0 [mem 0x95518000-0x95518fff 64bit] Jul 7 00:42:20.908442 kernel: pci 0000:00:16.4: PME# supported from D3hot Jul 7 00:42:20.908495 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 conventional PCI endpoint Jul 7 00:42:20.908543 kernel: pci 0000:00:17.0: BAR 0 [mem 0x95510000-0x95511fff] Jul 7 00:42:20.908592 kernel: pci 0000:00:17.0: BAR 1 [mem 0x95517000-0x955170ff] Jul 7 00:42:20.908639 kernel: pci 0000:00:17.0: BAR 2 [io 0x6050-0x6057] Jul 7 00:42:20.908686 kernel: pci 0000:00:17.0: BAR 3 [io 0x6040-0x6043] Jul 7 00:42:20.908733 kernel: pci 0000:00:17.0: BAR 4 [io 0x6020-0x603f] Jul 7 00:42:20.908781 kernel: pci 0000:00:17.0: BAR 5 [mem 0x95516000-0x955167ff] Jul 7 00:42:20.908828 kernel: pci 0000:00:17.0: PME# supported from D3hot Jul 7 00:42:20.908881 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 PCIe Root Port Jul 7 00:42:20.908932 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jul 7 00:42:20.908980 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jul 7 00:42:20.909035 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 PCIe Root Port Jul 7 00:42:20.909110 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jul 7 00:42:20.909173 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 7 00:42:20.909222 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 7 00:42:20.909272 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jul 7 00:42:20.909325 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 PCIe Root Port Jul 7 00:42:20.909374 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jul 7 00:42:20.909422 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 7 00:42:20.909469 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 7 00:42:20.909517 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jul 7 00:42:20.909570 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 PCIe Root Port Jul 7 00:42:20.909620 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jul 7 00:42:20.909669 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jul 7 00:42:20.909722 kernel: pci 0000:00:1c.1: 
[8086:a339] type 01 class 0x060400 PCIe Root Port Jul 7 00:42:20.909771 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jul 7 00:42:20.909819 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jul 7 00:42:20.909868 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:42:20.909917 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Jul 7 00:42:20.909971 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 conventional PCI endpoint Jul 7 00:42:20.910021 kernel: pci 0000:00:1e.0: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 7 00:42:20.910077 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 conventional PCI endpoint Jul 7 00:42:20.910169 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 conventional PCI endpoint Jul 7 00:42:20.910218 kernel: pci 0000:00:1f.4: BAR 0 [mem 0x95514000-0x955140ff 64bit] Jul 7 00:42:20.910266 kernel: pci 0000:00:1f.4: BAR 4 [io 0xefa0-0xefbf] Jul 7 00:42:20.910318 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 conventional PCI endpoint Jul 7 00:42:20.910368 kernel: pci 0000:00:1f.5: BAR 0 [mem 0xfe010000-0xfe010fff] Jul 7 00:42:20.910418 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 00:42:20.910473 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 00:42:20.910523 kernel: pci 0000:02:00.0: BAR 0 [mem 0x92000000-0x93ffffff 64bit pref] Jul 7 00:42:20.910573 kernel: pci 0000:02:00.0: ROM [mem 0x95200000-0x952fffff pref] Jul 7 00:42:20.910622 kernel: pci 0000:02:00.0: PME# supported from D3cold Jul 7 00:42:20.910671 kernel: pci 0000:02:00.0: VF BAR 0 [mem 0x00000000-0x000fffff 64bit pref] Jul 7 00:42:20.910722 kernel: pci 0000:02:00.0: VF BAR 0 [mem 0x00000000-0x007fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 00:42:20.910776 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 00:42:20.910826 kernel: pci 0000:02:00.1: BAR 0 [mem 0x90000000-0x91ffffff 64bit pref] Jul 7 00:42:20.910876 kernel: pci 0000:02:00.1: ROM [mem 0x95100000-0x951fffff pref] Jul 7 00:42:20.910925 kernel: pci 0000:02:00.1: PME# supported from D3cold Jul 7 00:42:20.910973 kernel: pci 0000:02:00.1: VF BAR 0 [mem 0x00000000-0x000fffff 64bit pref] Jul 7 00:42:20.911023 kernel: pci 0000:02:00.1: VF BAR 0 [mem 0x00000000-0x007fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 00:42:20.911097 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jul 7 00:42:20.911165 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jul 7 00:42:20.911220 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 7 00:42:20.911271 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint Jul 7 00:42:20.911320 kernel: pci 0000:04:00.0: BAR 0 [mem 0x95400000-0x9547ffff] Jul 7 00:42:20.911369 kernel: pci 0000:04:00.0: BAR 2 [io 0x5000-0x501f] Jul 7 00:42:20.911417 kernel: pci 0000:04:00.0: BAR 3 [mem 0x95480000-0x95483fff] Jul 7 00:42:20.911469 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 7 00:42:20.911518 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jul 7 00:42:20.911572 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jul 7 00:42:20.911622 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint Jul 7 00:42:20.911671 kernel: pci 0000:05:00.0: BAR 0 [mem 0x95300000-0x9537ffff] Jul 7 00:42:20.911720 kernel: pci 0000:05:00.0: BAR 2 [io 0x4000-0x401f] Jul 7 00:42:20.911770 kernel: pci 0000:05:00.0: BAR 3 [mem 0x95380000-0x95383fff] Jul 7 00:42:20.911820 kernel: pci 0000:05:00.0: PME# supported 
from D0 D3hot D3cold Jul 7 00:42:20.911870 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jul 7 00:42:20.911919 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jul 7 00:42:20.911974 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 7 00:42:20.912040 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jul 7 00:42:20.912141 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jul 7 00:42:20.912192 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:42:20.912243 kernel: pci 0000:07:00.0: enabling Extended Tags Jul 7 00:42:20.912292 kernel: pci 0000:07:00.0: supports D1 D2 Jul 7 00:42:20.912341 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 00:42:20.912392 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jul 7 00:42:20.912448 kernel: pci_bus 0000:08: extended config space not accessible Jul 7 00:42:20.912508 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jul 7 00:42:20.912562 kernel: pci 0000:08:00.0: BAR 0 [mem 0x94000000-0x94ffffff] Jul 7 00:42:20.912615 kernel: pci 0000:08:00.0: BAR 1 [mem 0x95000000-0x9501ffff] Jul 7 00:42:20.912665 kernel: pci 0000:08:00.0: BAR 2 [io 0x3000-0x307f] Jul 7 00:42:20.912716 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 00:42:20.912767 kernel: pci 0000:08:00.0: supports D1 D2 Jul 7 00:42:20.912818 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 00:42:20.912867 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jul 7 00:42:20.912875 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 7 00:42:20.912881 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 7 00:42:20.912889 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 7 00:42:20.912895 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 7 00:42:20.912900 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 7 00:42:20.912906 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 7 00:42:20.912912 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 7 00:42:20.912918 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 7 00:42:20.912924 kernel: iommu: Default domain type: Translated Jul 7 00:42:20.912930 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:42:20.912936 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:42:20.912942 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 00:42:20.912948 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jul 7 00:42:20.912954 kernel: e820: reserve RAM buffer [mem 0x825bb000-0x83ffffff] Jul 7 00:42:20.912959 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Jul 7 00:42:20.912965 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Jul 7 00:42:20.912970 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 7 00:42:20.912976 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 7 00:42:20.913041 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jul 7 00:42:20.913138 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jul 7 00:42:20.913203 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 00:42:20.913212 kernel: vgaarb: loaded Jul 7 00:42:20.913218 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 7 00:42:20.913224 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jul 7 00:42:20.913231 kernel: clocksource: Switched 
to clocksource tsc-early Jul 7 00:42:20.913237 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:42:20.913243 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:42:20.913248 kernel: pnp: PnP ACPI init Jul 7 00:42:20.913298 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 7 00:42:20.913346 kernel: pnp 00:02: [dma 0 disabled] Jul 7 00:42:20.913394 kernel: pnp 00:03: [dma 0 disabled] Jul 7 00:42:20.913446 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 7 00:42:20.913491 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 7 00:42:20.913539 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Jul 7 00:42:20.913583 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Jul 7 00:42:20.913627 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Jul 7 00:42:20.913670 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Jul 7 00:42:20.913713 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 7 00:42:20.913759 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 7 00:42:20.913802 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 7 00:42:20.913846 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 7 00:42:20.913893 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Jul 7 00:42:20.913937 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 7 00:42:20.913980 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 7 00:42:20.914023 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 7 00:42:20.914094 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 7 00:42:20.914167 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 7 00:42:20.914211 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Jul 7 00:42:20.914257 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Jul 7 00:42:20.914266 kernel: pnp: PnP ACPI: found 9 devices Jul 7 00:42:20.914272 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:42:20.914278 kernel: NET: Registered PF_INET protocol family Jul 7 00:42:20.914285 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:42:20.914291 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 7 00:42:20.914297 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:42:20.914303 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:42:20.914309 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:42:20.914315 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 7 00:42:20.914321 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 00:42:20.914327 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 00:42:20.914333 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 00:42:20.914339 kernel: NET: Registered PF_XDP protocol family Jul 7 00:42:20.914388 kernel: pci 0000:00:15.0: BAR 0 [mem 0x95515000-0x95515fff 64bit]: assigned Jul 7 00:42:20.914455 kernel: pci 0000:00:15.1: BAR 0 [mem 0x9551b000-0x9551bfff 64bit]: assigned Jul 7 00:42:20.914505 kernel: pci 0000:00:1e.0: BAR 0 [mem 0x9551c000-0x9551cfff 64bit]: assigned Jul 7 00:42:20.914568 
kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 00:42:20.914619 kernel: pci 0000:02:00.0: VF BAR 0 [mem size 0x00800000 64bit pref]: can't assign; no space Jul 7 00:42:20.914669 kernel: pci 0000:02:00.0: VF BAR 0 [mem size 0x00800000 64bit pref]: failed to assign Jul 7 00:42:20.914719 kernel: pci 0000:02:00.1: VF BAR 0 [mem size 0x00800000 64bit pref]: can't assign; no space Jul 7 00:42:20.914771 kernel: pci 0000:02:00.1: VF BAR 0 [mem size 0x00800000 64bit pref]: failed to assign Jul 7 00:42:20.914819 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jul 7 00:42:20.914867 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Jul 7 00:42:20.914915 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 00:42:20.914963 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jul 7 00:42:20.915011 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jul 7 00:42:20.915059 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 7 00:42:20.915167 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 7 00:42:20.915215 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jul 7 00:42:20.915263 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 7 00:42:20.915313 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 7 00:42:20.915363 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jul 7 00:42:20.915413 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jul 7 00:42:20.915462 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jul 7 00:42:20.915511 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:42:20.915560 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jul 7 00:42:20.915608 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jul 7 00:42:20.915656 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Jul 7 00:42:20.915699 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 00:42:20.915742 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 00:42:20.915787 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 00:42:20.915829 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 00:42:20.915870 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 7 00:42:20.915913 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 7 00:42:20.915962 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff] Jul 7 00:42:20.916007 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 00:42:20.916055 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jul 7 00:42:20.916140 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff] Jul 7 00:42:20.916191 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 7 00:42:20.916235 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff] Jul 7 00:42:20.916285 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 7 00:42:20.916330 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 7 00:42:20.916377 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jul 7 00:42:20.916426 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff] Jul 7 00:42:20.916434 kernel: PCI: CLS 64 bytes, default 64 Jul 7 00:42:20.916440 kernel: DMAR: No ATSR found Jul 7 00:42:20.916446 kernel: DMAR: No SATC found Jul 7 00:42:20.916452 kernel: DMAR: dmar0: Using Queued invalidation Jul 7 00:42:20.916500 kernel: pci 0000:00:00.0: Adding to 
iommu group 0 Jul 7 00:42:20.916549 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 7 00:42:20.916598 kernel: pci 0000:00:01.1: Adding to iommu group 1 Jul 7 00:42:20.916648 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 7 00:42:20.916697 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 7 00:42:20.916745 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 7 00:42:20.916793 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 7 00:42:20.916841 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 7 00:42:20.916888 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 7 00:42:20.916934 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 7 00:42:20.916982 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 7 00:42:20.917033 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 7 00:42:20.917107 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 7 00:42:20.917186 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 7 00:42:20.917234 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 7 00:42:20.917282 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 7 00:42:20.917330 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 7 00:42:20.917378 kernel: pci 0000:00:1c.1: Adding to iommu group 12 Jul 7 00:42:20.917425 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 7 00:42:20.917476 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 7 00:42:20.917524 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 7 00:42:20.917571 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 7 00:42:20.917621 kernel: pci 0000:02:00.0: Adding to iommu group 1 Jul 7 00:42:20.917670 kernel: pci 0000:02:00.1: Adding to iommu group 1 Jul 7 00:42:20.917720 kernel: pci 0000:04:00.0: Adding to iommu group 15 Jul 7 00:42:20.917770 kernel: pci 0000:05:00.0: Adding to iommu group 16 Jul 7 00:42:20.917819 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 7 00:42:20.917871 kernel: pci 0000:08:00.0: Adding to iommu group 17 Jul 7 00:42:20.917880 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 7 00:42:20.917886 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 00:42:20.917891 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Jul 7 00:42:20.917897 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 7 00:42:20.917903 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 7 00:42:20.917909 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 7 00:42:20.917914 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 7 00:42:20.917964 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 7 00:42:20.917975 kernel: Initialise system trusted keyrings Jul 7 00:42:20.917980 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 7 00:42:20.917986 kernel: Key type asymmetric registered Jul 7 00:42:20.917992 kernel: Asymmetric key parser 'x509' registered Jul 7 00:42:20.917997 kernel: tsc: Refined TSC clocksource calibration: 3407.985 MHz Jul 7 00:42:20.918003 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc5a980c, max_idle_ns: 440795300013 ns Jul 7 00:42:20.918009 kernel: clocksource: Switched to clocksource tsc Jul 7 00:42:20.918014 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 00:42:20.918021 kernel: io scheduler mq-deadline registered Jul 7 00:42:20.918027 kernel: io scheduler kyber registered Jul 7 00:42:20.918032 kernel: io scheduler bfq registered Jul 7 
00:42:20.918106 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 7 00:42:20.918168 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122 Jul 7 00:42:20.918216 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123 Jul 7 00:42:20.918264 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124 Jul 7 00:42:20.918312 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125 Jul 7 00:42:20.918362 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126 Jul 7 00:42:20.918410 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127 Jul 7 00:42:20.918464 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 7 00:42:20.918473 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 7 00:42:20.918479 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 7 00:42:20.918485 kernel: pstore: Using crash dump compression: deflate Jul 7 00:42:20.918490 kernel: pstore: Registered erst as persistent store backend Jul 7 00:42:20.918496 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 00:42:20.918502 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 00:42:20.918509 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:42:20.918515 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 7 00:42:20.918565 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 7 00:42:20.918574 kernel: i8042: PNP: No PS/2 controller found. Jul 7 00:42:20.918617 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 7 00:42:20.918662 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 7 00:42:20.918706 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-07-07T00:42:19 UTC (1751848939) Jul 7 00:42:20.918753 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 7 00:42:20.918761 kernel: intel_pstate: Intel P-state driver initializing Jul 7 00:42:20.918767 kernel: intel_pstate: Disabling energy efficiency optimization Jul 7 00:42:20.918773 kernel: intel_pstate: HWP enabled Jul 7 00:42:20.918778 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:42:20.918784 kernel: Segment Routing with IPv6 Jul 7 00:42:20.918790 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 00:42:20.918796 kernel: NET: Registered PF_PACKET protocol family Jul 7 00:42:20.918801 kernel: Key type dns_resolver registered Jul 7 00:42:20.918808 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jul 7 00:42:20.918814 kernel: microcode: Current revision: 0x00000102 Jul 7 00:42:20.918820 kernel: IPI shorthand broadcast: enabled Jul 7 00:42:20.918825 kernel: sched_clock: Marking stable (4832000734, 1487912124)->(6842762487, -522849629) Jul 7 00:42:20.918831 kernel: registered taskstats version 1 Jul 7 00:42:20.918837 kernel: Loading compiled-in X.509 certificates Jul 7 00:42:20.918842 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06' Jul 7 00:42:20.918848 kernel: Demotion targets for Node 0: null Jul 7 00:42:20.918854 kernel: Key type .fscrypt registered Jul 7 00:42:20.918861 kernel: Key type fscrypt-provisioning registered Jul 7 00:42:20.918866 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:42:20.918872 kernel: ima: No architecture policies found Jul 7 00:42:20.918878 kernel: clk: Disabling unused clocks Jul 7 00:42:20.918883 kernel: Warning: unable to open an initial console. 
Jul 7 00:42:20.918889 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 00:42:20.918895 kernel: Write protecting the kernel read-only data: 24576k Jul 7 00:42:20.918901 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 00:42:20.918906 kernel: Run /init as init process Jul 7 00:42:20.918913 kernel: with arguments: Jul 7 00:42:20.918919 kernel: /init Jul 7 00:42:20.918924 kernel: with environment: Jul 7 00:42:20.918930 kernel: HOME=/ Jul 7 00:42:20.918935 kernel: TERM=linux Jul 7 00:42:20.918941 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:42:20.918947 systemd[1]: Successfully made /usr/ read-only. Jul 7 00:42:20.918955 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:42:20.918962 systemd[1]: Detected architecture x86-64. Jul 7 00:42:20.918968 systemd[1]: Running in initrd. Jul 7 00:42:20.918974 systemd[1]: No hostname configured, using default hostname. Jul 7 00:42:20.918979 systemd[1]: Hostname set to . Jul 7 00:42:20.918986 systemd[1]: Initializing machine ID from random generator. Jul 7 00:42:20.918991 systemd[1]: Queued start job for default target initrd.target. Jul 7 00:42:20.918997 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:42:20.919004 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:42:20.919011 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 00:42:20.919017 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:42:20.919023 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 00:42:20.919029 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 00:42:20.919036 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 00:42:20.919042 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 00:42:20.919049 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:42:20.919055 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:42:20.919108 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:42:20.919114 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:42:20.919134 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:42:20.919140 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:42:20.919146 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:42:20.919151 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:42:20.919157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 00:42:20.919165 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 00:42:20.919171 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:42:20.919177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 7 00:42:20.919183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:42:20.919188 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:42:20.919194 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:42:20.919200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:42:20.919206 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 00:42:20.919217 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 00:42:20.919224 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:42:20.919230 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:42:20.919295 systemd-journald[299]: Collecting audit messages is disabled. Jul 7 00:42:20.919313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:42:20.919354 systemd-journald[299]: Journal started Jul 7 00:42:20.919371 systemd-journald[299]: Runtime Journal (/run/log/journal/08a45329e24a4187839874bdf2db38cc) is 8M, max 640.1M, 632.1M free. Jul 7 00:42:20.929624 systemd-modules-load[301]: Inserted module 'overlay' Jul 7 00:42:20.952111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:42:20.952128 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:42:20.952655 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:42:20.952774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:42:20.952871 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 00:42:20.967065 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:42:20.967865 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:42:20.968337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:42:20.973754 systemd-modules-load[301]: Inserted module 'br_netfilter' Jul 7 00:42:20.974063 kernel: Bridge firewalling registered Jul 7 00:42:20.983269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:42:20.985630 systemd-tmpfiles[313]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 00:42:20.994662 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:42:21.107904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:42:21.129720 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:42:21.153161 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:42:21.171012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:42:21.184083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:42:21.219885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:42:21.221291 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:42:21.223962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 00:42:21.231385 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:42:21.242653 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 00:42:21.245705 systemd-resolved[331]: Positive Trust Anchors: Jul 7 00:42:21.245709 systemd-resolved[331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:42:21.245733 systemd-resolved[331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:42:21.247376 systemd-resolved[331]: Defaulting to hostname 'linux'. Jul 7 00:42:21.268288 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:42:21.282244 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:42:21.380207 dracut-cmdline[343]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:42:21.471095 kernel: SCSI subsystem initialized Jul 7 00:42:21.484095 kernel: Loading iSCSI transport class v2.0-870. Jul 7 00:42:21.497127 kernel: iscsi: registered transport (tcp) Jul 7 00:42:21.520157 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:42:21.520175 kernel: QLogic iSCSI HBA Driver Jul 7 00:42:21.530079 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:42:21.569237 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:42:21.580452 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:42:21.687159 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:42:21.700769 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 00:42:21.815100 kernel: raid6: avx2x4 gen() 18829 MB/s Jul 7 00:42:21.836093 kernel: raid6: avx2x2 gen() 41311 MB/s Jul 7 00:42:21.862147 kernel: raid6: avx2x1 gen() 45918 MB/s Jul 7 00:42:21.862164 kernel: raid6: using algorithm avx2x1 gen() 45918 MB/s Jul 7 00:42:21.889222 kernel: raid6: .... xor() 23115 MB/s, rmw enabled Jul 7 00:42:21.889238 kernel: raid6: using avx2x2 recovery algorithm Jul 7 00:42:21.910120 kernel: xor: automatically using best checksumming function avx Jul 7 00:42:22.021102 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:42:22.024890 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:42:22.034009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:42:22.081607 systemd-udevd[555]: Using default interface naming scheme 'v255'. 
Jul 7 00:42:22.084978 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:42:22.110717 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 00:42:22.142018 dracut-pre-trigger[566]: rd.md=0: removing MD RAID activation Jul 7 00:42:22.156396 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:42:22.167298 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:42:22.254162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:42:22.289807 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 00:42:22.289823 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 00:42:22.289831 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 00:42:22.256775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 00:42:22.318610 kernel: ACPI: bus type USB registered Jul 7 00:42:22.318629 kernel: usbcore: registered new interface driver usbfs Jul 7 00:42:22.318637 kernel: usbcore: registered new interface driver hub Jul 7 00:42:22.318644 kernel: usbcore: registered new device driver usb Jul 7 00:42:22.308705 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:42:22.420293 kernel: PTP clock support registered Jul 7 00:42:22.420310 kernel: AES CTR mode by8 optimization enabled Jul 7 00:42:22.420322 kernel: libata version 3.00 loaded. Jul 7 00:42:22.420330 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 7 00:42:22.420421 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 7 00:42:22.420488 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 7 00:42:22.420551 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 7 00:42:22.420613 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 7 00:42:22.420675 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jul 7 00:42:22.420736 kernel: hub 1-0:1.0: USB hub found Jul 7 00:42:22.420818 kernel: hub 1-0:1.0: 16 ports detected Jul 7 00:42:22.420886 kernel: hub 2-0:1.0: USB hub found Jul 7 00:42:22.420957 kernel: hub 2-0:1.0: 10 ports detected Jul 7 00:42:22.421023 kernel: ahci 0000:00:17.0: version 3.0 Jul 7 00:42:22.421094 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 7 00:42:22.421103 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 7 00:42:22.421110 kernel: ahci 0000:00:17.0: AHCI vers 0001.0301, 32 command slots, 6 Gbps, SATA mode Jul 7 00:42:22.308776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 00:42:22.611122 kernel: ahci 0000:00:17.0: 8/8 ports implemented (port mask 0xff) Jul 7 00:42:22.611256 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 7 00:42:22.611354 kernel: scsi host0: ahci Jul 7 00:42:22.611434 kernel: scsi host1: ahci Jul 7 00:42:22.611517 kernel: igb 0000:04:00.0: added PHC on eth0 Jul 7 00:42:22.611622 kernel: scsi host2: ahci Jul 7 00:42:22.611701 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 7 00:42:22.611796 kernel: scsi host3: ahci Jul 7 00:42:22.611882 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1b:26 Jul 7 00:42:22.611974 kernel: scsi host4: ahci Jul 7 00:42:22.612070 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jul 7 00:42:22.612165 kernel: scsi host5: ahci Jul 7 00:42:22.612250 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 7 00:42:22.612323 kernel: scsi host6: ahci Jul 7 00:42:22.612398 kernel: scsi host7: ahci Jul 7 00:42:22.612471 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 136 lpm-pol 0 Jul 7 00:42:22.612482 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 136 lpm-pol 0 Jul 7 00:42:22.612491 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 136 lpm-pol 0 Jul 7 00:42:22.612501 kernel: igb 0000:05:00.0: added PHC on eth1 Jul 7 00:42:22.612584 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 136 lpm-pol 0 Jul 7 00:42:22.612595 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 7 00:42:22.612670 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 136 lpm-pol 0 Jul 7 00:42:22.612681 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1b:27 Jul 7 00:42:22.612755 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 136 lpm-pol 0 Jul 7 00:42:22.612765 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jul 7 00:42:22.612840 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 136 lpm-pol 0 Jul 7 00:42:22.612849 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 7 00:42:22.612924 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 136 lpm-pol 0 Jul 7 00:42:22.318764 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:42:22.644164 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 7 00:42:22.526147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:42:22.611283 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:42:22.761622 kernel: mlx5_core 0000:02:00.0: PTM is not supported by PCIe Jul 7 00:42:22.761738 kernel: hub 1-14:1.0: USB hub found Jul 7 00:42:22.761833 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Jul 7 00:42:22.761911 kernel: hub 1-14:1.0: 4 ports detected Jul 7 00:42:22.771785 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 7 00:42:22.800681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 00:42:22.903067 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 7 00:42:22.903087 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 7 00:42:22.909071 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 00:42:22.915094 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 7 00:42:22.920065 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 00:42:22.926089 kernel: ata8: SATA link down (SStatus 0 SControl 300) Jul 7 00:42:22.932092 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 00:42:22.938092 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 00:42:22.943064 kernel: ata2.00: Model 'Micron_5300_MTFDDAK480TDT', rev ' D3MU001', applying quirks: zeroaftertrim Jul 7 00:42:22.959661 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jul 7 00:42:22.960065 kernel: ata1.00: Model 'Micron_5300_MTFDDAK480TDT', rev ' D3MU001', applying quirks: zeroaftertrim Jul 7 00:42:22.976814 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jul 7 00:42:22.988106 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 7 00:42:22.988122 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 7 00:42:23.006072 kernel: ata2.00: Features: NCQ-prio Jul 7 00:42:23.006088 kernel: ata1.00: Features: NCQ-prio Jul 7 00:42:23.026120 kernel: ata2.00: configured for UDMA/133 Jul 7 00:42:23.026140 kernel: ata1.00: configured for UDMA/133 Jul 7 00:42:23.030096 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jul 7 00:42:23.039133 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jul 7 00:42:23.052067 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jul 7 00:42:23.052167 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jul 7 00:42:23.052184 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jul 7 00:42:23.070227 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Jul 7 00:42:23.078065 kernel: ata2.00: Enabling discard_zeroes_data Jul 7 00:42:23.078082 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jul 7 00:42:23.082603 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 00:42:23.082620 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 7 00:42:23.082731 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 7 00:42:23.082803 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jul 7 00:42:23.082864 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jul 7 00:42:23.082928 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 7 00:42:23.082991 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 00:42:23.083050 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jul 7 00:42:23.083118 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 00:42:23.146315 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jul 7 00:42:23.151418 kernel: sd 1:0:0:0: [sda] Write Protect is off Jul 7 00:42:23.160774 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 7 00:42:23.160856 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 00:42:23.160924 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jul 7 00:42:23.160989 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jul 7 00:42:23.177726 kernel: ata2.00: Enabling discard_zeroes_data Jul 7 00:42:23.197295 kernel: GPT:Primary 
header thinks Alt. header is not at the end of the disk. Jul 7 00:42:23.197312 kernel: GPT:9289727 != 937703087 Jul 7 00:42:23.203583 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 00:42:23.207404 kernel: GPT:9289727 != 937703087 Jul 7 00:42:23.212841 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 00:42:23.218126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:42:23.223335 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jul 7 00:42:23.231067 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 00:42:23.243001 kernel: usbcore: registered new interface driver usbhid Jul 7 00:42:23.243021 kernel: usbhid: USB HID core driver Jul 7 00:42:23.254312 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Jul 7 00:42:23.279170 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 7 00:42:23.275059 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Jul 7 00:42:23.311899 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jul 7 00:42:23.364272 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 7 00:42:23.364369 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 7 00:42:23.364378 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 7 00:42:23.358845 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Jul 7 00:42:23.383225 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 00:42:23.383164 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Jul 7 00:42:23.440171 kernel: mlx5_core 0000:02:00.1: PTM is not supported by PCIe Jul 7 00:42:23.440338 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Jul 7 00:42:23.440408 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 7 00:42:23.434510 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 00:42:23.465186 kernel: ata2.00: Enabling discard_zeroes_data Jul 7 00:42:23.465250 disk-uuid[778]: Primary Header is updated. Jul 7 00:42:23.465250 disk-uuid[778]: Secondary Entries is updated. Jul 7 00:42:23.465250 disk-uuid[778]: Secondary Header is updated. Jul 7 00:42:23.488130 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:42:23.716120 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jul 7 00:42:23.728320 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jul 7 00:42:24.003132 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 00:42:24.017067 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Jul 7 00:42:24.017180 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Jul 7 00:42:24.030536 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 00:42:24.048844 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:42:24.049128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 7 00:42:24.078305 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:42:24.099569 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 00:42:24.149428 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:42:24.486779 kernel: ata2.00: Enabling discard_zeroes_data Jul 7 00:42:24.501456 disk-uuid[779]: The operation has completed successfully. Jul 7 00:42:24.509180 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:42:24.541635 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 00:42:24.541702 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 00:42:24.566724 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 00:42:24.590816 sh[826]: Success Jul 7 00:42:24.618333 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 00:42:24.618353 kernel: device-mapper: uevent: version 1.0.3 Jul 7 00:42:24.627577 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 00:42:24.640067 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 7 00:42:24.681603 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 00:42:24.682640 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 00:42:24.718928 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 00:42:24.781150 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 00:42:24.781166 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (839) Jul 7 00:42:24.781174 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239 Jul 7 00:42:24.781181 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:42:24.781191 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 00:42:24.788501 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 00:42:24.788708 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:42:24.812195 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 00:42:24.812617 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 00:42:24.830666 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 00:42:24.907176 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (862) Jul 7 00:42:24.907192 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:42:24.907200 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:42:24.907207 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:42:24.920068 kernel: BTRFS info (device sda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:42:24.921413 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 00:42:24.931852 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:42:24.953887 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 7 00:42:24.973034 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:42:25.020809 systemd-networkd[1009]: lo: Link UP Jul 7 00:42:25.020812 systemd-networkd[1009]: lo: Gained carrier Jul 7 00:42:25.023234 systemd-networkd[1009]: Enumeration completed Jul 7 00:42:25.023303 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:42:25.023897 systemd-networkd[1009]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:42:25.026320 systemd[1]: Reached target network.target - Network. Jul 7 00:42:25.052014 systemd-networkd[1009]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:42:25.079512 systemd-networkd[1009]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:42:25.105110 ignition[1008]: Ignition 2.21.0 Jul 7 00:42:25.105116 ignition[1008]: Stage: fetch-offline Jul 7 00:42:25.107863 unknown[1008]: fetched base config from "system" Jul 7 00:42:25.105135 ignition[1008]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:42:25.107866 unknown[1008]: fetched user config from "system" Jul 7 00:42:25.105141 ignition[1008]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:42:25.108945 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:42:25.105193 ignition[1008]: parsed url from cmdline: "" Jul 7 00:42:25.124379 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 00:42:25.105195 ignition[1008]: no config URL provided Jul 7 00:42:25.124911 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 00:42:25.105198 ignition[1008]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:42:25.105223 ignition[1008]: parsing config with SHA512: 0164069ea4581f3e4015e8e46c33f3e5662adb42d598706cf72f332abf7680fb5aa7d2ba22929b9339778a49414dffffa87cac9022a5c8b63e291c16f7fbc6bf Jul 7 00:42:25.108139 ignition[1008]: fetch-offline: fetch-offline passed Jul 7 00:42:25.108142 ignition[1008]: POST message to Packet Timeline Jul 7 00:42:25.108145 ignition[1008]: POST Status error: resource requires networking Jul 7 00:42:25.108176 ignition[1008]: Ignition finished successfully Jul 7 00:42:25.162836 ignition[1025]: Ignition 2.21.0 Jul 7 00:42:25.162840 ignition[1025]: Stage: kargs Jul 7 00:42:25.261193 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jul 7 00:42:25.261123 systemd-networkd[1009]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 7 00:42:25.162933 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:42:25.162938 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:42:25.163976 ignition[1025]: kargs: kargs passed Jul 7 00:42:25.163979 ignition[1025]: POST message to Packet Timeline Jul 7 00:42:25.163993 ignition[1025]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:42:25.164301 ignition[1025]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57258->[::1]:53: read: connection refused Jul 7 00:42:25.364618 ignition[1025]: GET https://metadata.packet.net/metadata: attempt #2 Jul 7 00:42:25.365705 ignition[1025]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35137->[::1]:53: read: connection refused Jul 7 00:42:25.502112 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jul 7 00:42:25.503810 systemd-networkd[1009]: eno1: Link UP Jul 7 00:42:25.503992 systemd-networkd[1009]: eno2: Link UP Jul 7 00:42:25.504185 systemd-networkd[1009]: enp2s0f0np0: Link UP Jul 7 00:42:25.504400 systemd-networkd[1009]: enp2s0f0np0: Gained carrier Jul 7 00:42:25.513354 systemd-networkd[1009]: enp2s0f1np1: Link UP Jul 7 00:42:25.514226 systemd-networkd[1009]: enp2s0f1np1: Gained carrier Jul 7 00:42:25.559267 systemd-networkd[1009]: enp2s0f0np0: DHCPv4 address 139.178.70.5/31, gateway 139.178.70.4 acquired from 145.40.83.140 Jul 7 00:42:25.766198 ignition[1025]: GET https://metadata.packet.net/metadata: attempt #3 Jul 7 00:42:25.767273 ignition[1025]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51787->[::1]:53: read: connection refused Jul 7 00:42:26.567406 ignition[1025]: GET https://metadata.packet.net/metadata: attempt #4 Jul 7 00:42:26.568560 ignition[1025]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58810->[::1]:53: read: connection refused Jul 7 00:42:26.729599 systemd-networkd[1009]: enp2s0f0np0: Gained IPv6LL Jul 7 00:42:27.433621 systemd-networkd[1009]: enp2s0f1np1: Gained IPv6LL Jul 7 00:42:28.170203 ignition[1025]: GET https://metadata.packet.net/metadata: attempt #5 Jul 7 00:42:28.171519 ignition[1025]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56808->[::1]:53: read: connection refused Jul 7 00:42:31.374822 ignition[1025]: GET https://metadata.packet.net/metadata: attempt #6 Jul 7 00:42:32.272993 ignition[1025]: GET result: OK Jul 7 00:42:33.406359 ignition[1025]: Ignition finished successfully Jul 7 00:42:33.412503 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 00:42:33.423871 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 7 00:42:33.464247 ignition[1048]: Ignition 2.21.0 Jul 7 00:42:33.464252 ignition[1048]: Stage: disks Jul 7 00:42:33.464329 ignition[1048]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:42:33.464334 ignition[1048]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:42:33.464743 ignition[1048]: disks: disks passed Jul 7 00:42:33.464746 ignition[1048]: POST message to Packet Timeline Jul 7 00:42:33.464757 ignition[1048]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:42:34.621489 ignition[1048]: GET result: OK Jul 7 00:42:34.951705 ignition[1048]: Ignition finished successfully Jul 7 00:42:34.956013 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 00:42:34.969206 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 00:42:34.987308 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 00:42:35.006282 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:42:35.026328 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:42:35.045329 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:42:35.064775 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 00:42:35.110987 systemd-fsck[1070]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 00:42:35.120454 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 00:42:35.133392 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 00:42:35.239075 kernel: EXT4-fs (sda9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none. Jul 7 00:42:35.239513 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 00:42:35.248040 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 00:42:35.265026 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:42:35.275633 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 00:42:35.296498 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 00:42:35.313065 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 (8:6) scanned by mount (1079) Jul 7 00:42:35.332448 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:42:35.332589 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:42:35.339473 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:42:35.361432 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 7 00:42:35.361512 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 00:42:35.361545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:42:35.424253 coreos-metadata[1081]: Jul 07 00:42:35.413 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:42:35.391201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:42:35.456210 coreos-metadata[1097]: Jul 07 00:42:35.413 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:42:35.415243 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 00:42:35.432098 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 7 00:42:35.490723 initrd-setup-root[1111]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:42:35.499186 initrd-setup-root[1118]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:42:35.509205 initrd-setup-root[1125]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:42:35.519232 initrd-setup-root[1132]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:42:35.550397 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:42:35.559966 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 00:42:35.590443 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:42:35.615328 kernel: BTRFS info (device sda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:42:35.608640 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:42:35.637021 ignition[1199]: INFO : Ignition 2.21.0 Jul 7 00:42:35.637021 ignition[1199]: INFO : Stage: mount Jul 7 00:42:35.650252 ignition[1199]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:42:35.650252 ignition[1199]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:42:35.650252 ignition[1199]: INFO : mount: mount passed Jul 7 00:42:35.650252 ignition[1199]: INFO : POST message to Packet Timeline Jul 7 00:42:35.650252 ignition[1199]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:42:35.647740 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:42:36.270925 coreos-metadata[1097]: Jul 07 00:42:36.270 INFO Fetch successful Jul 7 00:42:36.350537 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 7 00:42:36.350592 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jul 7 00:42:36.420090 coreos-metadata[1081]: Jul 07 00:42:36.420 INFO Fetch successful Jul 7 00:42:36.460909 coreos-metadata[1081]: Jul 07 00:42:36.460 INFO wrote hostname ci-4344.1.1-a-7d9f698c61 to /sysroot/etc/hostname Jul 7 00:42:36.462727 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 00:42:36.683923 ignition[1199]: INFO : GET result: OK Jul 7 00:42:37.060029 ignition[1199]: INFO : Ignition finished successfully Jul 7 00:42:37.065363 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:42:37.077847 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 00:42:37.110841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:42:37.167792 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 (8:6) scanned by mount (1225) Jul 7 00:42:37.167815 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:42:37.175879 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:42:37.181777 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:42:37.187565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 00:42:37.217117 ignition[1242]: INFO : Ignition 2.21.0 Jul 7 00:42:37.217117 ignition[1242]: INFO : Stage: files Jul 7 00:42:37.229275 ignition[1242]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:42:37.229275 ignition[1242]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:42:37.229275 ignition[1242]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:42:37.229275 ignition[1242]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:42:37.229275 ignition[1242]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:42:37.229275 ignition[1242]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:42:37.229275 ignition[1242]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:42:37.229275 ignition[1242]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:42:37.229275 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 00:42:37.229275 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 7 00:42:37.220605 unknown[1242]: wrote ssh authorized keys file for user: core Jul 7 00:42:37.357177 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:42:37.417260 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 00:42:37.417260 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:42:37.447244 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 7 00:42:38.301466 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 00:42:38.555229 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:42:38.555229 ignition[1242]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 00:42:38.582270 ignition[1242]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:42:38.582270 ignition[1242]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:42:38.582270 ignition[1242]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 00:42:38.582270 ignition[1242]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:42:38.582270 ignition[1242]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:42:38.582270 ignition[1242]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:42:38.582270 ignition[1242]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:42:38.582270 ignition[1242]: INFO : files: files passed Jul 7 00:42:38.582270 ignition[1242]: INFO : POST message to Packet Timeline Jul 7 00:42:38.582270 ignition[1242]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:42:39.627001 ignition[1242]: INFO : GET result: OK Jul 7 00:42:39.996520 ignition[1242]: INFO : Ignition finished successfully Jul 7 00:42:40.000930 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:42:40.017533 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:42:40.031724 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:42:40.061445 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:42:40.061525 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:42:40.078512 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:42:40.107376 initrd-setup-root-after-ignition[1285]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:42:40.107376 initrd-setup-root-after-ignition[1285]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:42:40.098664 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:42:40.152295 initrd-setup-root-after-ignition[1289]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:42:40.118268 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:42:40.240264 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jul 7 00:42:40.240327 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:42:40.240669 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:42:40.276299 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:42:40.294518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:42:40.296788 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:42:40.352827 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:42:40.366050 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:42:40.429606 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:42:40.439697 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:42:40.459771 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:42:40.478667 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:42:40.479036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:42:40.514427 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:42:40.523675 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:42:40.540610 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:42:40.556591 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:42:40.575596 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:42:40.594699 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:42:40.614689 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:42:40.632731 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:42:40.651795 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:42:40.670721 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:42:40.689538 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:42:40.705595 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:42:40.706053 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:42:40.739473 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:42:40.748635 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:42:40.767550 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:42:40.768009 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:42:40.787606 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:42:40.788055 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:42:40.816583 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:42:40.817081 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:42:40.834808 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:42:40.850464 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:42:40.850936 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:42:40.870734 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 7 00:42:40.889705 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:42:40.905650 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:42:40.905983 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:42:40.924743 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:42:40.925091 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:42:40.945653 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:42:40.946117 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:42:41.075270 ignition[1310]: INFO : Ignition 2.21.0 Jul 7 00:42:41.075270 ignition[1310]: INFO : Stage: umount Jul 7 00:42:41.075270 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:42:41.075270 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 00:42:41.075270 ignition[1310]: INFO : umount: umount passed Jul 7 00:42:41.075270 ignition[1310]: INFO : POST message to Packet Timeline Jul 7 00:42:41.075270 ignition[1310]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 00:42:40.962707 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:42:40.963152 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:42:40.979624 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 00:42:40.980037 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 00:42:40.998908 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:42:41.011264 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:42:41.011351 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:42:41.029641 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:42:41.038210 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:42:41.038378 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:42:41.065355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:42:41.065427 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:42:41.100674 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:42:41.101509 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:42:41.101595 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:42:41.128323 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:42:41.128540 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:42:41.978044 ignition[1310]: INFO : GET result: OK Jul 7 00:42:42.343218 ignition[1310]: INFO : Ignition finished successfully Jul 7 00:42:42.346761 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:42:42.347075 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:42:42.361238 systemd[1]: Stopped target network.target - Network. Jul 7 00:42:42.374323 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:42:42.374483 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:42:42.393403 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:42:42.393541 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:42:42.409440 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jul 7 00:42:42.409584 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:42:42.425547 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:42:42.425707 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:42:42.443525 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:42:42.443698 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:42:42.459840 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:42:42.476505 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:42:42.494207 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:42:42.494461 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:42:42.515372 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:42:42.515939 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:42:42.516227 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:42:42.522728 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:42:42.524528 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:42:42.545398 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:42:42.545527 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:42:42.567434 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:42:42.582263 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:42:42.582307 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:42:42.601272 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:42:42.601308 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:42:42.621375 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:42:42.621432 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:42:42.638285 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:42:42.638450 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:42:42.658610 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:42:42.682441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:42:42.682616 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:42:42.683696 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:42:42.684047 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:42:42.701667 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:42:42.701800 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:42:42.708268 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:42:42.708287 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:42:42.745454 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:42:42.745585 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:42:42.773477 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 7 00:42:42.773703 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:42:42.810273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:42:42.810507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:42:42.839478 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:42:42.856254 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:42:42.856493 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:42:42.866824 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:42:42.866965 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:42:43.151135 systemd-journald[299]: Received SIGTERM from PID 1 (systemd). Jul 7 00:42:42.898567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:42:42.898705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:42:42.921298 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:42:42.921444 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:42:42.921562 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:42:42.922583 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:42:42.922799 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:42:43.005082 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:42:43.005364 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:42:43.017192 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:42:43.036273 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:42:43.087340 systemd[1]: Switching root. Jul 7 00:42:43.265136 systemd-journald[299]: Journal stopped Jul 7 00:42:44.939800 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:42:44.939815 kernel: SELinux: policy capability open_perms=1 Jul 7 00:42:44.939824 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:42:44.939829 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:42:44.939835 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:42:44.939840 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:42:44.939846 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:42:44.939852 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:42:44.939858 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:42:44.939864 kernel: audit: type=1403 audit(1751848963.387:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:42:44.939871 systemd[1]: Successfully loaded SELinux policy in 80.680ms. Jul 7 00:42:44.939879 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.222ms. 
Jul 7 00:42:44.939886 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:42:44.939892 systemd[1]: Detected architecture x86-64. Jul 7 00:42:44.939900 systemd[1]: Detected first boot. Jul 7 00:42:44.939907 systemd[1]: Hostname set to . Jul 7 00:42:44.939913 systemd[1]: Initializing machine ID from random generator. Jul 7 00:42:44.939921 zram_generator::config[1364]: No configuration found. Jul 7 00:42:44.939928 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:42:44.939935 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:42:44.939943 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:42:44.939950 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:42:44.939956 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:42:44.939963 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:42:44.939969 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:42:44.939976 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:42:44.939983 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:42:44.939990 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:42:44.939997 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:42:44.940004 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:42:44.940011 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:42:44.940017 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:42:44.940024 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:42:44.940031 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:42:44.940038 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:42:44.940045 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:42:44.940053 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:42:44.940063 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jul 7 00:42:44.940070 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:42:44.940077 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:42:44.940085 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:42:44.940092 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:42:44.940099 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:42:44.940107 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:42:44.940114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 7 00:42:44.940121 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:42:44.940128 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:42:44.940135 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:42:44.940142 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:42:44.940148 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:42:44.940155 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:42:44.940164 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:42:44.940171 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:42:44.940178 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:42:44.940185 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:42:44.940192 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:42:44.940200 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:42:44.940207 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:42:44.940214 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:42:44.940222 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:42:44.940229 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:42:44.940236 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:42:44.940243 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:42:44.940250 systemd[1]: Reached target machines.target - Containers. Jul 7 00:42:44.940258 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:42:44.940266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:42:44.940273 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:42:44.940280 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:42:44.940287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:42:44.940294 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:42:44.940301 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:42:44.940308 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:42:44.940315 kernel: ACPI: bus type drm_connector registered Jul 7 00:42:44.940322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:42:44.940329 kernel: fuse: init (API version 7.41) Jul 7 00:42:44.940336 kernel: loop: module loaded Jul 7 00:42:44.940342 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:42:44.940349 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:42:44.940356 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:42:44.940363 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 7 00:42:44.940370 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:42:44.940379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:42:44.940386 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:42:44.940393 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:42:44.940400 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:42:44.940418 systemd-journald[1467]: Collecting audit messages is disabled. Jul 7 00:42:44.940434 systemd-journald[1467]: Journal started Jul 7 00:42:44.940449 systemd-journald[1467]: Runtime Journal (/run/log/journal/c299ed155bd64c95a929499bfec13c57) is 8M, max 640.1M, 632.1M free. Jul 7 00:42:43.808258 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:42:43.821005 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 7 00:42:43.821254 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:42:44.957105 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:42:44.978111 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:42:44.999127 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:42:45.020313 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:42:45.020335 systemd[1]: Stopped verity-setup.service. Jul 7 00:42:45.045104 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:42:45.053105 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:42:45.061523 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:42:45.070216 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:42:45.079353 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:42:45.088338 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:42:45.097318 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:42:45.106516 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:42:45.115977 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:42:45.126910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:42:45.137901 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:42:45.138373 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:42:45.148949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:42:45.149434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:42:45.159947 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:42:45.160422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:42:45.169950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:42:45.170426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:42:45.180940 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 7 00:42:45.181410 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:42:45.190958 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:42:45.191624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:42:45.201143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:42:45.211040 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:42:45.222010 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:42:45.233033 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:42:45.243999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:42:45.275953 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:42:45.288563 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:42:45.317370 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:42:45.326231 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:42:45.326255 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:42:45.335980 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:42:45.347334 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:42:45.356353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:42:45.369008 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:42:45.387263 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:42:45.398188 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:42:45.405292 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:42:45.408017 systemd-journald[1467]: Time spent on flushing to /var/log/journal/c299ed155bd64c95a929499bfec13c57 is 13.689ms for 1394 entries. Jul 7 00:42:45.408017 systemd-journald[1467]: System Journal (/var/log/journal/c299ed155bd64c95a929499bfec13c57) is 8M, max 195.6M, 187.6M free. Jul 7 00:42:45.438826 systemd-journald[1467]: Received client request to flush runtime journal. Jul 7 00:42:45.422174 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:42:45.422855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:42:45.431801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:42:45.442769 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:42:45.452064 kernel: loop0: detected capacity change from 0 to 229808 Jul 7 00:42:45.458206 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:42:45.472978 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:42:45.474066 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:42:45.483831 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jul 7 00:42:45.494342 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:42:45.504247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:42:45.513102 kernel: loop1: detected capacity change from 0 to 146240 Jul 7 00:42:45.520272 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:42:45.530615 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:42:45.540857 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:42:45.559301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:42:45.571066 kernel: loop2: detected capacity change from 0 to 8 Jul 7 00:42:45.576742 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:42:45.577144 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:42:45.587127 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jul 7 00:42:45.587137 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jul 7 00:42:45.589554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:42:45.600122 kernel: loop3: detected capacity change from 0 to 113872 Jul 7 00:42:45.654103 kernel: loop4: detected capacity change from 0 to 229808 Jul 7 00:42:45.675112 kernel: loop5: detected capacity change from 0 to 146240 Jul 7 00:42:45.699097 kernel: loop6: detected capacity change from 0 to 8 Jul 7 00:42:45.706116 kernel: loop7: detected capacity change from 0 to 113872 Jul 7 00:42:45.719640 (sd-merge)[1524]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 7 00:42:45.719888 (sd-merge)[1524]: Merged extensions into '/usr'. Jul 7 00:42:45.722293 systemd[1]: Reload requested from client PID 1502 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:42:45.722301 systemd[1]: Reloading... Jul 7 00:42:45.749211 ldconfig[1497]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:42:45.757141 zram_generator::config[1549]: No configuration found. Jul 7 00:42:45.817852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:42:45.882077 systemd[1]: Reloading finished in 159 ms. Jul 7 00:42:45.901701 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:42:45.910586 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:42:45.920532 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:42:45.947557 systemd[1]: Starting ensure-sysext.service... Jul 7 00:42:45.955141 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:42:45.967212 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:42:45.978700 systemd-tmpfiles[1608]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:42:45.978733 systemd-tmpfiles[1608]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:42:45.978989 systemd-tmpfiles[1608]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
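The (sd-merge) messages above record systemd-sysext merging the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-packet' extension images into /usr (the surrounding loopN capacity-change messages are presumably those images being attached). A minimal sketch of how the available extension images could be enumerated on such a host; the search directories come from the systemd-sysext documentation, not from this log, so treat them as an assumption:

    from pathlib import Path

    # Typical systemd-sysext search locations (per documentation, not shown in this log).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions() -> list[str]:
        """Return names of raw images or directory trees systemd-sysext would consider."""
        found = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if not p.is_dir():
                continue
            for entry in sorted(p.iterdir()):
                if entry.suffix == ".raw" or entry.is_dir():
                    found.append(entry.name)
        return found

    if __name__ == "__main__":
        print(list_extensions())  # e.g. ['containerd-flatcar.raw', 'kubernetes.raw', ...]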
Jul 7 00:42:45.979289 systemd-tmpfiles[1608]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:42:45.980190 systemd-tmpfiles[1608]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:42:45.980494 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. Jul 7 00:42:45.980557 systemd-tmpfiles[1608]: ACLs are not supported, ignoring. Jul 7 00:42:45.983566 systemd-tmpfiles[1608]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:42:45.983573 systemd-tmpfiles[1608]: Skipping /boot Jul 7 00:42:45.984851 systemd[1]: Reload requested from client PID 1607 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:42:45.984863 systemd[1]: Reloading... Jul 7 00:42:45.993500 systemd-tmpfiles[1608]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:42:45.993508 systemd-tmpfiles[1608]: Skipping /boot Jul 7 00:42:46.004862 systemd-udevd[1609]: Using default interface naming scheme 'v255'. Jul 7 00:42:46.019101 zram_generator::config[1636]: No configuration found. Jul 7 00:42:46.069613 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 7 00:42:46.069678 kernel: ACPI: button: Sleep Button [SLPB] Jul 7 00:42:46.077406 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 00:42:46.085073 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:42:46.088069 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:42:46.088117 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 7 00:42:46.089116 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jul 7 00:42:46.105214 kernel: IPMI message handler: version 39.2 Jul 7 00:42:46.103755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:42:46.137073 kernel: ipmi device interface Jul 7 00:42:46.178071 kernel: MACsec IEEE 802.1AE Jul 7 00:42:46.178122 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 7 00:42:46.178244 kernel: ipmi_si: IPMI System Interface driver Jul 7 00:42:46.178255 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 7 00:42:46.178339 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 7 00:42:46.178350 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 7 00:42:46.178359 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 7 00:42:46.178440 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jul 7 00:42:46.178511 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 7 00:42:46.178578 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 7 00:42:46.178589 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 7 00:42:46.209596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Jul 7 00:42:46.212066 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Jul 7 00:42:46.212219 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 7 00:42:46.235065 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Jul 7 00:42:46.281097 kernel: iTCO_vendor_support: vendor-support=0 Jul 7 00:42:46.296236 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jul 7 00:42:46.296425 systemd[1]: Reloading finished in 311 ms. Jul 7 00:42:46.308195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:42:46.323071 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Jul 7 00:42:46.345051 kernel: intel_rapl_common: Found RAPL domain package Jul 7 00:42:46.345097 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 7 00:42:46.345216 kernel: intel_rapl_common: Found RAPL domain core Jul 7 00:42:46.355991 kernel: intel_rapl_common: Found RAPL domain dram Jul 7 00:42:46.365067 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 7 00:42:46.366557 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:42:46.387462 systemd[1]: Finished ensure-sysext.service. Jul 7 00:42:46.413548 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 7 00:42:46.422169 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:42:46.422828 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:42:46.439559 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:42:46.449203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:42:46.460041 augenrules[1832]: No rules Jul 7 00:42:46.466603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:42:46.487200 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:42:46.505288 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:42:46.515695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:42:46.524223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:42:46.524716 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:42:46.534137 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:42:46.534736 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:42:46.546001 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:42:46.546910 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:42:46.547729 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 00:42:46.571701 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:42:46.588254 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 00:42:46.598146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:42:46.598699 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:42:46.603216 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:42:46.613272 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:42:46.613418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:42:46.613507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:42:46.613643 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:42:46.613731 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:42:46.613867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:42:46.613951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:42:46.614088 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:42:46.614171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:42:46.614320 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:42:46.614475 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:42:46.618234 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:42:46.618299 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:42:46.618962 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:42:46.619781 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:42:46.619806 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:42:46.620005 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:42:46.633794 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:42:46.649680 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:42:46.683569 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 00:42:46.685003 systemd-resolved[1845]: Positive Trust Anchors: Jul 7 00:42:46.685010 systemd-resolved[1845]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:42:46.685036 systemd-resolved[1845]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:42:46.687583 systemd-resolved[1845]: Using system hostname 'ci-4344.1.1-a-7d9f698c61'. 
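The "Positive Trust Anchors" record above is the built-in DNSSEC root trust anchor used by systemd-resolved: a DS record for the root zone with key tag 20326 (the KSK-2017 root key), algorithm 8 (RSASHA256) and digest type 2 (SHA-256). A small sketch that splits such a record into named fields; the number-to-name mappings are general DNSSEC registry values, not something resolved emits:

    ALGORITHMS = {8: "RSASHA256", 13: "ECDSAP256SHA256"}  # common DNSSEC algorithm numbers
    DIGEST_TYPES = {1: "SHA-1", 2: "SHA-256"}

    def parse_ds(record: str) -> dict:
        """Split an '<owner> IN DS <keytag> <alg> <digest-type> <digest>' record into fields."""
        owner, _class, _type, key_tag, alg, digest_type, digest = record.split()
        return {
            "owner": owner,
            "key_tag": int(key_tag),
            "algorithm": ALGORITHMS.get(int(alg), alg),
            "digest_type": DIGEST_TYPES.get(int(digest_type), digest_type),
            "digest": digest,
        }

    print(parse_ds(". IN DS 20326 8 2 "
                   "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))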
Jul 7 00:42:46.691930 systemd-networkd[1844]: lo: Link UP Jul 7 00:42:46.691934 systemd-networkd[1844]: lo: Gained carrier Jul 7 00:42:46.694588 systemd-networkd[1844]: bond0: netdev ready Jul 7 00:42:46.695627 systemd-networkd[1844]: Enumeration completed Jul 7 00:42:46.706499 systemd-networkd[1844]: enp2s0f0np0: Configuring with /etc/systemd/network/10-b8:ce:f6:04:82:68.network. Jul 7 00:42:46.726488 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:42:46.735129 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:42:46.745259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:42:46.756116 systemd[1]: Reached target network.target - Network. Jul 7 00:42:46.763100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:42:46.774109 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:42:46.782149 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:42:46.793112 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:42:46.804100 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 00:42:46.815104 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:42:46.826099 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:42:46.826116 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:42:46.834097 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:42:46.843180 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:42:46.852147 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:42:46.863097 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:42:46.870914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:42:46.881754 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:42:46.891329 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:42:46.908446 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:42:46.917269 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:42:46.927739 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:42:46.938685 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:42:46.948418 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:42:46.957618 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:42:46.966161 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:42:46.973197 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:42:46.973214 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:42:46.973756 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:42:46.982844 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jul 7 00:42:46.991696 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:42:46.999643 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:42:47.004692 coreos-metadata[1884]: Jul 07 00:42:47.004 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:42:47.005638 coreos-metadata[1884]: Jul 07 00:42:47.005 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 00:42:47.017344 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:42:47.036385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:42:47.039615 jq[1890]: false Jul 7 00:42:47.046157 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:42:47.054347 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:42:47.058991 extend-filesystems[1891]: Found /dev/sda6 Jul 7 00:42:47.063242 extend-filesystems[1891]: Found /dev/sda9 Jul 7 00:42:47.063242 extend-filesystems[1891]: Checking size of /dev/sda9 Jul 7 00:42:47.095130 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jul 7 00:42:47.094297 oslogin_cache_refresh[1892]: Refreshing passwd entry cache Jul 7 00:42:47.095293 extend-filesystems[1891]: Resized partition /dev/sda9 Jul 7 00:42:47.063828 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:42:47.103228 google_oslogin_nss_cache[1892]: oslogin_cache_refresh[1892]: Refreshing passwd entry cache Jul 7 00:42:47.103228 google_oslogin_nss_cache[1892]: oslogin_cache_refresh[1892]: Failure getting users, quitting Jul 7 00:42:47.103228 google_oslogin_nss_cache[1892]: oslogin_cache_refresh[1892]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:42:47.103228 google_oslogin_nss_cache[1892]: oslogin_cache_refresh[1892]: Refreshing group entry cache Jul 7 00:42:47.103228 google_oslogin_nss_cache[1892]: oslogin_cache_refresh[1892]: Failure getting groups, quitting Jul 7 00:42:47.103228 google_oslogin_nss_cache[1892]: oslogin_cache_refresh[1892]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:42:47.095437 oslogin_cache_refresh[1892]: Failure getting users, quitting Jul 7 00:42:47.103486 extend-filesystems[1903]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:42:47.090640 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:42:47.095445 oslogin_cache_refresh[1892]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:42:47.103761 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:42:47.095464 oslogin_cache_refresh[1892]: Refreshing group entry cache Jul 7 00:42:47.095746 oslogin_cache_refresh[1892]: Failure getting groups, quitting Jul 7 00:42:47.095751 oslogin_cache_refresh[1892]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:42:47.117710 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:42:47.144337 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:42:47.154168 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jul 7 00:42:47.161392 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jul 7 00:42:47.161719 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:42:47.177290 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:42:47.179808 systemd-logind[1917]: Watching system buttons on /dev/input/event3 (Power Button) Jul 7 00:42:47.179819 systemd-logind[1917]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 7 00:42:47.179829 systemd-logind[1917]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 7 00:42:47.179943 systemd-logind[1917]: New seat seat0. Jul 7 00:42:47.184536 update_engine[1922]: I20250707 00:42:47.184496 1922 main.cc:92] Flatcar Update Engine starting Jul 7 00:42:47.187800 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:42:47.188993 jq[1923]: true Jul 7 00:42:47.196701 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:42:47.206240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:42:47.206363 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:42:47.206507 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 00:42:47.206621 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:42:47.215291 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:42:47.215405 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:42:47.224589 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:42:47.224698 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:42:47.245954 jq[1926]: true Jul 7 00:42:47.246320 (ntainerd)[1927]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:42:47.259593 tar[1925]: linux-amd64/LICENSE Jul 7 00:42:47.259725 tar[1925]: linux-amd64/helm Jul 7 00:42:47.267480 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 7 00:42:47.267643 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jul 7 00:42:47.300577 dbus-daemon[1885]: [system] SELinux support is enabled Jul 7 00:42:47.300689 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:42:47.301261 bash[1954]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:42:47.302185 update_engine[1922]: I20250707 00:42:47.302158 1922 update_check_scheduler.cc:74] Next update check in 10m48s Jul 7 00:42:47.304090 sshd_keygen[1920]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:42:47.310916 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:42:47.321389 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:42:47.331647 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:42:47.331973 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:42:47.353595 systemd[1]: Starting sshkeys.service... Jul 7 00:42:47.359137 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:42:47.359163 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 7 00:42:47.369137 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:42:47.369150 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:42:47.379391 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:42:47.379518 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:42:47.392243 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:42:47.401903 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:42:47.412864 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 00:42:47.414684 containerd[1927]: time="2025-07-07T00:42:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:42:47.416173 containerd[1927]: time="2025-07-07T00:42:47.416154953Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:42:47.421298 containerd[1927]: time="2025-07-07T00:42:47.421260579Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.115µs" Jul 7 00:42:47.421298 containerd[1927]: time="2025-07-07T00:42:47.421273843Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:42:47.421298 containerd[1927]: time="2025-07-07T00:42:47.421284795Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:42:47.421365 containerd[1927]: time="2025-07-07T00:42:47.421359331Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:42:47.421380 containerd[1927]: time="2025-07-07T00:42:47.421368779Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:42:47.421397 containerd[1927]: time="2025-07-07T00:42:47.421382206Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421422 containerd[1927]: time="2025-07-07T00:42:47.421412389Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421446 containerd[1927]: time="2025-07-07T00:42:47.421420594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421556 containerd[1927]: time="2025-07-07T00:42:47.421543427Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421556 containerd[1927]: time="2025-07-07T00:42:47.421554449Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421586 containerd[1927]: time="2025-07-07T00:42:47.421561657Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 
00:42:47.421586 containerd[1927]: time="2025-07-07T00:42:47.421566733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421619 containerd[1927]: time="2025-07-07T00:42:47.421605699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421730 containerd[1927]: time="2025-07-07T00:42:47.421721821Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421751 containerd[1927]: time="2025-07-07T00:42:47.421741719Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:42:47.421751 containerd[1927]: time="2025-07-07T00:42:47.421748194Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:42:47.421784 containerd[1927]: time="2025-07-07T00:42:47.421766852Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:42:47.421917 containerd[1927]: time="2025-07-07T00:42:47.421909006Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:42:47.421947 containerd[1927]: time="2025-07-07T00:42:47.421939790Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:42:47.429442 containerd[1927]: time="2025-07-07T00:42:47.429391262Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:42:47.429442 containerd[1927]: time="2025-07-07T00:42:47.429415820Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:42:47.429442 containerd[1927]: time="2025-07-07T00:42:47.429425099Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:42:47.429442 containerd[1927]: time="2025-07-07T00:42:47.429431875Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:42:47.429442 containerd[1927]: time="2025-07-07T00:42:47.429438871Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:42:47.429524 containerd[1927]: time="2025-07-07T00:42:47.429444803Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:42:47.429524 containerd[1927]: time="2025-07-07T00:42:47.429457006Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:42:47.429524 containerd[1927]: time="2025-07-07T00:42:47.429466814Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:42:47.429524 containerd[1927]: time="2025-07-07T00:42:47.429473794Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:42:47.429524 containerd[1927]: time="2025-07-07T00:42:47.429479537Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:42:47.429524 containerd[1927]: time="2025-07-07T00:42:47.429484468Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:42:47.429678 containerd[1927]: 
time="2025-07-07T00:42:47.429602295Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:42:47.429828 containerd[1927]: time="2025-07-07T00:42:47.429811693Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:42:47.429869 containerd[1927]: time="2025-07-07T00:42:47.429835849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:42:47.429869 containerd[1927]: time="2025-07-07T00:42:47.429855601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:42:47.429921 containerd[1927]: time="2025-07-07T00:42:47.429907180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:42:47.429946 containerd[1927]: time="2025-07-07T00:42:47.429924641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:42:47.430012 containerd[1927]: time="2025-07-07T00:42:47.429939644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:42:47.430044 containerd[1927]: time="2025-07-07T00:42:47.430020196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:42:47.430044 containerd[1927]: time="2025-07-07T00:42:47.430030353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:42:47.430044 containerd[1927]: time="2025-07-07T00:42:47.430038282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:42:47.430130 containerd[1927]: time="2025-07-07T00:42:47.430044875Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:42:47.430130 containerd[1927]: time="2025-07-07T00:42:47.430052256Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:42:47.430130 containerd[1927]: time="2025-07-07T00:42:47.430093966Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:42:47.430130 containerd[1927]: time="2025-07-07T00:42:47.430102437Z" level=info msg="Start snapshots syncer" Jul 7 00:42:47.430130 containerd[1927]: time="2025-07-07T00:42:47.430115673Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:42:47.430270 containerd[1927]: time="2025-07-07T00:42:47.430249051Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:42:47.430360 containerd[1927]: time="2025-07-07T00:42:47.430277767Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:42:47.430665 containerd[1927]: time="2025-07-07T00:42:47.430654200Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:42:47.430730 containerd[1927]: time="2025-07-07T00:42:47.430720113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:42:47.430759 containerd[1927]: time="2025-07-07T00:42:47.430735799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:42:47.430759 containerd[1927]: time="2025-07-07T00:42:47.430743583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:42:47.430759 containerd[1927]: time="2025-07-07T00:42:47.430749332Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:42:47.430759 containerd[1927]: time="2025-07-07T00:42:47.430757951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:42:47.430850 containerd[1927]: time="2025-07-07T00:42:47.430766685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:42:47.430850 containerd[1927]: time="2025-07-07T00:42:47.430773205Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:42:47.430850 containerd[1927]: time="2025-07-07T00:42:47.430786857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:42:47.430850 containerd[1927]: 
time="2025-07-07T00:42:47.430794023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:42:47.430850 containerd[1927]: time="2025-07-07T00:42:47.430802363Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:42:47.431118 containerd[1927]: time="2025-07-07T00:42:47.431108491Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:42:47.431150 containerd[1927]: time="2025-07-07T00:42:47.431120273Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:42:47.431150 containerd[1927]: time="2025-07-07T00:42:47.431125806Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:42:47.431150 containerd[1927]: time="2025-07-07T00:42:47.431131202Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:42:47.431150 containerd[1927]: time="2025-07-07T00:42:47.431135947Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:42:47.431150 containerd[1927]: time="2025-07-07T00:42:47.431141559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:42:47.431150 containerd[1927]: time="2025-07-07T00:42:47.431147700Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:42:47.431291 containerd[1927]: time="2025-07-07T00:42:47.431159071Z" level=info msg="runtime interface created" Jul 7 00:42:47.431291 containerd[1927]: time="2025-07-07T00:42:47.431162688Z" level=info msg="created NRI interface" Jul 7 00:42:47.431291 containerd[1927]: time="2025-07-07T00:42:47.431167446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:42:47.431291 containerd[1927]: time="2025-07-07T00:42:47.431173340Z" level=info msg="Connect containerd service" Jul 7 00:42:47.431291 containerd[1927]: time="2025-07-07T00:42:47.431187626Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:42:47.431578 containerd[1927]: time="2025-07-07T00:42:47.431567535Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:42:47.434363 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:42:47.445402 coreos-metadata[1985]: Jul 07 00:42:47.445 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 00:42:47.446161 coreos-metadata[1985]: Jul 07 00:42:47.446 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 00:42:47.449359 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:42:47.459091 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:42:47.470575 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:42:47.478895 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. 
Jul 7 00:42:47.484681 locksmithd[1996]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:42:47.489295 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:42:47.536494 containerd[1927]: time="2025-07-07T00:42:47.536420514Z" level=info msg="Start subscribing containerd event" Jul 7 00:42:47.536494 containerd[1927]: time="2025-07-07T00:42:47.536466764Z" level=info msg="Start recovering state" Jul 7 00:42:47.536494 containerd[1927]: time="2025-07-07T00:42:47.536480594Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:42:47.536618 containerd[1927]: time="2025-07-07T00:42:47.536520587Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:42:47.536618 containerd[1927]: time="2025-07-07T00:42:47.536537554Z" level=info msg="Start event monitor" Jul 7 00:42:47.536618 containerd[1927]: time="2025-07-07T00:42:47.536548186Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:42:47.536618 containerd[1927]: time="2025-07-07T00:42:47.536554062Z" level=info msg="Start streaming server" Jul 7 00:42:47.536618 containerd[1927]: time="2025-07-07T00:42:47.536566125Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:42:47.536618 containerd[1927]: time="2025-07-07T00:42:47.536574962Z" level=info msg="runtime interface starting up..." Jul 7 00:42:47.536618 containerd[1927]: time="2025-07-07T00:42:47.536582014Z" level=info msg="starting plugins..." Jul 7 00:42:47.536618 containerd[1927]: time="2025-07-07T00:42:47.536593269Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:42:47.536779 containerd[1927]: time="2025-07-07T00:42:47.536722238Z" level=info msg="containerd successfully booted in 0.122238s" Jul 7 00:42:47.536781 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:42:47.541066 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jul 7 00:42:47.544834 tar[1925]: linux-amd64/README.md Jul 7 00:42:47.553067 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Jul 7 00:42:47.554302 systemd-networkd[1844]: enp2s0f1np1: Configuring with /etc/systemd/network/10-b8:ce:f6:04:82:69.network. Jul 7 00:42:47.569101 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:42:47.721118 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jul 7 00:42:47.732106 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Jul 7 00:42:47.732195 systemd-networkd[1844]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 7 00:42:47.732809 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:42:47.733372 systemd-networkd[1844]: enp2s0f0np0: Link UP Jul 7 00:42:47.733565 systemd-networkd[1844]: enp2s0f0np0: Gained carrier Jul 7 00:42:47.744066 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 7 00:42:47.744118 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jul 7 00:42:47.760173 systemd-networkd[1844]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:ce:f6:04:82:68.network. 
Jul 7 00:42:47.760388 systemd-networkd[1844]: enp2s0f1np1: Link UP Jul 7 00:42:47.760583 systemd-networkd[1844]: enp2s0f1np1: Gained carrier Jul 7 00:42:47.768225 systemd-networkd[1844]: bond0: Link UP Jul 7 00:42:47.768476 systemd-networkd[1844]: bond0: Gained carrier Jul 7 00:42:47.768633 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:42:47.769163 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:42:47.769504 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:42:47.769652 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:42:47.780748 extend-filesystems[1903]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 7 00:42:47.780748 extend-filesystems[1903]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 7 00:42:47.780748 extend-filesystems[1903]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jul 7 00:42:47.810157 extend-filesystems[1891]: Resized filesystem in /dev/sda9 Jul 7 00:42:47.781560 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:42:47.781684 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:42:47.846644 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Jul 7 00:42:47.846659 kernel: bond0: active interface up! Jul 7 00:42:47.963125 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 7 00:42:48.005749 coreos-metadata[1884]: Jul 07 00:42:48.005 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 00:42:48.446262 coreos-metadata[1985]: Jul 07 00:42:48.446 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 00:42:49.129382 systemd-networkd[1844]: bond0: Gained IPv6LL Jul 7 00:42:49.130908 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:42:49.385571 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:42:49.385632 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:42:49.386742 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:42:49.399077 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:42:49.412484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:42:49.436337 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:42:49.454892 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:42:50.172496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:42:50.182581 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:42:50.627135 kubelet[2044]: E0707 00:42:50.627071 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:42:50.628313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:42:50.628431 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
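The extend-filesystems and EXT4-fs records above show the root filesystem on /dev/sda9 being grown on-line from 553472 to 116605649 4 KiB blocks. A quick back-of-the-envelope check of what those block counts mean, using only values copied from the log:

    BLOCK = 4096  # 4 KiB ext4 block size, per the "(4k) blocks" note above

    before_blocks = 553_472      # root filesystem size before the resize
    after_blocks = 116_605_649   # size reported after the on-line resize

    print(f"before: {before_blocks * BLOCK / 1e9:.2f} GB")            # ~2.27 GB
    print(f"after:  {after_blocks * BLOCK / 1e9:.2f} GB "             # ~477.62 GB
          f"({after_blocks * BLOCK / 2**30:.1f} GiB)")                # ~444.8 GiB

Roughly 477.6 GB after the resize is consistent with the root partition being expanded on first boot to fill the 480 GB Micron 5300 (MTFDDAK480TDT) reported as the OEM device earlier in this log.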
Jul 7 00:42:50.628633 systemd[1]: kubelet.service: Consumed 615ms CPU time, 276.3M memory peak. Jul 7 00:42:51.146868 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:42:51.155879 systemd[1]: Started sshd@0-139.178.70.5:22-139.178.68.195:36774.service - OpenSSH per-connection server daemon (139.178.68.195:36774). Jul 7 00:42:51.215923 sshd[2062]: Accepted publickey for core from 139.178.68.195 port 36774 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:42:51.216821 sshd-session[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:42:51.224084 systemd-logind[1917]: New session 1 of user core. Jul 7 00:42:51.224987 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:42:51.233773 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:42:51.262114 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:42:51.273776 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:42:51.293694 (systemd)[2066]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:42:51.296142 systemd-logind[1917]: New session c1 of user core. Jul 7 00:42:51.370412 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Jul 7 00:42:51.370546 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Jul 7 00:42:51.436650 systemd[2066]: Queued start job for default target default.target. Jul 7 00:42:51.450635 systemd[2066]: Created slice app.slice - User Application Slice. Jul 7 00:42:51.450670 systemd[2066]: Reached target paths.target - Paths. Jul 7 00:42:51.450691 systemd[2066]: Reached target timers.target - Timers. Jul 7 00:42:51.451392 systemd[2066]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:42:51.457238 systemd[2066]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:42:51.457266 systemd[2066]: Reached target sockets.target - Sockets. Jul 7 00:42:51.457289 systemd[2066]: Reached target basic.target - Basic System. Jul 7 00:42:51.457310 systemd[2066]: Reached target default.target - Main User Target. Jul 7 00:42:51.457325 systemd[2066]: Startup finished in 154ms. Jul 7 00:42:51.457376 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:42:51.467013 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:42:51.530037 systemd[1]: Started sshd@1-139.178.70.5:22-139.178.68.195:56846.service - OpenSSH per-connection server daemon (139.178.68.195:56846). Jul 7 00:42:51.574654 sshd[2079]: Accepted publickey for core from 139.178.68.195 port 56846 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:42:51.575258 sshd-session[2079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:42:51.577652 systemd-logind[1917]: New session 2 of user core. Jul 7 00:42:51.596231 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:42:51.654793 sshd[2081]: Connection closed by 139.178.68.195 port 56846 Jul 7 00:42:51.654957 sshd-session[2079]: pam_unix(sshd:session): session closed for user core Jul 7 00:42:51.663127 systemd[1]: sshd@1-139.178.70.5:22-139.178.68.195:56846.service: Deactivated successfully. Jul 7 00:42:51.663892 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:42:51.664397 systemd-logind[1917]: Session 2 logged out. Waiting for processes to exit. 
Jul 7 00:42:51.665442 systemd[1]: Started sshd@2-139.178.70.5:22-139.178.68.195:56854.service - OpenSSH per-connection server daemon (139.178.68.195:56854). Jul 7 00:42:51.675919 systemd-logind[1917]: Removed session 2. Jul 7 00:42:51.718751 sshd[2087]: Accepted publickey for core from 139.178.68.195 port 56854 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:42:51.719434 sshd-session[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:42:51.722076 systemd-logind[1917]: New session 3 of user core. Jul 7 00:42:51.741249 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:42:51.796447 sshd[2090]: Connection closed by 139.178.68.195 port 56854 Jul 7 00:42:51.796611 sshd-session[2087]: pam_unix(sshd:session): session closed for user core Jul 7 00:42:51.798216 systemd[1]: sshd@2-139.178.70.5:22-139.178.68.195:56854.service: Deactivated successfully. Jul 7 00:42:51.799026 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:42:51.799512 systemd-logind[1917]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:42:51.800042 systemd-logind[1917]: Removed session 3. Jul 7 00:42:52.134013 coreos-metadata[1985]: Jul 07 00:42:52.133 INFO Fetch successful Jul 7 00:42:52.165041 unknown[1985]: wrote ssh authorized keys file for user: core Jul 7 00:42:52.196335 update-ssh-keys[2096]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:42:52.196966 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:42:52.209256 systemd[1]: Finished sshkeys.service. Jul 7 00:42:52.448533 coreos-metadata[1884]: Jul 07 00:42:52.448 INFO Fetch successful Jul 7 00:42:52.512678 login[2004]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 00:42:52.515734 systemd-logind[1917]: New session 4 of user core. Jul 7 00:42:52.516384 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:42:52.518104 login[2003]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 00:42:52.520628 systemd-logind[1917]: New session 5 of user core. Jul 7 00:42:52.520998 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:42:52.521950 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:42:52.522987 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jul 7 00:42:52.858585 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 7 00:42:52.859940 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:42:52.860446 systemd[1]: Startup finished in 5.473s (kernel) + 23.088s (initrd) + 9.553s (userspace) = 38.116s. Jul 7 00:42:54.407608 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:43:00.812265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:43:00.815765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:43:01.092996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:43:01.099055 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:43:01.128853 kubelet[2138]: E0707 00:43:01.128780 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:43:01.130897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:43:01.130974 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:43:01.131183 systemd[1]: kubelet.service: Consumed 160ms CPU time, 114.4M memory peak. Jul 7 00:43:01.818963 systemd[1]: Started sshd@3-139.178.70.5:22-139.178.68.195:46828.service - OpenSSH per-connection server daemon (139.178.68.195:46828). Jul 7 00:43:01.861823 sshd[2155]: Accepted publickey for core from 139.178.68.195 port 46828 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:43:01.862442 sshd-session[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:43:01.864929 systemd-logind[1917]: New session 6 of user core. Jul 7 00:43:01.876329 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:43:01.926072 sshd[2157]: Connection closed by 139.178.68.195 port 46828 Jul 7 00:43:01.926349 sshd-session[2155]: pam_unix(sshd:session): session closed for user core Jul 7 00:43:01.949413 systemd[1]: sshd@3-139.178.70.5:22-139.178.68.195:46828.service: Deactivated successfully. Jul 7 00:43:01.950184 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:43:01.950677 systemd-logind[1917]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:43:01.951624 systemd[1]: Started sshd@4-139.178.70.5:22-139.178.68.195:46844.service - OpenSSH per-connection server daemon (139.178.68.195:46844). Jul 7 00:43:01.952116 systemd-logind[1917]: Removed session 6. Jul 7 00:43:02.001602 sshd[2163]: Accepted publickey for core from 139.178.68.195 port 46844 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:43:02.002173 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:43:02.004623 systemd-logind[1917]: New session 7 of user core. Jul 7 00:43:02.014326 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:43:02.061284 sshd[2166]: Connection closed by 139.178.68.195 port 46844 Jul 7 00:43:02.061609 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Jul 7 00:43:02.085829 systemd[1]: sshd@4-139.178.70.5:22-139.178.68.195:46844.service: Deactivated successfully. Jul 7 00:43:02.089909 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:43:02.092291 systemd-logind[1917]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:43:02.098582 systemd[1]: Started sshd@5-139.178.70.5:22-139.178.68.195:46854.service - OpenSSH per-connection server daemon (139.178.68.195:46854). Jul 7 00:43:02.100343 systemd-logind[1917]: Removed session 7. 
Jul 7 00:43:02.143173 sshd[2172]: Accepted publickey for core from 139.178.68.195 port 46854 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:43:02.143753 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:43:02.146284 systemd-logind[1917]: New session 8 of user core. Jul 7 00:43:02.159330 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:43:02.208479 sshd[2174]: Connection closed by 139.178.68.195 port 46854 Jul 7 00:43:02.208728 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Jul 7 00:43:02.227712 systemd[1]: sshd@5-139.178.70.5:22-139.178.68.195:46854.service: Deactivated successfully. Jul 7 00:43:02.230233 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:43:02.232351 systemd-logind[1917]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:43:02.237588 systemd[1]: Started sshd@6-139.178.70.5:22-139.178.68.195:46858.service - OpenSSH per-connection server daemon (139.178.68.195:46858). Jul 7 00:43:02.239333 systemd-logind[1917]: Removed session 8. Jul 7 00:43:02.332821 sshd[2180]: Accepted publickey for core from 139.178.68.195 port 46858 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:43:02.333489 sshd-session[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:43:02.336313 systemd-logind[1917]: New session 9 of user core. Jul 7 00:43:02.348220 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:43:02.404955 sudo[2183]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:43:02.405100 sudo[2183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:43:02.417685 sudo[2183]: pam_unix(sudo:session): session closed for user root Jul 7 00:43:02.418521 sshd[2182]: Connection closed by 139.178.68.195 port 46858 Jul 7 00:43:02.418697 sshd-session[2180]: pam_unix(sshd:session): session closed for user core Jul 7 00:43:02.429513 systemd[1]: sshd@6-139.178.70.5:22-139.178.68.195:46858.service: Deactivated successfully. Jul 7 00:43:02.430459 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:43:02.431064 systemd-logind[1917]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:43:02.432362 systemd[1]: Started sshd@7-139.178.70.5:22-139.178.68.195:46874.service - OpenSSH per-connection server daemon (139.178.68.195:46874). Jul 7 00:43:02.433018 systemd-logind[1917]: Removed session 9. Jul 7 00:43:02.465058 sshd[2189]: Accepted publickey for core from 139.178.68.195 port 46874 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:43:02.465676 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:43:02.468402 systemd-logind[1917]: New session 10 of user core. Jul 7 00:43:02.477342 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 7 00:43:02.527581 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:43:02.527777 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:43:02.530172 sudo[2193]: pam_unix(sudo:session): session closed for user root Jul 7 00:43:02.532785 sudo[2192]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:43:02.532926 sudo[2192]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:43:02.538578 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:43:02.573029 augenrules[2215]: No rules Jul 7 00:43:02.573425 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:43:02.573554 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:43:02.574008 sudo[2192]: pam_unix(sudo:session): session closed for user root Jul 7 00:43:02.574719 sshd[2191]: Connection closed by 139.178.68.195 port 46874 Jul 7 00:43:02.574900 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Jul 7 00:43:02.601779 systemd[1]: sshd@7-139.178.70.5:22-139.178.68.195:46874.service: Deactivated successfully. Jul 7 00:43:02.603019 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:43:02.603676 systemd-logind[1917]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:43:02.605327 systemd[1]: Started sshd@8-139.178.70.5:22-139.178.68.195:46878.service - OpenSSH per-connection server daemon (139.178.68.195:46878). Jul 7 00:43:02.605882 systemd-logind[1917]: Removed session 10. Jul 7 00:43:02.675977 sshd[2224]: Accepted publickey for core from 139.178.68.195 port 46878 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:43:02.677252 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:43:02.682106 systemd-logind[1917]: New session 11 of user core. Jul 7 00:43:02.693525 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:43:02.751524 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:43:02.751668 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:43:03.164261 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:43:03.177426 (dockerd)[2254]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:43:03.416122 dockerd[2254]: time="2025-07-07T00:43:03.416012420Z" level=info msg="Starting up" Jul 7 00:43:03.416888 dockerd[2254]: time="2025-07-07T00:43:03.416853361Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:43:03.442714 dockerd[2254]: time="2025-07-07T00:43:03.442665907Z" level=info msg="Loading containers: start." Jul 7 00:43:03.453097 kernel: Initializing XFRM netlink socket Jul 7 00:43:03.589769 systemd-timesyncd[1846]: Network configuration changed, trying to establish connection. Jul 7 00:43:03.609364 systemd-networkd[1844]: docker0: Link UP Jul 7 00:43:03.610549 dockerd[2254]: time="2025-07-07T00:43:03.610507053Z" level=info msg="Loading containers: done." 
Jul 7 00:43:03.617080 dockerd[2254]: time="2025-07-07T00:43:03.617024913Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:43:03.617151 dockerd[2254]: time="2025-07-07T00:43:03.617079628Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:43:03.617151 dockerd[2254]: time="2025-07-07T00:43:03.617133615Z" level=info msg="Initializing buildkit" Jul 7 00:43:03.628296 dockerd[2254]: time="2025-07-07T00:43:03.628250710Z" level=info msg="Completed buildkit initialization" Jul 7 00:43:03.630696 dockerd[2254]: time="2025-07-07T00:43:03.630627508Z" level=info msg="Daemon has completed initialization" Jul 7 00:43:03.630696 dockerd[2254]: time="2025-07-07T00:43:03.630657977Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:43:03.630761 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:42:20.280224 systemd-resolved[1845]: Clock change detected. Flushing caches. Jul 7 00:42:20.302356 systemd-journald[1467]: Time jumped backwards, rotating. Jul 7 00:42:20.280273 systemd-timesyncd[1846]: Contacted time server [2604:a880:1:20::17:5001]:123 (2.flatcar.pool.ntp.org). Jul 7 00:42:20.280296 systemd-timesyncd[1846]: Initial clock synchronization to Mon 2025-07-07 00:42:20.280184 UTC. Jul 7 00:42:20.887414 containerd[1927]: time="2025-07-07T00:42:20.887292788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 00:42:21.549794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1927495695.mount: Deactivated successfully. Jul 7 00:42:22.366036 containerd[1927]: time="2025-07-07T00:42:22.366010904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:22.366246 containerd[1927]: time="2025-07-07T00:42:22.366197766Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 7 00:42:22.366579 containerd[1927]: time="2025-07-07T00:42:22.366539099Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:22.367882 containerd[1927]: time="2025-07-07T00:42:22.367867660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:22.368438 containerd[1927]: time="2025-07-07T00:42:22.368420788Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.481058431s" Jul 7 00:42:22.368462 containerd[1927]: time="2025-07-07T00:42:22.368443672Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 7 00:42:22.368723 containerd[1927]: time="2025-07-07T00:42:22.368713328Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" 
Jul 7 00:42:23.428807 containerd[1927]: time="2025-07-07T00:42:23.428758761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:23.428995 containerd[1927]: time="2025-07-07T00:42:23.428928123Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 7 00:42:23.429356 containerd[1927]: time="2025-07-07T00:42:23.429320962Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:23.430679 containerd[1927]: time="2025-07-07T00:42:23.430639448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:23.431190 containerd[1927]: time="2025-07-07T00:42:23.431151876Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.062421893s" Jul 7 00:42:23.431230 containerd[1927]: time="2025-07-07T00:42:23.431190553Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 7 00:42:23.431580 containerd[1927]: time="2025-07-07T00:42:23.431538796Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 00:42:24.325207 containerd[1927]: time="2025-07-07T00:42:24.325155301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:24.325398 containerd[1927]: time="2025-07-07T00:42:24.325361884Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 7 00:42:24.325693 containerd[1927]: time="2025-07-07T00:42:24.325680852Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:24.326940 containerd[1927]: time="2025-07-07T00:42:24.326904892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:24.327515 containerd[1927]: time="2025-07-07T00:42:24.327472517Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 895.91825ms" Jul 7 00:42:24.327515 containerd[1927]: time="2025-07-07T00:42:24.327488972Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 7 00:42:24.327786 containerd[1927]: 
time="2025-07-07T00:42:24.327722247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 00:42:25.143229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168526600.mount: Deactivated successfully. Jul 7 00:42:25.355848 containerd[1927]: time="2025-07-07T00:42:25.355783783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:25.356039 containerd[1927]: time="2025-07-07T00:42:25.355985935Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 7 00:42:25.356367 containerd[1927]: time="2025-07-07T00:42:25.356326907Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:25.357104 containerd[1927]: time="2025-07-07T00:42:25.357060476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:25.357475 containerd[1927]: time="2025-07-07T00:42:25.357433076Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.029694069s" Jul 7 00:42:25.357475 containerd[1927]: time="2025-07-07T00:42:25.357449230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 7 00:42:25.357732 containerd[1927]: time="2025-07-07T00:42:25.357690847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 00:42:25.881091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130244623.mount: Deactivated successfully. 
Jul 7 00:42:26.461427 containerd[1927]: time="2025-07-07T00:42:26.461402532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:26.461618 containerd[1927]: time="2025-07-07T00:42:26.461580842Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 7 00:42:26.462014 containerd[1927]: time="2025-07-07T00:42:26.461974644Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:26.463451 containerd[1927]: time="2025-07-07T00:42:26.463407204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:26.464053 containerd[1927]: time="2025-07-07T00:42:26.463989457Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.106283437s" Jul 7 00:42:26.464053 containerd[1927]: time="2025-07-07T00:42:26.464020500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 7 00:42:26.464826 containerd[1927]: time="2025-07-07T00:42:26.464808877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:42:26.967679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949314417.mount: Deactivated successfully. 
Jul 7 00:42:26.968715 containerd[1927]: time="2025-07-07T00:42:26.968697730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:42:26.968874 containerd[1927]: time="2025-07-07T00:42:26.968863018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:42:26.969327 containerd[1927]: time="2025-07-07T00:42:26.969291937Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:42:26.970167 containerd[1927]: time="2025-07-07T00:42:26.970118487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:42:26.970542 containerd[1927]: time="2025-07-07T00:42:26.970501757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.671147ms" Jul 7 00:42:26.970542 containerd[1927]: time="2025-07-07T00:42:26.970516872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:42:26.970805 containerd[1927]: time="2025-07-07T00:42:26.970770190Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 00:42:27.479954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531903755.mount: Deactivated successfully. Jul 7 00:42:27.925586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:42:27.926663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:42:28.271714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:42:28.273896 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:42:28.293745 kubelet[2670]: E0707 00:42:28.293719 2670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:42:28.295259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:42:28.295348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:42:28.295546 systemd[1]: kubelet.service: Consumed 119ms CPU time, 114.6M memory peak. 
Jul 7 00:42:28.614305 containerd[1927]: time="2025-07-07T00:42:28.614215452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:28.614482 containerd[1927]: time="2025-07-07T00:42:28.614368480Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 7 00:42:28.614772 containerd[1927]: time="2025-07-07T00:42:28.614733358Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:28.616283 containerd[1927]: time="2025-07-07T00:42:28.616244138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:28.616899 containerd[1927]: time="2025-07-07T00:42:28.616852575Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.646062474s" Jul 7 00:42:28.616899 containerd[1927]: time="2025-07-07T00:42:28.616874668Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 7 00:42:30.789149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:42:30.789257 systemd[1]: kubelet.service: Consumed 119ms CPU time, 114.6M memory peak. Jul 7 00:42:30.790538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:42:30.805054 systemd[1]: Reload requested from client PID 2741 ('systemctl') (unit session-11.scope)... Jul 7 00:42:30.805061 systemd[1]: Reloading... Jul 7 00:42:30.848188 zram_generator::config[2787]: No configuration found. Jul 7 00:42:30.906928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:42:31.000009 systemd[1]: Reloading finished in 194 ms. Jul 7 00:42:31.043781 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:42:31.043825 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:42:31.043961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:42:31.043986 systemd[1]: kubelet.service: Consumed 57ms CPU time, 98.2M memory peak. Jul 7 00:42:31.044777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:42:31.290529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:42:31.292702 (kubelet)[2854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:42:31.311565 kubelet[2854]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:42:31.311565 kubelet[2854]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 7 00:42:31.311565 kubelet[2854]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:42:31.311565 kubelet[2854]: I0707 00:42:31.311507 2854 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:42:31.695717 kubelet[2854]: I0707 00:42:31.695677 2854 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:42:31.695717 kubelet[2854]: I0707 00:42:31.695688 2854 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:42:31.695829 kubelet[2854]: I0707 00:42:31.695795 2854 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:42:31.737190 kubelet[2854]: I0707 00:42:31.737120 2854 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:42:31.737779 kubelet[2854]: E0707 00:42:31.737741 2854 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 00:42:31.744148 kubelet[2854]: I0707 00:42:31.744110 2854 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:42:31.753606 kubelet[2854]: I0707 00:42:31.753572 2854 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:42:31.753737 kubelet[2854]: I0707 00:42:31.753695 2854 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:42:31.753836 kubelet[2854]: I0707 00:42:31.753710 2854 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-7d9f698c61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:42:31.753836 kubelet[2854]: I0707 00:42:31.753807 2854 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:42:31.753836 kubelet[2854]: I0707 00:42:31.753814 2854 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:42:31.753945 kubelet[2854]: I0707 00:42:31.753888 2854 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:42:31.755974 kubelet[2854]: I0707 00:42:31.755936 2854 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:42:31.755974 kubelet[2854]: I0707 00:42:31.755946 2854 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:42:31.755974 kubelet[2854]: I0707 00:42:31.755959 2854 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:42:31.755974 kubelet[2854]: I0707 00:42:31.755966 2854 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:42:31.760086 kubelet[2854]: I0707 00:42:31.760075 2854 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:42:31.760563 kubelet[2854]: I0707 00:42:31.760510 2854 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:42:31.761284 kubelet[2854]: W0707 00:42:31.761248 2854 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 7 00:42:31.761393 kubelet[2854]: E0707 00:42:31.761330 2854 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 00:42:31.761434 kubelet[2854]: E0707 00:42:31.761389 2854 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-7d9f698c61&limit=500&resourceVersion=0\": dial tcp 139.178.70.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 00:42:31.764132 kubelet[2854]: I0707 00:42:31.764118 2854 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:42:31.764196 kubelet[2854]: I0707 00:42:31.764169 2854 server.go:1289] "Started kubelet" Jul 7 00:42:31.764764 kubelet[2854]: I0707 00:42:31.764737 2854 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:42:31.765155 kubelet[2854]: I0707 00:42:31.765144 2854 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:42:31.765155 kubelet[2854]: I0707 00:42:31.765148 2854 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:42:31.765216 kubelet[2854]: E0707 00:42:31.765154 2854 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:42:31.765216 kubelet[2854]: I0707 00:42:31.765215 2854 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:42:31.765261 kubelet[2854]: I0707 00:42:31.765228 2854 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:42:31.765261 kubelet[2854]: E0707 00:42:31.765212 2854 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-7d9f698c61\" not found" Jul 7 00:42:31.765311 kubelet[2854]: I0707 00:42:31.765276 2854 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:42:31.765385 kubelet[2854]: E0707 00:42:31.765367 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-7d9f698c61?timeout=10s\": dial tcp 139.178.70.5:6443: connect: connection refused" interval="200ms" Jul 7 00:42:31.765458 kubelet[2854]: E0707 00:42:31.765447 2854 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 00:42:31.765490 kubelet[2854]: I0707 00:42:31.765470 2854 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:42:31.765528 kubelet[2854]: I0707 00:42:31.765498 2854 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:42:31.765560 kubelet[2854]: I0707 00:42:31.765543 2854 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:42:31.765592 kubelet[2854]: I0707 00:42:31.765520 2854 factory.go:221] Registration of the crio 
container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:42:31.765682 kubelet[2854]: I0707 00:42:31.765672 2854 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:42:31.766025 kubelet[2854]: I0707 00:42:31.766019 2854 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:42:31.772662 kubelet[2854]: E0707 00:42:31.768822 2854 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.5:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-a-7d9f698c61.184fd157060aaa50 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-a-7d9f698c61,UID:ci-4344.1.1-a-7d9f698c61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-a-7d9f698c61,},FirstTimestamp:2025-07-07 00:42:31.76413448 +0000 UTC m=+0.469380052,LastTimestamp:2025-07-07 00:42:31.76413448 +0000 UTC m=+0.469380052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-a-7d9f698c61,}" Jul 7 00:42:31.775060 kubelet[2854]: I0707 00:42:31.775047 2854 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:42:31.775060 kubelet[2854]: I0707 00:42:31.775056 2854 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:42:31.775119 kubelet[2854]: I0707 00:42:31.775066 2854 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:42:31.776206 kubelet[2854]: I0707 00:42:31.776195 2854 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:42:31.776686 kubelet[2854]: I0707 00:42:31.776681 2854 policy_none.go:49] "None policy: Start" Jul 7 00:42:31.776713 kubelet[2854]: I0707 00:42:31.776690 2854 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:42:31.776713 kubelet[2854]: I0707 00:42:31.776696 2854 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:42:31.776742 kubelet[2854]: I0707 00:42:31.776733 2854 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:42:31.776756 kubelet[2854]: I0707 00:42:31.776745 2854 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:42:31.776771 kubelet[2854]: I0707 00:42:31.776759 2854 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:42:31.776771 kubelet[2854]: I0707 00:42:31.776765 2854 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:42:31.776800 kubelet[2854]: E0707 00:42:31.776792 2854 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:42:31.777042 kubelet[2854]: E0707 00:42:31.777028 2854 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 00:42:31.779110 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:42:31.805646 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:42:31.814857 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:42:31.832198 kubelet[2854]: E0707 00:42:31.832084 2854 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:42:31.832554 kubelet[2854]: I0707 00:42:31.832481 2854 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:42:31.832554 kubelet[2854]: I0707 00:42:31.832509 2854 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:42:31.832906 kubelet[2854]: I0707 00:42:31.832850 2854 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:42:31.834326 kubelet[2854]: E0707 00:42:31.834262 2854 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:42:31.834499 kubelet[2854]: E0707 00:42:31.834339 2854 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-a-7d9f698c61\" not found" Jul 7 00:42:31.886601 systemd[1]: Created slice kubepods-burstable-podf694e4cef501b3fbcfcddb0e19d0c243.slice - libcontainer container kubepods-burstable-podf694e4cef501b3fbcfcddb0e19d0c243.slice. Jul 7 00:42:31.911322 kubelet[2854]: E0707 00:42:31.911275 2854 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-7d9f698c61\" not found" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:31.913731 systemd[1]: Created slice kubepods-burstable-pod2dd5f196eee0acee1565b9dd868fa02a.slice - libcontainer container kubepods-burstable-pod2dd5f196eee0acee1565b9dd868fa02a.slice. 
Jul 7 00:42:31.930775 kubelet[2854]: E0707 00:42:31.930720 2854 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-7d9f698c61\" not found" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:31.934068 kubelet[2854]: I0707 00:42:31.934025 2854 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:31.934428 kubelet[2854]: E0707 00:42:31.934365 2854 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.5:6443/api/v1/nodes\": dial tcp 139.178.70.5:6443: connect: connection refused" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:31.934445 systemd[1]: Created slice kubepods-burstable-podfaf83eb61e57db83ce30e8e099fe2ef4.slice - libcontainer container kubepods-burstable-podfaf83eb61e57db83ce30e8e099fe2ef4.slice. Jul 7 00:42:31.936920 kubelet[2854]: E0707 00:42:31.936856 2854 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-7d9f698c61\" not found" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:31.967200 kubelet[2854]: E0707 00:42:31.966968 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-7d9f698c61?timeout=10s\": dial tcp 139.178.70.5:6443: connect: connection refused" interval="400ms" Jul 7 00:42:32.066867 kubelet[2854]: I0707 00:42:32.066744 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.066867 kubelet[2854]: I0707 00:42:32.066862 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.067226 kubelet[2854]: I0707 00:42:32.066940 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.067226 kubelet[2854]: I0707 00:42:32.066987 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f694e4cef501b3fbcfcddb0e19d0c243-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" (UID: \"f694e4cef501b3fbcfcddb0e19d0c243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.067226 kubelet[2854]: I0707 00:42:32.067121 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f694e4cef501b3fbcfcddb0e19d0c243-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" (UID: \"f694e4cef501b3fbcfcddb0e19d0c243\") " 
pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.067580 kubelet[2854]: I0707 00:42:32.067244 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.067580 kubelet[2854]: I0707 00:42:32.067309 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.067580 kubelet[2854]: I0707 00:42:32.067358 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/faf83eb61e57db83ce30e8e099fe2ef4-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-7d9f698c61\" (UID: \"faf83eb61e57db83ce30e8e099fe2ef4\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.067580 kubelet[2854]: I0707 00:42:32.067429 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f694e4cef501b3fbcfcddb0e19d0c243-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" (UID: \"f694e4cef501b3fbcfcddb0e19d0c243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.138162 kubelet[2854]: I0707 00:42:32.138069 2854 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.138787 kubelet[2854]: E0707 00:42:32.138729 2854 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.5:6443/api/v1/nodes\": dial tcp 139.178.70.5:6443: connect: connection refused" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.213357 containerd[1927]: time="2025-07-07T00:42:32.213255982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-7d9f698c61,Uid:f694e4cef501b3fbcfcddb0e19d0c243,Namespace:kube-system,Attempt:0,}" Jul 7 00:42:32.222046 containerd[1927]: time="2025-07-07T00:42:32.221980806Z" level=info msg="connecting to shim b59f3838ecaeeb417c41fd587ed75c59f5e7f4fe40a6950dfbeacd16019e7611" address="unix:///run/containerd/s/92b11f39e4b0fd0a61eed1d59e8796746236fbca1ef113f1857329a471c28f66" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:42:32.231245 containerd[1927]: time="2025-07-07T00:42:32.231222992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-7d9f698c61,Uid:2dd5f196eee0acee1565b9dd868fa02a,Namespace:kube-system,Attempt:0,}" Jul 7 00:42:32.238438 containerd[1927]: time="2025-07-07T00:42:32.238364326Z" level=info msg="connecting to shim bb000f4e0ce01a68f4d2a5d56be98a0a04324de2456c9acf8defc5b44bdc1e06" address="unix:///run/containerd/s/1b388f0eb2b23a041fe9b01d5e0fb97e6c4f455e6201d71aa39efde9f0e81379" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:42:32.238438 containerd[1927]: time="2025-07-07T00:42:32.238406365Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-7d9f698c61,Uid:faf83eb61e57db83ce30e8e099fe2ef4,Namespace:kube-system,Attempt:0,}" Jul 7 00:42:32.239298 systemd[1]: Started cri-containerd-b59f3838ecaeeb417c41fd587ed75c59f5e7f4fe40a6950dfbeacd16019e7611.scope - libcontainer container b59f3838ecaeeb417c41fd587ed75c59f5e7f4fe40a6950dfbeacd16019e7611. Jul 7 00:42:32.246068 containerd[1927]: time="2025-07-07T00:42:32.246045281Z" level=info msg="connecting to shim b9879df2d59dd194b785353d6a956b616abf953e3f543371e096c6b05056bef8" address="unix:///run/containerd/s/dfdd0f2237a59dee6c687472274366b064d867338d709b851c7fbd3761e54779" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:42:32.248546 systemd[1]: Started cri-containerd-bb000f4e0ce01a68f4d2a5d56be98a0a04324de2456c9acf8defc5b44bdc1e06.scope - libcontainer container bb000f4e0ce01a68f4d2a5d56be98a0a04324de2456c9acf8defc5b44bdc1e06. Jul 7 00:42:32.255073 systemd[1]: Started cri-containerd-b9879df2d59dd194b785353d6a956b616abf953e3f543371e096c6b05056bef8.scope - libcontainer container b9879df2d59dd194b785353d6a956b616abf953e3f543371e096c6b05056bef8. Jul 7 00:42:32.268639 containerd[1927]: time="2025-07-07T00:42:32.268612836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-7d9f698c61,Uid:f694e4cef501b3fbcfcddb0e19d0c243,Namespace:kube-system,Attempt:0,} returns sandbox id \"b59f3838ecaeeb417c41fd587ed75c59f5e7f4fe40a6950dfbeacd16019e7611\"" Jul 7 00:42:32.271033 containerd[1927]: time="2025-07-07T00:42:32.271015292Z" level=info msg="CreateContainer within sandbox \"b59f3838ecaeeb417c41fd587ed75c59f5e7f4fe40a6950dfbeacd16019e7611\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:42:32.273736 containerd[1927]: time="2025-07-07T00:42:32.273719000Z" level=info msg="Container 54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:42:32.276279 containerd[1927]: time="2025-07-07T00:42:32.276259746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-7d9f698c61,Uid:2dd5f196eee0acee1565b9dd868fa02a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb000f4e0ce01a68f4d2a5d56be98a0a04324de2456c9acf8defc5b44bdc1e06\"" Jul 7 00:42:32.277363 containerd[1927]: time="2025-07-07T00:42:32.277349849Z" level=info msg="CreateContainer within sandbox \"b59f3838ecaeeb417c41fd587ed75c59f5e7f4fe40a6950dfbeacd16019e7611\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa\"" Jul 7 00:42:32.277591 containerd[1927]: time="2025-07-07T00:42:32.277576572Z" level=info msg="StartContainer for \"54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa\"" Jul 7 00:42:32.277710 containerd[1927]: time="2025-07-07T00:42:32.277698072Z" level=info msg="CreateContainer within sandbox \"bb000f4e0ce01a68f4d2a5d56be98a0a04324de2456c9acf8defc5b44bdc1e06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:42:32.278242 containerd[1927]: time="2025-07-07T00:42:32.278228951Z" level=info msg="connecting to shim 54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa" address="unix:///run/containerd/s/92b11f39e4b0fd0a61eed1d59e8796746236fbca1ef113f1857329a471c28f66" protocol=ttrpc version=3 Jul 7 00:42:32.280205 containerd[1927]: time="2025-07-07T00:42:32.280186768Z" level=info msg="Container e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921: CDI devices from 
CRI Config.CDIDevices: []" Jul 7 00:42:32.280472 containerd[1927]: time="2025-07-07T00:42:32.280458819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-7d9f698c61,Uid:faf83eb61e57db83ce30e8e099fe2ef4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9879df2d59dd194b785353d6a956b616abf953e3f543371e096c6b05056bef8\"" Jul 7 00:42:32.282245 containerd[1927]: time="2025-07-07T00:42:32.282229550Z" level=info msg="CreateContainer within sandbox \"b9879df2d59dd194b785353d6a956b616abf953e3f543371e096c6b05056bef8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:42:32.283512 containerd[1927]: time="2025-07-07T00:42:32.283499118Z" level=info msg="CreateContainer within sandbox \"bb000f4e0ce01a68f4d2a5d56be98a0a04324de2456c9acf8defc5b44bdc1e06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921\"" Jul 7 00:42:32.283706 containerd[1927]: time="2025-07-07T00:42:32.283696286Z" level=info msg="StartContainer for \"e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921\"" Jul 7 00:42:32.284225 containerd[1927]: time="2025-07-07T00:42:32.284214382Z" level=info msg="connecting to shim e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921" address="unix:///run/containerd/s/1b388f0eb2b23a041fe9b01d5e0fb97e6c4f455e6201d71aa39efde9f0e81379" protocol=ttrpc version=3 Jul 7 00:42:32.285028 containerd[1927]: time="2025-07-07T00:42:32.285011871Z" level=info msg="Container 6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:42:32.287452 containerd[1927]: time="2025-07-07T00:42:32.287437608Z" level=info msg="CreateContainer within sandbox \"b9879df2d59dd194b785353d6a956b616abf953e3f543371e096c6b05056bef8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3\"" Jul 7 00:42:32.287638 containerd[1927]: time="2025-07-07T00:42:32.287626570Z" level=info msg="StartContainer for \"6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3\"" Jul 7 00:42:32.288165 containerd[1927]: time="2025-07-07T00:42:32.288152976Z" level=info msg="connecting to shim 6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3" address="unix:///run/containerd/s/dfdd0f2237a59dee6c687472274366b064d867338d709b851c7fbd3761e54779" protocol=ttrpc version=3 Jul 7 00:42:32.300424 systemd[1]: Started cri-containerd-54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa.scope - libcontainer container 54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa. Jul 7 00:42:32.309825 systemd[1]: Started cri-containerd-6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3.scope - libcontainer container 6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3. Jul 7 00:42:32.312661 systemd[1]: Started cri-containerd-e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921.scope - libcontainer container e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921. 
Jul 7 00:42:32.367673 kubelet[2854]: E0707 00:42:32.367647 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-7d9f698c61?timeout=10s\": dial tcp 139.178.70.5:6443: connect: connection refused" interval="800ms" Jul 7 00:42:32.372200 containerd[1927]: time="2025-07-07T00:42:32.372174889Z" level=info msg="StartContainer for \"54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa\" returns successfully" Jul 7 00:42:32.376711 containerd[1927]: time="2025-07-07T00:42:32.376683535Z" level=info msg="StartContainer for \"6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3\" returns successfully" Jul 7 00:42:32.377832 containerd[1927]: time="2025-07-07T00:42:32.377816800Z" level=info msg="StartContainer for \"e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921\" returns successfully" Jul 7 00:42:32.540540 kubelet[2854]: I0707 00:42:32.540493 2854 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.780784 kubelet[2854]: E0707 00:42:32.780768 2854 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-7d9f698c61\" not found" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.781486 kubelet[2854]: E0707 00:42:32.781478 2854 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-7d9f698c61\" not found" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:32.782188 kubelet[2854]: E0707 00:42:32.782178 2854 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-7d9f698c61\" not found" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.257103 kubelet[2854]: E0707 00:42:33.257068 2854 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-a-7d9f698c61\" not found" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.477659 kubelet[2854]: I0707 00:42:33.477584 2854 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.566042 kubelet[2854]: I0707 00:42:33.565817 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.577867 kubelet[2854]: E0707 00:42:33.577765 2854 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.577867 kubelet[2854]: I0707 00:42:33.577823 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.581337 kubelet[2854]: E0707 00:42:33.581246 2854 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.581337 kubelet[2854]: I0707 00:42:33.581298 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.584939 kubelet[2854]: E0707 00:42:33.584881 2854 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-7d9f698c61\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.760436 kubelet[2854]: I0707 00:42:33.760307 2854 apiserver.go:52] "Watching apiserver" Jul 7 00:42:33.766072 kubelet[2854]: I0707 00:42:33.765979 2854 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:42:33.783498 kubelet[2854]: I0707 00:42:33.783449 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.783736 kubelet[2854]: I0707 00:42:33.783647 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.787572 kubelet[2854]: E0707 00:42:33.787484 2854 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-7d9f698c61\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:33.787572 kubelet[2854]: E0707 00:42:33.787529 2854 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:34.786200 kubelet[2854]: I0707 00:42:34.786115 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:34.787198 kubelet[2854]: I0707 00:42:34.786364 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:34.791922 kubelet[2854]: I0707 00:42:34.791850 2854 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:42:34.792156 kubelet[2854]: I0707 00:42:34.792065 2854 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:42:35.267539 systemd[1]: Reload requested from client PID 3176 ('systemctl') (unit session-11.scope)... Jul 7 00:42:35.267563 systemd[1]: Reloading... Jul 7 00:42:35.307136 zram_generator::config[3221]: No configuration found. Jul 7 00:42:35.365695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:42:35.467281 systemd[1]: Reloading finished in 199 ms. Jul 7 00:42:35.485472 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:42:35.495660 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:42:35.495780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:42:35.495803 systemd[1]: kubelet.service: Consumed 858ms CPU time, 138.2M memory peak. Jul 7 00:42:35.497047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:42:35.813586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:42:35.818826 (kubelet)[3285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:42:35.838263 kubelet[3285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:42:35.838263 kubelet[3285]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:42:35.838263 kubelet[3285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:42:35.838463 kubelet[3285]: I0707 00:42:35.838295 3285 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:42:35.841461 kubelet[3285]: I0707 00:42:35.841420 3285 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:42:35.841461 kubelet[3285]: I0707 00:42:35.841430 3285 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:42:35.841771 kubelet[3285]: I0707 00:42:35.841732 3285 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:42:35.843259 kubelet[3285]: I0707 00:42:35.843247 3285 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 00:42:35.844408 kubelet[3285]: I0707 00:42:35.844398 3285 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:42:35.846032 kubelet[3285]: I0707 00:42:35.846023 3285 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:42:35.853491 kubelet[3285]: I0707 00:42:35.853463 3285 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:42:35.853608 kubelet[3285]: I0707 00:42:35.853565 3285 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:42:35.853690 kubelet[3285]: I0707 00:42:35.853580 3285 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-7d9f698c61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:42:35.853690 kubelet[3285]: I0707 00:42:35.853661 3285 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:42:35.853690 kubelet[3285]: I0707 00:42:35.853667 3285 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:42:35.853780 kubelet[3285]: I0707 00:42:35.853693 3285 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:42:35.853841 kubelet[3285]: I0707 00:42:35.853809 3285 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:42:35.853841 kubelet[3285]: I0707 00:42:35.853818 3285 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:42:35.853841 kubelet[3285]: I0707 00:42:35.853831 3285 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:42:35.853841 kubelet[3285]: I0707 00:42:35.853839 3285 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:42:35.854335 kubelet[3285]: I0707 00:42:35.854323 3285 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:42:35.854767 kubelet[3285]: I0707 00:42:35.854755 3285 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:42:35.856386 kubelet[3285]: I0707 00:42:35.856376 3285 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:42:35.856425 kubelet[3285]: I0707 00:42:35.856405 3285 server.go:1289] "Started kubelet" Jul 7 00:42:35.856522 kubelet[3285]: I0707 00:42:35.856482 3285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:42:35.856637 
kubelet[3285]: I0707 00:42:35.856517 3285 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:42:35.856709 kubelet[3285]: I0707 00:42:35.856699 3285 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:42:35.858106 kubelet[3285]: I0707 00:42:35.858096 3285 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:42:35.858917 kubelet[3285]: I0707 00:42:35.858143 3285 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:42:35.858917 kubelet[3285]: I0707 00:42:35.858250 3285 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:42:35.858917 kubelet[3285]: E0707 00:42:35.858369 3285 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-7d9f698c61\" not found" Jul 7 00:42:35.858917 kubelet[3285]: I0707 00:42:35.858529 3285 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:42:35.858917 kubelet[3285]: E0707 00:42:35.858531 3285 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:42:35.858917 kubelet[3285]: I0707 00:42:35.858769 3285 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:42:35.859517 kubelet[3285]: I0707 00:42:35.859497 3285 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:42:35.859558 kubelet[3285]: I0707 00:42:35.859548 3285 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:42:35.859587 kubelet[3285]: I0707 00:42:35.859576 3285 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:42:35.860074 kubelet[3285]: I0707 00:42:35.860049 3285 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:42:35.864220 kubelet[3285]: I0707 00:42:35.864200 3285 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:42:35.864710 kubelet[3285]: I0707 00:42:35.864701 3285 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:42:35.864737 kubelet[3285]: I0707 00:42:35.864714 3285 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:42:35.864737 kubelet[3285]: I0707 00:42:35.864724 3285 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:42:35.864737 kubelet[3285]: I0707 00:42:35.864729 3285 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:42:35.864796 kubelet[3285]: E0707 00:42:35.864754 3285 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:42:35.873861 kubelet[3285]: I0707 00:42:35.873847 3285 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:42:35.873861 kubelet[3285]: I0707 00:42:35.873857 3285 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:42:35.873861 kubelet[3285]: I0707 00:42:35.873868 3285 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:42:35.873958 kubelet[3285]: I0707 00:42:35.873953 3285 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:42:35.873978 kubelet[3285]: I0707 00:42:35.873959 3285 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:42:35.873978 kubelet[3285]: I0707 00:42:35.873971 3285 policy_none.go:49] "None policy: Start" Jul 7 00:42:35.873978 kubelet[3285]: I0707 00:42:35.873976 3285 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:42:35.874020 kubelet[3285]: I0707 00:42:35.873981 3285 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:42:35.874034 kubelet[3285]: I0707 00:42:35.874029 3285 state_mem.go:75] "Updated machine memory state" Jul 7 00:42:35.875897 kubelet[3285]: E0707 00:42:35.875863 3285 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:42:35.875960 kubelet[3285]: I0707 00:42:35.875954 3285 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:42:35.875985 kubelet[3285]: I0707 00:42:35.875965 3285 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:42:35.876045 kubelet[3285]: I0707 00:42:35.876037 3285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:42:35.876369 kubelet[3285]: E0707 00:42:35.876359 3285 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:42:35.966136 kubelet[3285]: I0707 00:42:35.966107 3285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:35.966269 kubelet[3285]: I0707 00:42:35.966177 3285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:35.966269 kubelet[3285]: I0707 00:42:35.966243 3285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:35.969290 kubelet[3285]: I0707 00:42:35.969267 3285 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:42:35.969358 kubelet[3285]: E0707 00:42:35.969314 3285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-7d9f698c61\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:35.969358 kubelet[3285]: I0707 00:42:35.969315 3285 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:42:35.969473 kubelet[3285]: I0707 00:42:35.969459 3285 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:42:35.969524 kubelet[3285]: E0707 00:42:35.969490 3285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:35.978474 kubelet[3285]: I0707 00:42:35.978454 3285 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:35.983388 kubelet[3285]: I0707 00:42:35.983338 3285 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:35.983481 kubelet[3285]: I0707 00:42:35.983404 3285 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.060339 kubelet[3285]: I0707 00:42:36.060232 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f694e4cef501b3fbcfcddb0e19d0c243-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" (UID: \"f694e4cef501b3fbcfcddb0e19d0c243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.060339 kubelet[3285]: I0707 00:42:36.060327 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.060760 kubelet[3285]: I0707 00:42:36.060408 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 
00:42:36.060760 kubelet[3285]: I0707 00:42:36.060469 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.060760 kubelet[3285]: I0707 00:42:36.060513 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f694e4cef501b3fbcfcddb0e19d0c243-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" (UID: \"f694e4cef501b3fbcfcddb0e19d0c243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.060760 kubelet[3285]: I0707 00:42:36.060557 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f694e4cef501b3fbcfcddb0e19d0c243-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" (UID: \"f694e4cef501b3fbcfcddb0e19d0c243\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.060760 kubelet[3285]: I0707 00:42:36.060617 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.061285 kubelet[3285]: I0707 00:42:36.060662 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dd5f196eee0acee1565b9dd868fa02a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" (UID: \"2dd5f196eee0acee1565b9dd868fa02a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.061285 kubelet[3285]: I0707 00:42:36.060791 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/faf83eb61e57db83ce30e8e099fe2ef4-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-7d9f698c61\" (UID: \"faf83eb61e57db83ce30e8e099fe2ef4\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.854608 kubelet[3285]: I0707 00:42:36.854491 3285 apiserver.go:52] "Watching apiserver" Jul 7 00:42:36.859670 kubelet[3285]: I0707 00:42:36.859593 3285 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:42:36.871304 kubelet[3285]: I0707 00:42:36.871254 3285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.871478 kubelet[3285]: I0707 00:42:36.871372 3285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.871478 kubelet[3285]: I0707 00:42:36.871389 3285 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.878386 kubelet[3285]: I0707 00:42:36.878308 3285 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots]" Jul 7 00:42:36.878601 kubelet[3285]: E0707 00:42:36.878415 3285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-7d9f698c61\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.878706 kubelet[3285]: I0707 00:42:36.878663 3285 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:42:36.878795 kubelet[3285]: I0707 00:42:36.878702 3285 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 00:42:36.878795 kubelet[3285]: E0707 00:42:36.878756 3285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-7d9f698c61\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.878958 kubelet[3285]: E0707 00:42:36.878801 3285 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-7d9f698c61\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" Jul 7 00:42:36.890820 kubelet[3285]: I0707 00:42:36.890731 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-7d9f698c61" podStartSLOduration=1.890722465 podStartE2EDuration="1.890722465s" podCreationTimestamp="2025-07-07 00:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:42:36.886856033 +0000 UTC m=+1.065901145" watchObservedRunningTime="2025-07-07 00:42:36.890722465 +0000 UTC m=+1.069767577" Jul 7 00:42:36.894544 kubelet[3285]: I0707 00:42:36.894527 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-a-7d9f698c61" podStartSLOduration=2.894503096 podStartE2EDuration="2.894503096s" podCreationTimestamp="2025-07-07 00:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:42:36.890857505 +0000 UTC m=+1.069902614" watchObservedRunningTime="2025-07-07 00:42:36.894503096 +0000 UTC m=+1.073548204" Jul 7 00:42:36.898224 kubelet[3285]: I0707 00:42:36.898173 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-a-7d9f698c61" podStartSLOduration=2.898167256 podStartE2EDuration="2.898167256s" podCreationTimestamp="2025-07-07 00:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:42:36.894595363 +0000 UTC m=+1.073640471" watchObservedRunningTime="2025-07-07 00:42:36.898167256 +0000 UTC m=+1.077212366" Jul 7 00:42:40.161028 kubelet[3285]: I0707 00:42:40.160922 3285 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:42:40.161967 containerd[1927]: time="2025-07-07T00:42:40.161624461Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 00:42:40.162831 kubelet[3285]: I0707 00:42:40.162108 3285 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:42:41.033072 systemd[1]: Created slice kubepods-besteffort-pod39a49b1c_1145_4e0b_9ba2_af4647b0c32d.slice - libcontainer container kubepods-besteffort-pod39a49b1c_1145_4e0b_9ba2_af4647b0c32d.slice. Jul 7 00:42:41.092411 kubelet[3285]: I0707 00:42:41.092326 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/39a49b1c-1145-4e0b-9ba2-af4647b0c32d-kube-proxy\") pod \"kube-proxy-pzkpx\" (UID: \"39a49b1c-1145-4e0b-9ba2-af4647b0c32d\") " pod="kube-system/kube-proxy-pzkpx" Jul 7 00:42:41.092411 kubelet[3285]: I0707 00:42:41.092379 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39a49b1c-1145-4e0b-9ba2-af4647b0c32d-xtables-lock\") pod \"kube-proxy-pzkpx\" (UID: \"39a49b1c-1145-4e0b-9ba2-af4647b0c32d\") " pod="kube-system/kube-proxy-pzkpx" Jul 7 00:42:41.092411 kubelet[3285]: I0707 00:42:41.092412 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcc5m\" (UniqueName: \"kubernetes.io/projected/39a49b1c-1145-4e0b-9ba2-af4647b0c32d-kube-api-access-rcc5m\") pod \"kube-proxy-pzkpx\" (UID: \"39a49b1c-1145-4e0b-9ba2-af4647b0c32d\") " pod="kube-system/kube-proxy-pzkpx" Jul 7 00:42:41.092734 kubelet[3285]: I0707 00:42:41.092447 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39a49b1c-1145-4e0b-9ba2-af4647b0c32d-lib-modules\") pod \"kube-proxy-pzkpx\" (UID: \"39a49b1c-1145-4e0b-9ba2-af4647b0c32d\") " pod="kube-system/kube-proxy-pzkpx" Jul 7 00:42:41.292122 systemd[1]: Created slice kubepods-besteffort-podbc9de9f7_d6e3_41fc_ba5f_de60b8660861.slice - libcontainer container kubepods-besteffort-podbc9de9f7_d6e3_41fc_ba5f_de60b8660861.slice. 
Jul 7 00:42:41.294218 kubelet[3285]: I0707 00:42:41.294160 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc9de9f7-d6e3-41fc-ba5f-de60b8660861-var-lib-calico\") pod \"tigera-operator-747864d56d-cq4c5\" (UID: \"bc9de9f7-d6e3-41fc-ba5f-de60b8660861\") " pod="tigera-operator/tigera-operator-747864d56d-cq4c5" Jul 7 00:42:41.294218 kubelet[3285]: I0707 00:42:41.294201 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l997\" (UniqueName: \"kubernetes.io/projected/bc9de9f7-d6e3-41fc-ba5f-de60b8660861-kube-api-access-5l997\") pod \"tigera-operator-747864d56d-cq4c5\" (UID: \"bc9de9f7-d6e3-41fc-ba5f-de60b8660861\") " pod="tigera-operator/tigera-operator-747864d56d-cq4c5" Jul 7 00:42:41.346806 containerd[1927]: time="2025-07-07T00:42:41.346679207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzkpx,Uid:39a49b1c-1145-4e0b-9ba2-af4647b0c32d,Namespace:kube-system,Attempt:0,}" Jul 7 00:42:41.353840 containerd[1927]: time="2025-07-07T00:42:41.353815874Z" level=info msg="connecting to shim 80bba3a4c34ffa36826f4f0a52a7e77cfca0d58349930521b82ab254492a8f9d" address="unix:///run/containerd/s/6df7b174ffac429a6cc24ce0f2254d63a4b80a14eff250b4ce687a768604b0d7" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:42:41.383279 systemd[1]: Started cri-containerd-80bba3a4c34ffa36826f4f0a52a7e77cfca0d58349930521b82ab254492a8f9d.scope - libcontainer container 80bba3a4c34ffa36826f4f0a52a7e77cfca0d58349930521b82ab254492a8f9d. Jul 7 00:42:41.408861 containerd[1927]: time="2025-07-07T00:42:41.408808256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzkpx,Uid:39a49b1c-1145-4e0b-9ba2-af4647b0c32d,Namespace:kube-system,Attempt:0,} returns sandbox id \"80bba3a4c34ffa36826f4f0a52a7e77cfca0d58349930521b82ab254492a8f9d\"" Jul 7 00:42:41.410749 containerd[1927]: time="2025-07-07T00:42:41.410735810Z" level=info msg="CreateContainer within sandbox \"80bba3a4c34ffa36826f4f0a52a7e77cfca0d58349930521b82ab254492a8f9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:42:41.414362 containerd[1927]: time="2025-07-07T00:42:41.414349949Z" level=info msg="Container 2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:42:41.417403 containerd[1927]: time="2025-07-07T00:42:41.417390970Z" level=info msg="CreateContainer within sandbox \"80bba3a4c34ffa36826f4f0a52a7e77cfca0d58349930521b82ab254492a8f9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0\"" Jul 7 00:42:41.417692 containerd[1927]: time="2025-07-07T00:42:41.417654224Z" level=info msg="StartContainer for \"2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0\"" Jul 7 00:42:41.418414 containerd[1927]: time="2025-07-07T00:42:41.418375039Z" level=info msg="connecting to shim 2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0" address="unix:///run/containerd/s/6df7b174ffac429a6cc24ce0f2254d63a4b80a14eff250b4ce687a768604b0d7" protocol=ttrpc version=3 Jul 7 00:42:41.432309 systemd[1]: Started cri-containerd-2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0.scope - libcontainer container 2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0. 
Jul 7 00:42:41.453320 containerd[1927]: time="2025-07-07T00:42:41.453270044Z" level=info msg="StartContainer for \"2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0\" returns successfully" Jul 7 00:42:41.596054 containerd[1927]: time="2025-07-07T00:42:41.595857578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-cq4c5,Uid:bc9de9f7-d6e3-41fc-ba5f-de60b8660861,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:42:41.607502 containerd[1927]: time="2025-07-07T00:42:41.607447992Z" level=info msg="connecting to shim 131a9d5140792f47f9a56507f8c8cf4454956cf90f2751d8945c618e70583647" address="unix:///run/containerd/s/8aa2190b83fb18a1a87d89c3cdd05a16e9632562bb3cf08da56aef0ea34a4a06" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:42:41.632286 systemd[1]: Started cri-containerd-131a9d5140792f47f9a56507f8c8cf4454956cf90f2751d8945c618e70583647.scope - libcontainer container 131a9d5140792f47f9a56507f8c8cf4454956cf90f2751d8945c618e70583647. Jul 7 00:42:41.667672 containerd[1927]: time="2025-07-07T00:42:41.667628356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-cq4c5,Uid:bc9de9f7-d6e3-41fc-ba5f-de60b8660861,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"131a9d5140792f47f9a56507f8c8cf4454956cf90f2751d8945c618e70583647\"" Jul 7 00:42:41.668276 containerd[1927]: time="2025-07-07T00:42:41.668262525Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:42:41.918510 kubelet[3285]: I0707 00:42:41.918286 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pzkpx" podStartSLOduration=0.918250742 podStartE2EDuration="918.250742ms" podCreationTimestamp="2025-07-07 00:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:42:41.918205301 +0000 UTC m=+6.097250570" watchObservedRunningTime="2025-07-07 00:42:41.918250742 +0000 UTC m=+6.097295897" Jul 7 00:42:42.776396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2925809626.mount: Deactivated successfully. 
Jul 7 00:42:43.062672 containerd[1927]: time="2025-07-07T00:42:43.062583844Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:43.062868 containerd[1927]: time="2025-07-07T00:42:43.062807432Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:42:43.063182 containerd[1927]: time="2025-07-07T00:42:43.063150598Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:43.064101 containerd[1927]: time="2025-07-07T00:42:43.064060120Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:43.064475 containerd[1927]: time="2025-07-07T00:42:43.064435570Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.396156245s" Jul 7 00:42:43.064475 containerd[1927]: time="2025-07-07T00:42:43.064449614Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:42:43.065989 containerd[1927]: time="2025-07-07T00:42:43.065977635Z" level=info msg="CreateContainer within sandbox \"131a9d5140792f47f9a56507f8c8cf4454956cf90f2751d8945c618e70583647\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:42:43.068481 containerd[1927]: time="2025-07-07T00:42:43.068441804Z" level=info msg="Container 0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:42:43.070407 containerd[1927]: time="2025-07-07T00:42:43.070392234Z" level=info msg="CreateContainer within sandbox \"131a9d5140792f47f9a56507f8c8cf4454956cf90f2751d8945c618e70583647\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763\"" Jul 7 00:42:43.070678 containerd[1927]: time="2025-07-07T00:42:43.070660882Z" level=info msg="StartContainer for \"0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763\"" Jul 7 00:42:43.071058 containerd[1927]: time="2025-07-07T00:42:43.071047106Z" level=info msg="connecting to shim 0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763" address="unix:///run/containerd/s/8aa2190b83fb18a1a87d89c3cdd05a16e9632562bb3cf08da56aef0ea34a4a06" protocol=ttrpc version=3 Jul 7 00:42:43.091295 systemd[1]: Started cri-containerd-0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763.scope - libcontainer container 0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763. 
Jul 7 00:42:43.103898 containerd[1927]: time="2025-07-07T00:42:43.103877371Z" level=info msg="StartContainer for \"0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763\" returns successfully" Jul 7 00:42:43.906489 kubelet[3285]: I0707 00:42:43.906420 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-cq4c5" podStartSLOduration=1.509713906 podStartE2EDuration="2.90641071s" podCreationTimestamp="2025-07-07 00:42:41 +0000 UTC" firstStartedPulling="2025-07-07 00:42:41.668137001 +0000 UTC m=+5.847182111" lastFinishedPulling="2025-07-07 00:42:43.064833806 +0000 UTC m=+7.243878915" observedRunningTime="2025-07-07 00:42:43.906404232 +0000 UTC m=+8.085449347" watchObservedRunningTime="2025-07-07 00:42:43.90641071 +0000 UTC m=+8.085455819" Jul 7 00:42:47.493066 sudo[2227]: pam_unix(sudo:session): session closed for user root Jul 7 00:42:47.493956 sshd[2226]: Connection closed by 139.178.68.195 port 46878 Jul 7 00:42:47.494216 sshd-session[2224]: pam_unix(sshd:session): session closed for user core Jul 7 00:42:47.496247 systemd[1]: sshd@8-139.178.70.5:22-139.178.68.195:46878.service: Deactivated successfully. Jul 7 00:42:47.497237 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:42:47.497345 systemd[1]: session-11.scope: Consumed 4.264s CPU time, 236.5M memory peak. Jul 7 00:42:47.498153 systemd-logind[1917]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:42:47.498968 systemd-logind[1917]: Removed session 11. Jul 7 00:42:49.361092 update_engine[1922]: I20250707 00:42:49.361033 1922 update_attempter.cc:509] Updating boot flags... Jul 7 00:42:49.842372 kubelet[3285]: I0707 00:42:49.842312 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhp8w\" (UniqueName: \"kubernetes.io/projected/6425457a-4451-4850-8ee7-c83c8d55e6da-kube-api-access-qhp8w\") pod \"calico-typha-6596fcc467-rct8b\" (UID: \"6425457a-4451-4850-8ee7-c83c8d55e6da\") " pod="calico-system/calico-typha-6596fcc467-rct8b" Jul 7 00:42:49.842749 kubelet[3285]: I0707 00:42:49.842396 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6425457a-4451-4850-8ee7-c83c8d55e6da-tigera-ca-bundle\") pod \"calico-typha-6596fcc467-rct8b\" (UID: \"6425457a-4451-4850-8ee7-c83c8d55e6da\") " pod="calico-system/calico-typha-6596fcc467-rct8b" Jul 7 00:42:49.842749 kubelet[3285]: I0707 00:42:49.842446 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6425457a-4451-4850-8ee7-c83c8d55e6da-typha-certs\") pod \"calico-typha-6596fcc467-rct8b\" (UID: \"6425457a-4451-4850-8ee7-c83c8d55e6da\") " pod="calico-system/calico-typha-6596fcc467-rct8b" Jul 7 00:42:49.846567 systemd[1]: Created slice kubepods-besteffort-pod6425457a_4451_4850_8ee7_c83c8d55e6da.slice - libcontainer container kubepods-besteffort-pod6425457a_4451_4850_8ee7_c83c8d55e6da.slice. Jul 7 00:42:50.070110 systemd[1]: Created slice kubepods-besteffort-podcded9a92_3f16_43ba_9be0_c4c52f9707eb.slice - libcontainer container kubepods-besteffort-podcded9a92_3f16_43ba_9be0_c4c52f9707eb.slice. 
Jul 7 00:42:50.150152 containerd[1927]: time="2025-07-07T00:42:50.149935553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6596fcc467-rct8b,Uid:6425457a-4451-4850-8ee7-c83c8d55e6da,Namespace:calico-system,Attempt:0,}" Jul 7 00:42:50.158129 containerd[1927]: time="2025-07-07T00:42:50.158073869Z" level=info msg="connecting to shim 455ae5d0d7fc875f4788cb3a0157a4f9fef045bd4a8a322f9394ac7473187d5e" address="unix:///run/containerd/s/ddf0124a8c7e39fe7f479bb0cb95133a5686b8f67cc0dda54cac251b02e8ec9e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:42:50.184300 systemd[1]: Started cri-containerd-455ae5d0d7fc875f4788cb3a0157a4f9fef045bd4a8a322f9394ac7473187d5e.scope - libcontainer container 455ae5d0d7fc875f4788cb3a0157a4f9fef045bd4a8a322f9394ac7473187d5e. Jul 7 00:42:50.225632 containerd[1927]: time="2025-07-07T00:42:50.225596176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6596fcc467-rct8b,Uid:6425457a-4451-4850-8ee7-c83c8d55e6da,Namespace:calico-system,Attempt:0,} returns sandbox id \"455ae5d0d7fc875f4788cb3a0157a4f9fef045bd4a8a322f9394ac7473187d5e\"" Jul 7 00:42:50.226728 containerd[1927]: time="2025-07-07T00:42:50.226710942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:42:50.244941 kubelet[3285]: I0707 00:42:50.244891 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cded9a92-3f16-43ba-9be0-c4c52f9707eb-node-certs\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.244941 kubelet[3285]: I0707 00:42:50.244914 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-xtables-lock\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.244941 kubelet[3285]: I0707 00:42:50.244927 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cded9a92-3f16-43ba-9be0-c4c52f9707eb-tigera-ca-bundle\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.244941 kubelet[3285]: I0707 00:42:50.244937 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-cni-bin-dir\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.244941 kubelet[3285]: I0707 00:42:50.244946 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-var-lib-calico\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.245087 kubelet[3285]: I0707 00:42:50.244956 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glfs2\" (UniqueName: \"kubernetes.io/projected/cded9a92-3f16-43ba-9be0-c4c52f9707eb-kube-api-access-glfs2\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " 
pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.245087 kubelet[3285]: I0707 00:42:50.244972 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-cni-net-dir\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.245087 kubelet[3285]: I0707 00:42:50.244998 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-flexvol-driver-host\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.245087 kubelet[3285]: I0707 00:42:50.245017 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-policysync\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.245087 kubelet[3285]: I0707 00:42:50.245031 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-var-run-calico\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.245180 kubelet[3285]: I0707 00:42:50.245044 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-cni-log-dir\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.245180 kubelet[3285]: I0707 00:42:50.245060 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cded9a92-3f16-43ba-9be0-c4c52f9707eb-lib-modules\") pod \"calico-node-dxcr6\" (UID: \"cded9a92-3f16-43ba-9be0-c4c52f9707eb\") " pod="calico-system/calico-node-dxcr6" Jul 7 00:42:50.308986 kubelet[3285]: E0707 00:42:50.308945 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwbvc" podUID="0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816" Jul 7 00:42:50.345736 kubelet[3285]: I0707 00:42:50.345610 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816-varrun\") pod \"csi-node-driver-gwbvc\" (UID: \"0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816\") " pod="calico-system/csi-node-driver-gwbvc" Jul 7 00:42:50.345736 kubelet[3285]: I0707 00:42:50.345740 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816-kubelet-dir\") pod \"csi-node-driver-gwbvc\" (UID: \"0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816\") " pod="calico-system/csi-node-driver-gwbvc" Jul 7 00:42:50.346091 kubelet[3285]: I0707 00:42:50.345961 3285 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816-socket-dir\") pod \"csi-node-driver-gwbvc\" (UID: \"0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816\") " pod="calico-system/csi-node-driver-gwbvc" Jul 7 00:42:50.346091 kubelet[3285]: I0707 00:42:50.346037 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816-registration-dir\") pod \"csi-node-driver-gwbvc\" (UID: \"0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816\") " pod="calico-system/csi-node-driver-gwbvc" Jul 7 00:42:50.346301 kubelet[3285]: I0707 00:42:50.346100 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6r4p\" (UniqueName: \"kubernetes.io/projected/0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816-kube-api-access-r6r4p\") pod \"csi-node-driver-gwbvc\" (UID: \"0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816\") " pod="calico-system/csi-node-driver-gwbvc" Jul 7 00:42:50.348250 kubelet[3285]: E0707 00:42:50.348190 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.348250 kubelet[3285]: W0707 00:42:50.348230 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.348617 kubelet[3285]: E0707 00:42:50.348267 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.353376 kubelet[3285]: E0707 00:42:50.353323 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.353376 kubelet[3285]: W0707 00:42:50.353362 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.353625 kubelet[3285]: E0707 00:42:50.353396 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.364051 kubelet[3285]: E0707 00:42:50.364000 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.364051 kubelet[3285]: W0707 00:42:50.364046 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.364308 kubelet[3285]: E0707 00:42:50.364092 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:50.373988 containerd[1927]: time="2025-07-07T00:42:50.373926657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dxcr6,Uid:cded9a92-3f16-43ba-9be0-c4c52f9707eb,Namespace:calico-system,Attempt:0,}" Jul 7 00:42:50.381576 containerd[1927]: time="2025-07-07T00:42:50.381551346Z" level=info msg="connecting to shim 0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e" address="unix:///run/containerd/s/586d6a75f354781624b09e046dec70fd606099a8a992963f4bb89cfe11f2cb10" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:42:50.397302 systemd[1]: Started cri-containerd-0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e.scope - libcontainer container 0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e. Jul 7 00:42:50.418423 containerd[1927]: time="2025-07-07T00:42:50.418342634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dxcr6,Uid:cded9a92-3f16-43ba-9be0-c4c52f9707eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e\"" Jul 7 00:42:50.448014 kubelet[3285]: E0707 00:42:50.447919 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.448014 kubelet[3285]: W0707 00:42:50.447959 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.448014 kubelet[3285]: E0707 00:42:50.447997 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.448639 kubelet[3285]: E0707 00:42:50.448558 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.448639 kubelet[3285]: W0707 00:42:50.448591 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.448639 kubelet[3285]: E0707 00:42:50.448620 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.449199 kubelet[3285]: E0707 00:42:50.449116 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.449199 kubelet[3285]: W0707 00:42:50.449188 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.449480 kubelet[3285]: E0707 00:42:50.449236 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:50.449912 kubelet[3285]: E0707 00:42:50.449818 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.449912 kubelet[3285]: W0707 00:42:50.449858 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.449912 kubelet[3285]: E0707 00:42:50.449893 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.450337 kubelet[3285]: E0707 00:42:50.450303 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.450337 kubelet[3285]: W0707 00:42:50.450327 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.450535 kubelet[3285]: E0707 00:42:50.450354 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.450821 kubelet[3285]: E0707 00:42:50.450752 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.450821 kubelet[3285]: W0707 00:42:50.450775 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.450821 kubelet[3285]: E0707 00:42:50.450798 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.451238 kubelet[3285]: E0707 00:42:50.451203 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.451238 kubelet[3285]: W0707 00:42:50.451226 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.451434 kubelet[3285]: E0707 00:42:50.451249 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.451653 kubelet[3285]: E0707 00:42:50.451576 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.451653 kubelet[3285]: W0707 00:42:50.451598 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.451653 kubelet[3285]: E0707 00:42:50.451620 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:50.452023 kubelet[3285]: E0707 00:42:50.451950 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.452023 kubelet[3285]: W0707 00:42:50.451970 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.452023 kubelet[3285]: E0707 00:42:50.451991 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.452368 kubelet[3285]: E0707 00:42:50.452328 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.452368 kubelet[3285]: W0707 00:42:50.452348 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.452368 kubelet[3285]: E0707 00:42:50.452369 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.452788 kubelet[3285]: E0707 00:42:50.452724 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.452788 kubelet[3285]: W0707 00:42:50.452745 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.452788 kubelet[3285]: E0707 00:42:50.452769 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.453392 kubelet[3285]: E0707 00:42:50.453332 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.453392 kubelet[3285]: W0707 00:42:50.453364 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.453632 kubelet[3285]: E0707 00:42:50.453395 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.453861 kubelet[3285]: E0707 00:42:50.453807 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.453861 kubelet[3285]: W0707 00:42:50.453831 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.453861 kubelet[3285]: E0707 00:42:50.453857 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:50.454277 kubelet[3285]: E0707 00:42:50.454229 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.454277 kubelet[3285]: W0707 00:42:50.454252 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.454277 kubelet[3285]: E0707 00:42:50.454274 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.454751 kubelet[3285]: E0707 00:42:50.454705 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.454751 kubelet[3285]: W0707 00:42:50.454727 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.454751 kubelet[3285]: E0707 00:42:50.454748 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.455199 kubelet[3285]: E0707 00:42:50.455115 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.455199 kubelet[3285]: W0707 00:42:50.455162 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.455199 kubelet[3285]: E0707 00:42:50.455185 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.455624 kubelet[3285]: E0707 00:42:50.455565 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.455624 kubelet[3285]: W0707 00:42:50.455587 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.455624 kubelet[3285]: E0707 00:42:50.455609 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.456040 kubelet[3285]: E0707 00:42:50.455990 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.456040 kubelet[3285]: W0707 00:42:50.456011 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.456040 kubelet[3285]: E0707 00:42:50.456032 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:50.456494 kubelet[3285]: E0707 00:42:50.456465 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.456494 kubelet[3285]: W0707 00:42:50.456491 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.456677 kubelet[3285]: E0707 00:42:50.456515 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.456976 kubelet[3285]: E0707 00:42:50.456950 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.457063 kubelet[3285]: W0707 00:42:50.456974 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.457063 kubelet[3285]: E0707 00:42:50.456999 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.457403 kubelet[3285]: E0707 00:42:50.457377 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.457403 kubelet[3285]: W0707 00:42:50.457399 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.457613 kubelet[3285]: E0707 00:42:50.457421 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.457906 kubelet[3285]: E0707 00:42:50.457880 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.457993 kubelet[3285]: W0707 00:42:50.457904 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.457993 kubelet[3285]: E0707 00:42:50.457928 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.458543 kubelet[3285]: E0707 00:42:50.458482 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.458543 kubelet[3285]: W0707 00:42:50.458507 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.458543 kubelet[3285]: E0707 00:42:50.458530 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:50.458955 kubelet[3285]: E0707 00:42:50.458902 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.458955 kubelet[3285]: W0707 00:42:50.458923 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.458955 kubelet[3285]: E0707 00:42:50.458944 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.459450 kubelet[3285]: E0707 00:42:50.459399 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.459450 kubelet[3285]: W0707 00:42:50.459421 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.459450 kubelet[3285]: E0707 00:42:50.459444 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:50.477541 kubelet[3285]: E0707 00:42:50.477480 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:50.477541 kubelet[3285]: W0707 00:42:50.477517 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:50.477934 kubelet[3285]: E0707 00:42:50.477554 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:51.651972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241367453.mount: Deactivated successfully. 
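[Editor's note] The repeated driver-call.go and plugins.go errors above share one cause: kubelet's FlexVolume prober finds the nodeagent~uds directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the uds executable is not installed yet (the flexvol-driver init container that provides it only starts later in this log), so the "init" call produces no output and unmarshalling the empty response fails. A minimal Go sketch of that failure mode follows; it is an illustration with the driver path taken from the log, not kubelet's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Driver path copied from the kubelet errors above; the binary is not
	// installed yet, so invoking it fails and yields no output at all.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	// Unmarshalling the empty output reproduces the error repeated in the
	// log: "unexpected end of JSON input".
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output:", err)
	}
}
```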
Jul 7 00:42:51.865629 kubelet[3285]: E0707 00:42:51.865567 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwbvc" podUID="0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816" Jul 7 00:42:52.339002 containerd[1927]: time="2025-07-07T00:42:52.338971267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:52.339319 containerd[1927]: time="2025-07-07T00:42:52.339132506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:42:52.339477 containerd[1927]: time="2025-07-07T00:42:52.339463741Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:52.340372 containerd[1927]: time="2025-07-07T00:42:52.340360816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:52.340731 containerd[1927]: time="2025-07-07T00:42:52.340719631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.113988458s" Jul 7 00:42:52.340768 containerd[1927]: time="2025-07-07T00:42:52.340735999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:42:52.341154 containerd[1927]: time="2025-07-07T00:42:52.341143900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:42:52.344803 containerd[1927]: time="2025-07-07T00:42:52.344754963Z" level=info msg="CreateContainer within sandbox \"455ae5d0d7fc875f4788cb3a0157a4f9fef045bd4a8a322f9394ac7473187d5e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:42:52.347456 containerd[1927]: time="2025-07-07T00:42:52.347420114Z" level=info msg="Container 4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:42:52.350109 containerd[1927]: time="2025-07-07T00:42:52.350097523Z" level=info msg="CreateContainer within sandbox \"455ae5d0d7fc875f4788cb3a0157a4f9fef045bd4a8a322f9394ac7473187d5e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a\"" Jul 7 00:42:52.350354 containerd[1927]: time="2025-07-07T00:42:52.350344383Z" level=info msg="StartContainer for \"4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a\"" Jul 7 00:42:52.350862 containerd[1927]: time="2025-07-07T00:42:52.350851902Z" level=info msg="connecting to shim 4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a" address="unix:///run/containerd/s/ddf0124a8c7e39fe7f479bb0cb95133a5686b8f67cc0dda54cac251b02e8ec9e" protocol=ttrpc version=3 Jul 7 00:42:52.370249 systemd[1]: Started 
cri-containerd-4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a.scope - libcontainer container 4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a. Jul 7 00:42:52.403206 containerd[1927]: time="2025-07-07T00:42:52.403152560Z" level=info msg="StartContainer for \"4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a\" returns successfully" Jul 7 00:42:52.927570 kubelet[3285]: I0707 00:42:52.927511 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6596fcc467-rct8b" podStartSLOduration=1.812975053 podStartE2EDuration="3.927490838s" podCreationTimestamp="2025-07-07 00:42:49 +0000 UTC" firstStartedPulling="2025-07-07 00:42:50.226569906 +0000 UTC m=+14.405615016" lastFinishedPulling="2025-07-07 00:42:52.341085689 +0000 UTC m=+16.520130801" observedRunningTime="2025-07-07 00:42:52.927344854 +0000 UTC m=+17.106390025" watchObservedRunningTime="2025-07-07 00:42:52.927490838 +0000 UTC m=+17.106535971" Jul 7 00:42:52.961080 kubelet[3285]: E0707 00:42:52.960974 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.961080 kubelet[3285]: W0707 00:42:52.961024 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.961080 kubelet[3285]: E0707 00:42:52.961065 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.961677 kubelet[3285]: E0707 00:42:52.961588 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.961677 kubelet[3285]: W0707 00:42:52.961621 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.961677 kubelet[3285]: E0707 00:42:52.961652 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.962115 kubelet[3285]: E0707 00:42:52.962085 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.962115 kubelet[3285]: W0707 00:42:52.962109 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.962347 kubelet[3285]: E0707 00:42:52.962164 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:52.962700 kubelet[3285]: E0707 00:42:52.962634 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.962700 kubelet[3285]: W0707 00:42:52.962657 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.962700 kubelet[3285]: E0707 00:42:52.962680 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.963062 kubelet[3285]: E0707 00:42:52.963030 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.963062 kubelet[3285]: W0707 00:42:52.963059 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.963277 kubelet[3285]: E0707 00:42:52.963081 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.963434 kubelet[3285]: E0707 00:42:52.963402 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.963531 kubelet[3285]: W0707 00:42:52.963435 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.963531 kubelet[3285]: E0707 00:42:52.963458 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.963775 kubelet[3285]: E0707 00:42:52.963752 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.963870 kubelet[3285]: W0707 00:42:52.963774 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.963870 kubelet[3285]: E0707 00:42:52.963796 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.964112 kubelet[3285]: E0707 00:42:52.964088 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.964227 kubelet[3285]: W0707 00:42:52.964110 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.964227 kubelet[3285]: E0707 00:42:52.964152 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:52.964490 kubelet[3285]: E0707 00:42:52.964465 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.964490 kubelet[3285]: W0707 00:42:52.964487 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.964662 kubelet[3285]: E0707 00:42:52.964508 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.964871 kubelet[3285]: E0707 00:42:52.964848 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.964871 kubelet[3285]: W0707 00:42:52.964869 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.965041 kubelet[3285]: E0707 00:42:52.964890 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.965310 kubelet[3285]: E0707 00:42:52.965273 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.965423 kubelet[3285]: W0707 00:42:52.965311 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.965423 kubelet[3285]: E0707 00:42:52.965346 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.965837 kubelet[3285]: E0707 00:42:52.965812 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.965943 kubelet[3285]: W0707 00:42:52.965836 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.965943 kubelet[3285]: E0707 00:42:52.965861 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.966234 kubelet[3285]: E0707 00:42:52.966207 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.966234 kubelet[3285]: W0707 00:42:52.966231 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.966445 kubelet[3285]: E0707 00:42:52.966253 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:52.966610 kubelet[3285]: E0707 00:42:52.966587 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.966705 kubelet[3285]: W0707 00:42:52.966609 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.966705 kubelet[3285]: E0707 00:42:52.966630 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.967019 kubelet[3285]: E0707 00:42:52.966996 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.967138 kubelet[3285]: W0707 00:42:52.967019 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.967138 kubelet[3285]: E0707 00:42:52.967040 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.970664 kubelet[3285]: E0707 00:42:52.970624 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.970794 kubelet[3285]: W0707 00:42:52.970665 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.970794 kubelet[3285]: E0707 00:42:52.970700 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.971212 kubelet[3285]: E0707 00:42:52.971179 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.971212 kubelet[3285]: W0707 00:42:52.971210 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.971468 kubelet[3285]: E0707 00:42:52.971237 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.971718 kubelet[3285]: E0707 00:42:52.971685 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.971718 kubelet[3285]: W0707 00:42:52.971712 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.971894 kubelet[3285]: E0707 00:42:52.971737 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:52.972286 kubelet[3285]: E0707 00:42:52.972237 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.972427 kubelet[3285]: W0707 00:42:52.972285 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.972427 kubelet[3285]: E0707 00:42:52.972318 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.972698 kubelet[3285]: E0707 00:42:52.972670 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.972698 kubelet[3285]: W0707 00:42:52.972693 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.972883 kubelet[3285]: E0707 00:42:52.972717 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.973225 kubelet[3285]: E0707 00:42:52.973182 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.973225 kubelet[3285]: W0707 00:42:52.973206 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.973225 kubelet[3285]: E0707 00:42:52.973229 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.973678 kubelet[3285]: E0707 00:42:52.973596 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.973678 kubelet[3285]: W0707 00:42:52.973618 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.973678 kubelet[3285]: E0707 00:42:52.973640 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.974080 kubelet[3285]: E0707 00:42:52.973979 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.974080 kubelet[3285]: W0707 00:42:52.973999 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.974080 kubelet[3285]: E0707 00:42:52.974022 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:52.974501 kubelet[3285]: E0707 00:42:52.974463 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.974501 kubelet[3285]: W0707 00:42:52.974484 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.974688 kubelet[3285]: E0707 00:42:52.974507 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.975145 kubelet[3285]: E0707 00:42:52.975086 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.975263 kubelet[3285]: W0707 00:42:52.975147 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.975263 kubelet[3285]: E0707 00:42:52.975195 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.975669 kubelet[3285]: E0707 00:42:52.975635 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.975775 kubelet[3285]: W0707 00:42:52.975669 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.975775 kubelet[3285]: E0707 00:42:52.975704 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.976175 kubelet[3285]: E0707 00:42:52.976116 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.976175 kubelet[3285]: W0707 00:42:52.976166 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.976384 kubelet[3285]: E0707 00:42:52.976204 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.976679 kubelet[3285]: E0707 00:42:52.976648 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.976786 kubelet[3285]: W0707 00:42:52.976680 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.976786 kubelet[3285]: E0707 00:42:52.976717 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:52.977317 kubelet[3285]: E0707 00:42:52.977234 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.977317 kubelet[3285]: W0707 00:42:52.977268 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.977598 kubelet[3285]: E0707 00:42:52.977306 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.977967 kubelet[3285]: E0707 00:42:52.977919 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.977967 kubelet[3285]: W0707 00:42:52.977951 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.978316 kubelet[3285]: E0707 00:42:52.977982 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.978531 kubelet[3285]: E0707 00:42:52.978485 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.978531 kubelet[3285]: W0707 00:42:52.978519 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.978855 kubelet[3285]: E0707 00:42:52.978550 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.979065 kubelet[3285]: E0707 00:42:52.979028 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.979065 kubelet[3285]: W0707 00:42:52.979054 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.979370 kubelet[3285]: E0707 00:42:52.979080 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:42:52.979622 kubelet[3285]: E0707 00:42:52.979582 3285 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:42:52.979622 kubelet[3285]: W0707 00:42:52.979611 3285 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:42:52.979929 kubelet[3285]: E0707 00:42:52.979638 3285 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:42:53.724969 containerd[1927]: time="2025-07-07T00:42:53.724940158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:53.725203 containerd[1927]: time="2025-07-07T00:42:53.725133874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:42:53.725546 containerd[1927]: time="2025-07-07T00:42:53.725534481Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:53.726303 containerd[1927]: time="2025-07-07T00:42:53.726291398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:53.726683 containerd[1927]: time="2025-07-07T00:42:53.726673335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.385515827s" Jul 7 00:42:53.726704 containerd[1927]: time="2025-07-07T00:42:53.726687810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:42:53.728105 containerd[1927]: time="2025-07-07T00:42:53.728092452Z" level=info msg="CreateContainer within sandbox \"0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:42:53.731386 containerd[1927]: time="2025-07-07T00:42:53.731374895Z" level=info msg="Container 6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:42:53.734438 containerd[1927]: time="2025-07-07T00:42:53.734425364Z" level=info msg="CreateContainer within sandbox \"0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b\"" Jul 7 00:42:53.734702 containerd[1927]: time="2025-07-07T00:42:53.734665156Z" level=info msg="StartContainer for \"6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b\"" Jul 7 00:42:53.735433 containerd[1927]: time="2025-07-07T00:42:53.735393075Z" level=info msg="connecting to shim 6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b" address="unix:///run/containerd/s/586d6a75f354781624b09e046dec70fd606099a8a992963f4bb89cfe11f2cb10" protocol=ttrpc version=3 Jul 7 00:42:53.768267 systemd[1]: Started cri-containerd-6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b.scope - libcontainer container 6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b. 
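[Editor's note] The flexvol-driver container started above (image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2, inside the calico-node sandbox 0db42cc1...) is, per Calico's design, responsible for installing the uds driver binary into the host plugin directory, which is what eventually ends the FlexVolume probe errors. A small, hypothetical readiness check against the path named in those errors (an assumption, not part of the log):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the kubelet FlexVolume errors earlier in the log; the
	// flexvol-driver init container is expected to place the binary here.
	const udsDriver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(udsDriver)
	if err != nil {
		fmt.Println("driver not installed yet:", err)
		return
	}
	// Once a regular, executable file exists here, the prober's "init" call
	// can return real JSON instead of empty output.
	fmt.Printf("driver present: %s (mode %v, %d bytes)\n", udsDriver, info.Mode(), info.Size())
}
```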
Jul 7 00:42:53.790144 containerd[1927]: time="2025-07-07T00:42:53.790115574Z" level=info msg="StartContainer for \"6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b\" returns successfully" Jul 7 00:42:53.794369 systemd[1]: cri-containerd-6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b.scope: Deactivated successfully. Jul 7 00:42:53.795718 containerd[1927]: time="2025-07-07T00:42:53.795696203Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b\" id:\"6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b\" pid:4080 exited_at:{seconds:1751848973 nanos:795378690}" Jul 7 00:42:53.795771 containerd[1927]: time="2025-07-07T00:42:53.795735228Z" level=info msg="received exit event container_id:\"6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b\" id:\"6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b\" pid:4080 exited_at:{seconds:1751848973 nanos:795378690}" Jul 7 00:42:53.808022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b-rootfs.mount: Deactivated successfully. Jul 7 00:42:53.865872 kubelet[3285]: E0707 00:42:53.865745 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwbvc" podUID="0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816" Jul 7 00:42:53.927534 kubelet[3285]: I0707 00:42:53.927452 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:42:54.930971 containerd[1927]: time="2025-07-07T00:42:54.930921115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:42:55.866280 kubelet[3285]: E0707 00:42:55.866203 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gwbvc" podUID="0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816" Jul 7 00:42:57.105282 containerd[1927]: time="2025-07-07T00:42:57.105255190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:57.105481 containerd[1927]: time="2025-07-07T00:42:57.105371413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:42:57.105805 containerd[1927]: time="2025-07-07T00:42:57.105766613Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:57.106645 containerd[1927]: time="2025-07-07T00:42:57.106605113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:42:57.106974 containerd[1927]: time="2025-07-07T00:42:57.106931432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.175959214s" Jul 7 00:42:57.106974 containerd[1927]: time="2025-07-07T00:42:57.106946569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:42:57.108395 containerd[1927]: time="2025-07-07T00:42:57.108382324Z" level=info msg="CreateContainer within sandbox \"0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:42:57.112011 containerd[1927]: time="2025-07-07T00:42:57.111966713Z" level=info msg="Container c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:42:57.115314 containerd[1927]: time="2025-07-07T00:42:57.115300867Z" level=info msg="CreateContainer within sandbox \"0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec\"" Jul 7 00:42:57.115548 containerd[1927]: time="2025-07-07T00:42:57.115537308Z" level=info msg="StartContainer for \"c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec\"" Jul 7 00:42:57.116316 containerd[1927]: time="2025-07-07T00:42:57.116304472Z" level=info msg="connecting to shim c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec" address="unix:///run/containerd/s/586d6a75f354781624b09e046dec70fd606099a8a992963f4bb89cfe11f2cb10" protocol=ttrpc version=3 Jul 7 00:42:57.132450 systemd[1]: Started cri-containerd-c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec.scope - libcontainer container c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec. Jul 7 00:42:57.151037 containerd[1927]: time="2025-07-07T00:42:57.151012685Z" level=info msg="StartContainer for \"c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec\" returns successfully" Jul 7 00:42:57.757795 containerd[1927]: time="2025-07-07T00:42:57.757763086Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:42:57.759167 systemd[1]: cri-containerd-c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec.scope: Deactivated successfully. Jul 7 00:42:57.759401 systemd[1]: cri-containerd-c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec.scope: Consumed 354ms CPU time, 195.8M memory peak, 171.2M written to disk. 
Jul 7 00:42:57.759950 containerd[1927]: time="2025-07-07T00:42:57.759928776Z" level=info msg="received exit event container_id:\"c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec\" id:\"c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec\" pid:4138 exited_at:{seconds:1751848977 nanos:759771555}" Jul 7 00:42:57.760032 containerd[1927]: time="2025-07-07T00:42:57.760013711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec\" id:\"c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec\" pid:4138 exited_at:{seconds:1751848977 nanos:759771555}" Jul 7 00:42:57.779638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec-rootfs.mount: Deactivated successfully. Jul 7 00:42:57.792664 kubelet[3285]: I0707 00:42:57.792646 3285 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:42:57.867170 systemd[1]: Created slice kubepods-besteffort-podd360f177_d589_4eae_9ea1_90a4124152c3.slice - libcontainer container kubepods-besteffort-podd360f177_d589_4eae_9ea1_90a4124152c3.slice. Jul 7 00:42:57.908062 kubelet[3285]: I0707 00:42:57.907921 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpv7\" (UniqueName: \"kubernetes.io/projected/d360f177-d589-4eae-9ea1-90a4124152c3-kube-api-access-bgpv7\") pod \"calico-apiserver-6cd87677cd-smbqf\" (UID: \"d360f177-d589-4eae-9ea1-90a4124152c3\") " pod="calico-apiserver/calico-apiserver-6cd87677cd-smbqf" Jul 7 00:42:57.908460 kubelet[3285]: I0707 00:42:57.908188 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d360f177-d589-4eae-9ea1-90a4124152c3-calico-apiserver-certs\") pod \"calico-apiserver-6cd87677cd-smbqf\" (UID: \"d360f177-d589-4eae-9ea1-90a4124152c3\") " pod="calico-apiserver/calico-apiserver-6cd87677cd-smbqf" Jul 7 00:42:57.940029 systemd[1]: Created slice kubepods-burstable-pod1c87a2da_0164_48e5_8bac_03563358ac5b.slice - libcontainer container kubepods-burstable-pod1c87a2da_0164_48e5_8bac_03563358ac5b.slice. Jul 7 00:42:57.956757 systemd[1]: Created slice kubepods-besteffort-pod0fa52e8a_71d9_43fe_a8ae_8d8b2dcee816.slice - libcontainer container kubepods-besteffort-pod0fa52e8a_71d9_43fe_a8ae_8d8b2dcee816.slice. Jul 7 00:42:57.961730 containerd[1927]: time="2025-07-07T00:42:57.961707124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwbvc,Uid:0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816,Namespace:calico-system,Attempt:0,}" Jul 7 00:42:57.965646 systemd[1]: Created slice kubepods-besteffort-pod7d88211e_caac_49a1_a5e3_5c00be80871c.slice - libcontainer container kubepods-besteffort-pod7d88211e_caac_49a1_a5e3_5c00be80871c.slice. Jul 7 00:42:58.006259 systemd[1]: Created slice kubepods-besteffort-pod523f9ddf_3435_4ff2_8804_18d2521954b7.slice - libcontainer container kubepods-besteffort-pod523f9ddf_3435_4ff2_8804_18d2521954b7.slice. 
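[Editor's note] The "Created slice" entries above and below follow a consistent naming pattern for pod cgroups: the QoS class plus the pod UID with dashes replaced by underscores. The sketch below reproduces that mapping from a UID visible in the log; it mirrors the observed names, not kubelet's exact code.

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming pattern visible in the "Created slice"
// entries: kubepods-<qosClass>-pod<uid-with-underscores>.slice.
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the calico-apiserver-6cd87677cd-smbqf entries in the log.
	fmt.Println(sliceName("besteffort", "d360f177-d589-4eae-9ea1-90a4124152c3"))
	// Expected: kubepods-besteffort-podd360f177_d589_4eae_9ea1_90a4124152c3.slice
}
```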
Jul 7 00:42:58.009270 kubelet[3285]: I0707 00:42:58.009236 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/523f9ddf-3435-4ff2-8804-18d2521954b7-tigera-ca-bundle\") pod \"calico-kube-controllers-668b8d6659-jwwj5\" (UID: \"523f9ddf-3435-4ff2-8804-18d2521954b7\") " pod="calico-system/calico-kube-controllers-668b8d6659-jwwj5" Jul 7 00:42:58.009270 kubelet[3285]: I0707 00:42:58.009257 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7d88211e-caac-49a1-a5e3-5c00be80871c-goldmane-key-pair\") pod \"goldmane-768f4c5c69-xgnbs\" (UID: \"7d88211e-caac-49a1-a5e3-5c00be80871c\") " pod="calico-system/goldmane-768f4c5c69-xgnbs" Jul 7 00:42:58.009343 kubelet[3285]: I0707 00:42:58.009308 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c87a2da-0164-48e5-8bac-03563358ac5b-config-volume\") pod \"coredns-674b8bbfcf-r5xlc\" (UID: \"1c87a2da-0164-48e5-8bac-03563358ac5b\") " pod="kube-system/coredns-674b8bbfcf-r5xlc" Jul 7 00:42:58.009343 kubelet[3285]: I0707 00:42:58.009323 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bkx9\" (UniqueName: \"kubernetes.io/projected/7d88211e-caac-49a1-a5e3-5c00be80871c-kube-api-access-9bkx9\") pod \"goldmane-768f4c5c69-xgnbs\" (UID: \"7d88211e-caac-49a1-a5e3-5c00be80871c\") " pod="calico-system/goldmane-768f4c5c69-xgnbs" Jul 7 00:42:58.009379 kubelet[3285]: I0707 00:42:58.009343 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d88211e-caac-49a1-a5e3-5c00be80871c-config\") pod \"goldmane-768f4c5c69-xgnbs\" (UID: \"7d88211e-caac-49a1-a5e3-5c00be80871c\") " pod="calico-system/goldmane-768f4c5c69-xgnbs" Jul 7 00:42:58.009379 kubelet[3285]: I0707 00:42:58.009360 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v2h7\" (UniqueName: \"kubernetes.io/projected/523f9ddf-3435-4ff2-8804-18d2521954b7-kube-api-access-9v2h7\") pod \"calico-kube-controllers-668b8d6659-jwwj5\" (UID: \"523f9ddf-3435-4ff2-8804-18d2521954b7\") " pod="calico-system/calico-kube-controllers-668b8d6659-jwwj5" Jul 7 00:42:58.009379 kubelet[3285]: I0707 00:42:58.009376 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbzsz\" (UniqueName: \"kubernetes.io/projected/1c87a2da-0164-48e5-8bac-03563358ac5b-kube-api-access-mbzsz\") pod \"coredns-674b8bbfcf-r5xlc\" (UID: \"1c87a2da-0164-48e5-8bac-03563358ac5b\") " pod="kube-system/coredns-674b8bbfcf-r5xlc" Jul 7 00:42:58.009432 kubelet[3285]: I0707 00:42:58.009386 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d88211e-caac-49a1-a5e3-5c00be80871c-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-xgnbs\" (UID: \"7d88211e-caac-49a1-a5e3-5c00be80871c\") " pod="calico-system/goldmane-768f4c5c69-xgnbs" Jul 7 00:42:58.057163 systemd[1]: Created slice kubepods-besteffort-poddff3c464_6ef5_42de_9c4e_b3e314d4259b.slice - libcontainer container kubepods-besteffort-poddff3c464_6ef5_42de_9c4e_b3e314d4259b.slice. 
Jul 7 00:42:58.091144 containerd[1927]: time="2025-07-07T00:42:58.091102204Z" level=error msg="Failed to destroy network for sandbox \"26135d33fe005c7ecec341556e5752d203783c746814c1c5a4909cd1e3fac12d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.110051 kubelet[3285]: I0707 00:42:58.109934 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dff3c464-6ef5-42de-9c4e-b3e314d4259b-whisker-backend-key-pair\") pod \"whisker-68cb56f76f-bxcm4\" (UID: \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\") " pod="calico-system/whisker-68cb56f76f-bxcm4" Jul 7 00:42:58.110510 kubelet[3285]: I0707 00:42:58.110389 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dff3c464-6ef5-42de-9c4e-b3e314d4259b-whisker-ca-bundle\") pod \"whisker-68cb56f76f-bxcm4\" (UID: \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\") " pod="calico-system/whisker-68cb56f76f-bxcm4" Jul 7 00:42:58.111193 kubelet[3285]: I0707 00:42:58.111084 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjd8c\" (UniqueName: \"kubernetes.io/projected/dff3c464-6ef5-42de-9c4e-b3e314d4259b-kube-api-access-sjd8c\") pod \"whisker-68cb56f76f-bxcm4\" (UID: \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\") " pod="calico-system/whisker-68cb56f76f-bxcm4" Jul 7 00:42:58.124214 systemd[1]: run-netns-cni\x2d952b43d6\x2d7713\x2dfb6f\x2d4aea\x2dfcf111b36258.mount: Deactivated successfully. Jul 7 00:42:58.150056 systemd[1]: Created slice kubepods-besteffort-pod6c6d49bf_06c0_4afe_841d_9180c456f4c7.slice - libcontainer container kubepods-besteffort-pod6c6d49bf_06c0_4afe_841d_9180c456f4c7.slice. 
Jul 7 00:42:58.161191 containerd[1927]: time="2025-07-07T00:42:58.161164173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwbvc,Uid:0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26135d33fe005c7ecec341556e5752d203783c746814c1c5a4909cd1e3fac12d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.161614 kubelet[3285]: E0707 00:42:58.161305 3285 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26135d33fe005c7ecec341556e5752d203783c746814c1c5a4909cd1e3fac12d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.161614 kubelet[3285]: E0707 00:42:58.161344 3285 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26135d33fe005c7ecec341556e5752d203783c746814c1c5a4909cd1e3fac12d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gwbvc" Jul 7 00:42:58.161614 kubelet[3285]: E0707 00:42:58.161357 3285 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26135d33fe005c7ecec341556e5752d203783c746814c1c5a4909cd1e3fac12d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gwbvc" Jul 7 00:42:58.161682 kubelet[3285]: E0707 00:42:58.161384 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gwbvc_calico-system(0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gwbvc_calico-system(0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26135d33fe005c7ecec341556e5752d203783c746814c1c5a4909cd1e3fac12d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gwbvc" podUID="0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816" Jul 7 00:42:58.164624 systemd[1]: Created slice kubepods-burstable-pod8a042587_3358_4c70_a680_4a64a2dac953.slice - libcontainer container kubepods-burstable-pod8a042587_3358_4c70_a680_4a64a2dac953.slice. 
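Every sandbox create and delete in this stretch fails with the same complaint because the Calico CNI plugin resolves the node's identity from /var/lib/calico/nodename, a file calico-node writes only once it is running with the host's /var/lib/calico/ mounted; until then both the ADD and DEL paths abort, which the kubelet then surfaces as CreatePodSandboxError. A minimal sketch of that check (not the plugin's actual code), assuming only the path quoted in the errors:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the error messages above; calico-node creates it once it is up.
	const nodenameFile = "/var/lib/calico/nodename"

	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the wording in the log: CNI ADD/DEL cannot proceed without the node name.
		fmt.Printf("plugin type=\"calico\" failed: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		return
	}
	fmt.Println("nodename file present; CNI ADD/DEL can resolve this node's identity")
}

calico-node is started a few seconds later in this log, after which retried sandboxes do get networks, as the whisker pod further down shows.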
Jul 7 00:42:58.182865 containerd[1927]: time="2025-07-07T00:42:58.182811376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd87677cd-smbqf,Uid:d360f177-d589-4eae-9ea1-90a4124152c3,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:42:58.206023 containerd[1927]: time="2025-07-07T00:42:58.205974636Z" level=error msg="Failed to destroy network for sandbox \"d4f86f7a2b7948c1187950ee7ed0591f94f31143ccad0b06917e8f0110d5b8c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.206431 containerd[1927]: time="2025-07-07T00:42:58.206391665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd87677cd-smbqf,Uid:d360f177-d589-4eae-9ea1-90a4124152c3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4f86f7a2b7948c1187950ee7ed0591f94f31143ccad0b06917e8f0110d5b8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.206575 kubelet[3285]: E0707 00:42:58.206528 3285 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4f86f7a2b7948c1187950ee7ed0591f94f31143ccad0b06917e8f0110d5b8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.206575 kubelet[3285]: E0707 00:42:58.206569 3285 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4f86f7a2b7948c1187950ee7ed0591f94f31143ccad0b06917e8f0110d5b8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd87677cd-smbqf" Jul 7 00:42:58.206633 kubelet[3285]: E0707 00:42:58.206583 3285 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4f86f7a2b7948c1187950ee7ed0591f94f31143ccad0b06917e8f0110d5b8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd87677cd-smbqf" Jul 7 00:42:58.206633 kubelet[3285]: E0707 00:42:58.206617 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd87677cd-smbqf_calico-apiserver(d360f177-d589-4eae-9ea1-90a4124152c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd87677cd-smbqf_calico-apiserver(d360f177-d589-4eae-9ea1-90a4124152c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4f86f7a2b7948c1187950ee7ed0591f94f31143ccad0b06917e8f0110d5b8c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd87677cd-smbqf" podUID="d360f177-d589-4eae-9ea1-90a4124152c3" Jul 7 00:42:58.207373 systemd[1]: 
run-netns-cni\x2de8fa408d\x2d1407\x2df464\x2d9cbd\x2db545cc061438.mount: Deactivated successfully. Jul 7 00:42:58.212301 kubelet[3285]: I0707 00:42:58.212262 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpx6r\" (UniqueName: \"kubernetes.io/projected/6c6d49bf-06c0-4afe-841d-9180c456f4c7-kube-api-access-kpx6r\") pod \"calico-apiserver-6cd87677cd-wzhxw\" (UID: \"6c6d49bf-06c0-4afe-841d-9180c456f4c7\") " pod="calico-apiserver/calico-apiserver-6cd87677cd-wzhxw" Jul 7 00:42:58.212301 kubelet[3285]: I0707 00:42:58.212278 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4gw\" (UniqueName: \"kubernetes.io/projected/8a042587-3358-4c70-a680-4a64a2dac953-kube-api-access-nl4gw\") pod \"coredns-674b8bbfcf-757b8\" (UID: \"8a042587-3358-4c70-a680-4a64a2dac953\") " pod="kube-system/coredns-674b8bbfcf-757b8" Jul 7 00:42:58.212355 kubelet[3285]: I0707 00:42:58.212313 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6c6d49bf-06c0-4afe-841d-9180c456f4c7-calico-apiserver-certs\") pod \"calico-apiserver-6cd87677cd-wzhxw\" (UID: \"6c6d49bf-06c0-4afe-841d-9180c456f4c7\") " pod="calico-apiserver/calico-apiserver-6cd87677cd-wzhxw" Jul 7 00:42:58.212451 kubelet[3285]: I0707 00:42:58.212397 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a042587-3358-4c70-a680-4a64a2dac953-config-volume\") pod \"coredns-674b8bbfcf-757b8\" (UID: \"8a042587-3358-4c70-a680-4a64a2dac953\") " pod="kube-system/coredns-674b8bbfcf-757b8" Jul 7 00:42:58.263771 containerd[1927]: time="2025-07-07T00:42:58.263559310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5xlc,Uid:1c87a2da-0164-48e5-8bac-03563358ac5b,Namespace:kube-system,Attempt:0,}" Jul 7 00:42:58.268040 containerd[1927]: time="2025-07-07T00:42:58.268011481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xgnbs,Uid:7d88211e-caac-49a1-a5e3-5c00be80871c,Namespace:calico-system,Attempt:0,}" Jul 7 00:42:58.287902 containerd[1927]: time="2025-07-07T00:42:58.287849395Z" level=error msg="Failed to destroy network for sandbox \"ff83e3af9f96e6def9f55777ea3dd48baa0c1631bef968781e3bcd910377f68a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.288384 containerd[1927]: time="2025-07-07T00:42:58.288362196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5xlc,Uid:1c87a2da-0164-48e5-8bac-03563358ac5b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff83e3af9f96e6def9f55777ea3dd48baa0c1631bef968781e3bcd910377f68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.288545 kubelet[3285]: E0707 00:42:58.288525 3285 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff83e3af9f96e6def9f55777ea3dd48baa0c1631bef968781e3bcd910377f68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.288580 kubelet[3285]: E0707 00:42:58.288563 3285 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff83e3af9f96e6def9f55777ea3dd48baa0c1631bef968781e3bcd910377f68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r5xlc" Jul 7 00:42:58.288613 kubelet[3285]: E0707 00:42:58.288581 3285 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff83e3af9f96e6def9f55777ea3dd48baa0c1631bef968781e3bcd910377f68a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r5xlc" Jul 7 00:42:58.288646 kubelet[3285]: E0707 00:42:58.288616 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r5xlc_kube-system(1c87a2da-0164-48e5-8bac-03563358ac5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r5xlc_kube-system(1c87a2da-0164-48e5-8bac-03563358ac5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff83e3af9f96e6def9f55777ea3dd48baa0c1631bef968781e3bcd910377f68a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r5xlc" podUID="1c87a2da-0164-48e5-8bac-03563358ac5b" Jul 7 00:42:58.291172 containerd[1927]: time="2025-07-07T00:42:58.291145643Z" level=error msg="Failed to destroy network for sandbox \"a0a1cea968536b55d019d4ac35f84682f717b7e08462ca8c2ecaee44f121eb21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.291623 containerd[1927]: time="2025-07-07T00:42:58.291608780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xgnbs,Uid:7d88211e-caac-49a1-a5e3-5c00be80871c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0a1cea968536b55d019d4ac35f84682f717b7e08462ca8c2ecaee44f121eb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.291716 kubelet[3285]: E0707 00:42:58.291697 3285 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0a1cea968536b55d019d4ac35f84682f717b7e08462ca8c2ecaee44f121eb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.291738 kubelet[3285]: E0707 00:42:58.291730 3285 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a0a1cea968536b55d019d4ac35f84682f717b7e08462ca8c2ecaee44f121eb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xgnbs" Jul 7 00:42:58.291761 kubelet[3285]: E0707 00:42:58.291743 3285 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0a1cea968536b55d019d4ac35f84682f717b7e08462ca8c2ecaee44f121eb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xgnbs" Jul 7 00:42:58.291782 kubelet[3285]: E0707 00:42:58.291767 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-xgnbs_calico-system(7d88211e-caac-49a1-a5e3-5c00be80871c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-xgnbs_calico-system(7d88211e-caac-49a1-a5e3-5c00be80871c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0a1cea968536b55d019d4ac35f84682f717b7e08462ca8c2ecaee44f121eb21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xgnbs" podUID="7d88211e-caac-49a1-a5e3-5c00be80871c" Jul 7 00:42:58.308558 containerd[1927]: time="2025-07-07T00:42:58.308515571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668b8d6659-jwwj5,Uid:523f9ddf-3435-4ff2-8804-18d2521954b7,Namespace:calico-system,Attempt:0,}" Jul 7 00:42:58.331803 containerd[1927]: time="2025-07-07T00:42:58.331776896Z" level=error msg="Failed to destroy network for sandbox \"656dc57b0f44241c10a1ff7c3900ee829e172d92386fbfc2431441c0d8144b67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.332213 containerd[1927]: time="2025-07-07T00:42:58.332197042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668b8d6659-jwwj5,Uid:523f9ddf-3435-4ff2-8804-18d2521954b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"656dc57b0f44241c10a1ff7c3900ee829e172d92386fbfc2431441c0d8144b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.332388 kubelet[3285]: E0707 00:42:58.332367 3285 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656dc57b0f44241c10a1ff7c3900ee829e172d92386fbfc2431441c0d8144b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.332420 kubelet[3285]: E0707 00:42:58.332405 3285 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656dc57b0f44241c10a1ff7c3900ee829e172d92386fbfc2431441c0d8144b67\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-668b8d6659-jwwj5" Jul 7 00:42:58.332439 kubelet[3285]: E0707 00:42:58.332419 3285 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656dc57b0f44241c10a1ff7c3900ee829e172d92386fbfc2431441c0d8144b67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-668b8d6659-jwwj5" Jul 7 00:42:58.332464 kubelet[3285]: E0707 00:42:58.332451 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-668b8d6659-jwwj5_calico-system(523f9ddf-3435-4ff2-8804-18d2521954b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-668b8d6659-jwwj5_calico-system(523f9ddf-3435-4ff2-8804-18d2521954b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"656dc57b0f44241c10a1ff7c3900ee829e172d92386fbfc2431441c0d8144b67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-668b8d6659-jwwj5" podUID="523f9ddf-3435-4ff2-8804-18d2521954b7" Jul 7 00:42:58.364549 containerd[1927]: time="2025-07-07T00:42:58.364449106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68cb56f76f-bxcm4,Uid:dff3c464-6ef5-42de-9c4e-b3e314d4259b,Namespace:calico-system,Attempt:0,}" Jul 7 00:42:58.389780 containerd[1927]: time="2025-07-07T00:42:58.389727316Z" level=error msg="Failed to destroy network for sandbox \"ad65ef16d8707bf444ad3cdf0e735246bf5fc63c7b2ce179a56082817b0a17c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.390208 containerd[1927]: time="2025-07-07T00:42:58.390154915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68cb56f76f-bxcm4,Uid:dff3c464-6ef5-42de-9c4e-b3e314d4259b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad65ef16d8707bf444ad3cdf0e735246bf5fc63c7b2ce179a56082817b0a17c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.390319 kubelet[3285]: E0707 00:42:58.390302 3285 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad65ef16d8707bf444ad3cdf0e735246bf5fc63c7b2ce179a56082817b0a17c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.390353 kubelet[3285]: E0707 00:42:58.390335 3285 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad65ef16d8707bf444ad3cdf0e735246bf5fc63c7b2ce179a56082817b0a17c5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68cb56f76f-bxcm4" Jul 7 00:42:58.390353 kubelet[3285]: E0707 00:42:58.390348 3285 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad65ef16d8707bf444ad3cdf0e735246bf5fc63c7b2ce179a56082817b0a17c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68cb56f76f-bxcm4" Jul 7 00:42:58.390397 kubelet[3285]: E0707 00:42:58.390378 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68cb56f76f-bxcm4_calico-system(dff3c464-6ef5-42de-9c4e-b3e314d4259b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68cb56f76f-bxcm4_calico-system(dff3c464-6ef5-42de-9c4e-b3e314d4259b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad65ef16d8707bf444ad3cdf0e735246bf5fc63c7b2ce179a56082817b0a17c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68cb56f76f-bxcm4" podUID="dff3c464-6ef5-42de-9c4e-b3e314d4259b" Jul 7 00:42:58.453154 containerd[1927]: time="2025-07-07T00:42:58.453030184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd87677cd-wzhxw,Uid:6c6d49bf-06c0-4afe-841d-9180c456f4c7,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:42:58.466615 containerd[1927]: time="2025-07-07T00:42:58.466560185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-757b8,Uid:8a042587-3358-4c70-a680-4a64a2dac953,Namespace:kube-system,Attempt:0,}" Jul 7 00:42:58.476201 containerd[1927]: time="2025-07-07T00:42:58.476176116Z" level=error msg="Failed to destroy network for sandbox \"9e8fdb17d038dae7b8ceff49e0e17c87722f1d6761df770dc6333c78e2780994\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.476625 containerd[1927]: time="2025-07-07T00:42:58.476608482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd87677cd-wzhxw,Uid:6c6d49bf-06c0-4afe-841d-9180c456f4c7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8fdb17d038dae7b8ceff49e0e17c87722f1d6761df770dc6333c78e2780994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.476760 kubelet[3285]: E0707 00:42:58.476740 3285 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8fdb17d038dae7b8ceff49e0e17c87722f1d6761df770dc6333c78e2780994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.476789 kubelet[3285]: E0707 00:42:58.476779 3285 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"9e8fdb17d038dae7b8ceff49e0e17c87722f1d6761df770dc6333c78e2780994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd87677cd-wzhxw" Jul 7 00:42:58.476807 kubelet[3285]: E0707 00:42:58.476795 3285 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8fdb17d038dae7b8ceff49e0e17c87722f1d6761df770dc6333c78e2780994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd87677cd-wzhxw" Jul 7 00:42:58.476837 kubelet[3285]: E0707 00:42:58.476825 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd87677cd-wzhxw_calico-apiserver(6c6d49bf-06c0-4afe-841d-9180c456f4c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd87677cd-wzhxw_calico-apiserver(6c6d49bf-06c0-4afe-841d-9180c456f4c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e8fdb17d038dae7b8ceff49e0e17c87722f1d6761df770dc6333c78e2780994\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd87677cd-wzhxw" podUID="6c6d49bf-06c0-4afe-841d-9180c456f4c7" Jul 7 00:42:58.488856 containerd[1927]: time="2025-07-07T00:42:58.488829203Z" level=error msg="Failed to destroy network for sandbox \"15b1c0755bebd3532cd77d9a76ff63a0df29e65083ffb0aba97e4970eaefe8e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.489247 containerd[1927]: time="2025-07-07T00:42:58.489230677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-757b8,Uid:8a042587-3358-4c70-a680-4a64a2dac953,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b1c0755bebd3532cd77d9a76ff63a0df29e65083ffb0aba97e4970eaefe8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.489413 kubelet[3285]: E0707 00:42:58.489390 3285 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b1c0755bebd3532cd77d9a76ff63a0df29e65083ffb0aba97e4970eaefe8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:42:58.489449 kubelet[3285]: E0707 00:42:58.489431 3285 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b1c0755bebd3532cd77d9a76ff63a0df29e65083ffb0aba97e4970eaefe8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-757b8" Jul 7 00:42:58.489449 kubelet[3285]: E0707 00:42:58.489445 3285 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b1c0755bebd3532cd77d9a76ff63a0df29e65083ffb0aba97e4970eaefe8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-757b8" Jul 7 00:42:58.489494 kubelet[3285]: E0707 00:42:58.489482 3285 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-757b8_kube-system(8a042587-3358-4c70-a680-4a64a2dac953)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-757b8_kube-system(8a042587-3358-4c70-a680-4a64a2dac953)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15b1c0755bebd3532cd77d9a76ff63a0df29e65083ffb0aba97e4970eaefe8e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-757b8" podUID="8a042587-3358-4c70-a680-4a64a2dac953" Jul 7 00:42:58.956594 containerd[1927]: time="2025-07-07T00:42:58.956523602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:43:02.302341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401560851.mount: Deactivated successfully. Jul 7 00:43:02.323615 containerd[1927]: time="2025-07-07T00:43:02.323566291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:02.323754 containerd[1927]: time="2025-07-07T00:43:02.323681328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:43:02.324087 containerd[1927]: time="2025-07-07T00:43:02.324046908Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:02.324843 containerd[1927]: time="2025-07-07T00:43:02.324802533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:02.325444 containerd[1927]: time="2025-07-07T00:43:02.325404271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.368820339s" Jul 7 00:43:02.325444 containerd[1927]: time="2025-07-07T00:43:02.325418588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:43:02.330489 containerd[1927]: time="2025-07-07T00:43:02.330467289Z" level=info msg="CreateContainer within sandbox \"0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:43:02.333977 containerd[1927]: 
time="2025-07-07T00:43:02.333935422Z" level=info msg="Container 04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:02.338665 containerd[1927]: time="2025-07-07T00:43:02.338625283Z" level=info msg="CreateContainer within sandbox \"0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\"" Jul 7 00:43:02.338857 containerd[1927]: time="2025-07-07T00:43:02.338821095Z" level=info msg="StartContainer for \"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\"" Jul 7 00:43:02.339658 containerd[1927]: time="2025-07-07T00:43:02.339613470Z" level=info msg="connecting to shim 04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be" address="unix:///run/containerd/s/586d6a75f354781624b09e046dec70fd606099a8a992963f4bb89cfe11f2cb10" protocol=ttrpc version=3 Jul 7 00:43:02.356442 systemd[1]: Started cri-containerd-04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be.scope - libcontainer container 04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be. Jul 7 00:43:02.378300 containerd[1927]: time="2025-07-07T00:43:02.378249993Z" level=info msg="StartContainer for \"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" returns successfully" Jul 7 00:43:02.438147 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:43:02.438203 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 00:43:02.541331 kubelet[3285]: I0707 00:43:02.541308 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjd8c\" (UniqueName: \"kubernetes.io/projected/dff3c464-6ef5-42de-9c4e-b3e314d4259b-kube-api-access-sjd8c\") pod \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\" (UID: \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\") " Jul 7 00:43:02.541535 kubelet[3285]: I0707 00:43:02.541340 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dff3c464-6ef5-42de-9c4e-b3e314d4259b-whisker-backend-key-pair\") pod \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\" (UID: \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\") " Jul 7 00:43:02.541535 kubelet[3285]: I0707 00:43:02.541360 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dff3c464-6ef5-42de-9c4e-b3e314d4259b-whisker-ca-bundle\") pod \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\" (UID: \"dff3c464-6ef5-42de-9c4e-b3e314d4259b\") " Jul 7 00:43:02.541582 kubelet[3285]: I0707 00:43:02.541549 3285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff3c464-6ef5-42de-9c4e-b3e314d4259b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dff3c464-6ef5-42de-9c4e-b3e314d4259b" (UID: "dff3c464-6ef5-42de-9c4e-b3e314d4259b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:43:02.542820 kubelet[3285]: I0707 00:43:02.542775 3285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff3c464-6ef5-42de-9c4e-b3e314d4259b-kube-api-access-sjd8c" (OuterVolumeSpecName: "kube-api-access-sjd8c") pod "dff3c464-6ef5-42de-9c4e-b3e314d4259b" (UID: "dff3c464-6ef5-42de-9c4e-b3e314d4259b"). 
InnerVolumeSpecName "kube-api-access-sjd8c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:43:02.542820 kubelet[3285]: I0707 00:43:02.542782 3285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff3c464-6ef5-42de-9c4e-b3e314d4259b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dff3c464-6ef5-42de-9c4e-b3e314d4259b" (UID: "dff3c464-6ef5-42de-9c4e-b3e314d4259b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:43:02.642702 kubelet[3285]: I0707 00:43:02.642511 3285 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dff3c464-6ef5-42de-9c4e-b3e314d4259b-whisker-ca-bundle\") on node \"ci-4344.1.1-a-7d9f698c61\" DevicePath \"\"" Jul 7 00:43:02.642702 kubelet[3285]: I0707 00:43:02.642564 3285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjd8c\" (UniqueName: \"kubernetes.io/projected/dff3c464-6ef5-42de-9c4e-b3e314d4259b-kube-api-access-sjd8c\") on node \"ci-4344.1.1-a-7d9f698c61\" DevicePath \"\"" Jul 7 00:43:02.642702 kubelet[3285]: I0707 00:43:02.642593 3285 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dff3c464-6ef5-42de-9c4e-b3e314d4259b-whisker-backend-key-pair\") on node \"ci-4344.1.1-a-7d9f698c61\" DevicePath \"\"" Jul 7 00:43:02.980461 systemd[1]: Removed slice kubepods-besteffort-poddff3c464_6ef5_42de_9c4e_b3e314d4259b.slice - libcontainer container kubepods-besteffort-poddff3c464_6ef5_42de_9c4e_b3e314d4259b.slice. Jul 7 00:43:02.985697 kubelet[3285]: I0707 00:43:02.985667 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dxcr6" podStartSLOduration=1.078812723 podStartE2EDuration="12.985656975s" podCreationTimestamp="2025-07-07 00:42:50 +0000 UTC" firstStartedPulling="2025-07-07 00:42:50.418909259 +0000 UTC m=+14.597954368" lastFinishedPulling="2025-07-07 00:43:02.325753507 +0000 UTC m=+26.504798620" observedRunningTime="2025-07-07 00:43:02.985449864 +0000 UTC m=+27.164494977" watchObservedRunningTime="2025-07-07 00:43:02.985656975 +0000 UTC m=+27.164702085" Jul 7 00:43:03.005600 systemd[1]: Created slice kubepods-besteffort-pode139211c_620b_46bf_8d47_8658cb181b27.slice - libcontainer container kubepods-besteffort-pode139211c_620b_46bf_8d47_8658cb181b27.slice. 
Jul 7 00:43:03.045703 kubelet[3285]: I0707 00:43:03.045582 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e139211c-620b-46bf-8d47-8658cb181b27-whisker-backend-key-pair\") pod \"whisker-645bd6bf95-xcpdg\" (UID: \"e139211c-620b-46bf-8d47-8658cb181b27\") " pod="calico-system/whisker-645bd6bf95-xcpdg" Jul 7 00:43:03.045703 kubelet[3285]: I0707 00:43:03.045710 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e139211c-620b-46bf-8d47-8658cb181b27-whisker-ca-bundle\") pod \"whisker-645bd6bf95-xcpdg\" (UID: \"e139211c-620b-46bf-8d47-8658cb181b27\") " pod="calico-system/whisker-645bd6bf95-xcpdg" Jul 7 00:43:03.046068 kubelet[3285]: I0707 00:43:03.045799 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96fg\" (UniqueName: \"kubernetes.io/projected/e139211c-620b-46bf-8d47-8658cb181b27-kube-api-access-f96fg\") pod \"whisker-645bd6bf95-xcpdg\" (UID: \"e139211c-620b-46bf-8d47-8658cb181b27\") " pod="calico-system/whisker-645bd6bf95-xcpdg" Jul 7 00:43:03.308046 containerd[1927]: time="2025-07-07T00:43:03.307996805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645bd6bf95-xcpdg,Uid:e139211c-620b-46bf-8d47-8658cb181b27,Namespace:calico-system,Attempt:0,}" Jul 7 00:43:03.309322 systemd[1]: var-lib-kubelet-pods-dff3c464\x2d6ef5\x2d42de\x2d9c4e\x2db3e314d4259b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjd8c.mount: Deactivated successfully. Jul 7 00:43:03.309417 systemd[1]: var-lib-kubelet-pods-dff3c464\x2d6ef5\x2d42de\x2d9c4e\x2db3e314d4259b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
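The var-lib-kubelet-pods and run-netns-cni mount units being deactivated in these lines are ordinary filesystem paths in systemd's escaped unit-name form: '/' separators become '-', and bytes that are not alphanumeric, '_' or '.' are written as \xNN, which is why every literal '-' inside a pod UID or CNI netns ID shows up as \x2d (and '~' as \x7e). A rough approximation of that escaping, offered only as an illustration (the authoritative rules are what systemd-escape --path implements):

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates how systemd turns a path into a mount unit name: strip
// the surrounding '/', turn remaining '/' separators into '-', and hex-escape
// every byte outside [A-Za-z0-9_.] as \xNN (so '-' becomes \x2d, '~' becomes \x7e).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c == '_' || c == '.' ||
			(c >= '0' && c <= '9') || (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z'):
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the shape of the netns mount unit name seen earlier in the log.
	fmt.Println(escapePath("/run/netns/cni-952b43d6-7713-fb6f-4aea-fcf111b36258") + ".mount")
	// -> run-netns-cni\x2d952b43d6\x2d7713\x2dfb6f\x2d4aea\x2dfcf111b36258.mount
}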
Jul 7 00:43:03.376279 systemd-networkd[1844]: calie9ec5965d21: Link UP Jul 7 00:43:03.376710 systemd-networkd[1844]: calie9ec5965d21: Gained carrier Jul 7 00:43:03.390759 containerd[1927]: 2025-07-07 00:43:03.319 [INFO][4644] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:43:03.390759 containerd[1927]: 2025-07-07 00:43:03.327 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0 whisker-645bd6bf95- calico-system e139211c-620b-46bf-8d47-8658cb181b27 856 0 2025-07-07 00:43:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:645bd6bf95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.1-a-7d9f698c61 whisker-645bd6bf95-xcpdg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie9ec5965d21 [] [] }} ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Namespace="calico-system" Pod="whisker-645bd6bf95-xcpdg" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-" Jul 7 00:43:03.390759 containerd[1927]: 2025-07-07 00:43:03.327 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Namespace="calico-system" Pod="whisker-645bd6bf95-xcpdg" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" Jul 7 00:43:03.390759 containerd[1927]: 2025-07-07 00:43:03.341 [INFO][4666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" HandleID="k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Workload="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.341 [INFO][4666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" HandleID="k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Workload="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ce250), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-7d9f698c61", "pod":"whisker-645bd6bf95-xcpdg", "timestamp":"2025-07-07 00:43:03.341427841 +0000 UTC"}, Hostname:"ci-4344.1.1-a-7d9f698c61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.341 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.341 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.341 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-7d9f698c61' Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.347 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.351 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.355 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.356 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.391678 containerd[1927]: 2025-07-07 00:43:03.358 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.392240 containerd[1927]: 2025-07-07 00:43:03.358 [INFO][4666] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.128/26 handle="k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.392240 containerd[1927]: 2025-07-07 00:43:03.359 [INFO][4666] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e Jul 7 00:43:03.392240 containerd[1927]: 2025-07-07 00:43:03.361 [INFO][4666] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.128/26 handle="k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.392240 containerd[1927]: 2025-07-07 00:43:03.365 [INFO][4666] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.129/26] block=192.168.108.128/26 handle="k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.392240 containerd[1927]: 2025-07-07 00:43:03.365 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.129/26] handle="k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:03.392240 containerd[1927]: 2025-07-07 00:43:03.365 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:43:03.392240 containerd[1927]: 2025-07-07 00:43:03.365 [INFO][4666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.129/26] IPv6=[] ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" HandleID="k8s-pod-network.26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Workload="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" Jul 7 00:43:03.392604 containerd[1927]: 2025-07-07 00:43:03.368 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Namespace="calico-system" Pod="whisker-645bd6bf95-xcpdg" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0", GenerateName:"whisker-645bd6bf95-", Namespace:"calico-system", SelfLink:"", UID:"e139211c-620b-46bf-8d47-8658cb181b27", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"645bd6bf95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"", Pod:"whisker-645bd6bf95-xcpdg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9ec5965d21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:03.392604 containerd[1927]: 2025-07-07 00:43:03.368 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.129/32] ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Namespace="calico-system" Pod="whisker-645bd6bf95-xcpdg" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" Jul 7 00:43:03.392802 containerd[1927]: 2025-07-07 00:43:03.368 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9ec5965d21 ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Namespace="calico-system" Pod="whisker-645bd6bf95-xcpdg" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" Jul 7 00:43:03.392802 containerd[1927]: 2025-07-07 00:43:03.376 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Namespace="calico-system" Pod="whisker-645bd6bf95-xcpdg" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" Jul 7 00:43:03.392917 containerd[1927]: 2025-07-07 00:43:03.377 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Namespace="calico-system" 
Pod="whisker-645bd6bf95-xcpdg" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0", GenerateName:"whisker-645bd6bf95-", Namespace:"calico-system", SelfLink:"", UID:"e139211c-620b-46bf-8d47-8658cb181b27", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"645bd6bf95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e", Pod:"whisker-645bd6bf95-xcpdg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie9ec5965d21", MAC:"76:3f:6c:6a:d6:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:03.393044 containerd[1927]: 2025-07-07 00:43:03.387 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" Namespace="calico-system" Pod="whisker-645bd6bf95-xcpdg" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-whisker--645bd6bf95--xcpdg-eth0" Jul 7 00:43:03.418186 containerd[1927]: time="2025-07-07T00:43:03.418133782Z" level=info msg="connecting to shim 26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e" address="unix:///run/containerd/s/a9f6edd08680092f4b7dddc861e182d254150e07e89d37cae6454831f1c588c0" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:43:03.435455 systemd[1]: Started cri-containerd-26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e.scope - libcontainer container 26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e. 
Jul 7 00:43:03.464852 containerd[1927]: time="2025-07-07T00:43:03.464831126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645bd6bf95-xcpdg,Uid:e139211c-620b-46bf-8d47-8658cb181b27,Namespace:calico-system,Attempt:0,} returns sandbox id \"26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e\"" Jul 7 00:43:03.465473 containerd[1927]: time="2025-07-07T00:43:03.465435385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:43:03.871423 kubelet[3285]: I0707 00:43:03.871350 3285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff3c464-6ef5-42de-9c4e-b3e314d4259b" path="/var/lib/kubelet/pods/dff3c464-6ef5-42de-9c4e-b3e314d4259b/volumes" Jul 7 00:43:03.978689 kubelet[3285]: I0707 00:43:03.978644 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:04.625281 systemd-networkd[1844]: calie9ec5965d21: Gained IPv6LL Jul 7 00:43:05.017291 containerd[1927]: time="2025-07-07T00:43:05.017238826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:05.017493 containerd[1927]: time="2025-07-07T00:43:05.017440376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:43:05.017850 containerd[1927]: time="2025-07-07T00:43:05.017811370Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:05.018701 containerd[1927]: time="2025-07-07T00:43:05.018657490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:05.019082 containerd[1927]: time="2025-07-07T00:43:05.019041328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.553591202s" Jul 7 00:43:05.019082 containerd[1927]: time="2025-07-07T00:43:05.019056042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:43:05.020833 containerd[1927]: time="2025-07-07T00:43:05.020823588Z" level=info msg="CreateContainer within sandbox \"26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:43:05.023823 containerd[1927]: time="2025-07-07T00:43:05.023809652Z" level=info msg="Container 79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:05.027439 containerd[1927]: time="2025-07-07T00:43:05.027399572Z" level=info msg="CreateContainer within sandbox \"26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05\"" Jul 7 00:43:05.027697 containerd[1927]: time="2025-07-07T00:43:05.027659682Z" level=info msg="StartContainer for 
\"79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05\"" Jul 7 00:43:05.028460 containerd[1927]: time="2025-07-07T00:43:05.028417836Z" level=info msg="connecting to shim 79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05" address="unix:///run/containerd/s/a9f6edd08680092f4b7dddc861e182d254150e07e89d37cae6454831f1c588c0" protocol=ttrpc version=3 Jul 7 00:43:05.049538 systemd[1]: Started cri-containerd-79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05.scope - libcontainer container 79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05. Jul 7 00:43:05.139503 containerd[1927]: time="2025-07-07T00:43:05.139442531Z" level=info msg="StartContainer for \"79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05\" returns successfully" Jul 7 00:43:05.140026 containerd[1927]: time="2025-07-07T00:43:05.140011071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:43:07.356074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420381414.mount: Deactivated successfully. Jul 7 00:43:07.360322 containerd[1927]: time="2025-07-07T00:43:07.360285490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:07.360483 containerd[1927]: time="2025-07-07T00:43:07.360458394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:43:07.360898 containerd[1927]: time="2025-07-07T00:43:07.360858207Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:07.361734 containerd[1927]: time="2025-07-07T00:43:07.361720005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:07.362137 containerd[1927]: time="2025-07-07T00:43:07.362097270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.222066233s" Jul 7 00:43:07.362137 containerd[1927]: time="2025-07-07T00:43:07.362113788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:43:07.363512 containerd[1927]: time="2025-07-07T00:43:07.363472498Z" level=info msg="CreateContainer within sandbox \"26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:43:07.366459 containerd[1927]: time="2025-07-07T00:43:07.366443617Z" level=info msg="Container 6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:07.369395 containerd[1927]: time="2025-07-07T00:43:07.369354296Z" level=info msg="CreateContainer within sandbox \"26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074\"" Jul 7 00:43:07.369590 containerd[1927]: time="2025-07-07T00:43:07.369577244Z" level=info msg="StartContainer for \"6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074\"" Jul 7 00:43:07.370147 containerd[1927]: time="2025-07-07T00:43:07.370134234Z" level=info msg="connecting to shim 6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074" address="unix:///run/containerd/s/a9f6edd08680092f4b7dddc861e182d254150e07e89d37cae6454831f1c588c0" protocol=ttrpc version=3 Jul 7 00:43:07.389279 systemd[1]: Started cri-containerd-6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074.scope - libcontainer container 6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074. Jul 7 00:43:07.420631 containerd[1927]: time="2025-07-07T00:43:07.420608559Z" level=info msg="StartContainer for \"6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074\" returns successfully" Jul 7 00:43:08.011041 kubelet[3285]: I0707 00:43:08.010923 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-645bd6bf95-xcpdg" podStartSLOduration=2.113785051 podStartE2EDuration="6.01088428s" podCreationTimestamp="2025-07-07 00:43:02 +0000 UTC" firstStartedPulling="2025-07-07 00:43:03.465340787 +0000 UTC m=+27.644385898" lastFinishedPulling="2025-07-07 00:43:07.362440017 +0000 UTC m=+31.541485127" observedRunningTime="2025-07-07 00:43:08.009589201 +0000 UTC m=+32.188634378" watchObservedRunningTime="2025-07-07 00:43:08.01088428 +0000 UTC m=+32.189929439" Jul 7 00:43:08.560626 kubelet[3285]: I0707 00:43:08.560516 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:08.614864 containerd[1927]: time="2025-07-07T00:43:08.614842691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"5a4bc7203278a4f92f19c8c66da59a1fd8a4ce29538e082a207e05aa20bd340e\" pid:5185 exit_status:1 exited_at:{seconds:1751848988 nanos:614575240}" Jul 7 00:43:08.664034 containerd[1927]: time="2025-07-07T00:43:08.664012378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"165527e8f17b310a50e519197e7d9db2ecf9b25579fc799dc6210c067d57d05d\" pid:5222 exit_status:1 exited_at:{seconds:1751848988 nanos:663857386}" Jul 7 00:43:08.817297 kubelet[3285]: I0707 00:43:08.817194 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:08.866693 containerd[1927]: time="2025-07-07T00:43:08.866606148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668b8d6659-jwwj5,Uid:523f9ddf-3435-4ff2-8804-18d2521954b7,Namespace:calico-system,Attempt:0,}" Jul 7 00:43:08.919421 systemd-networkd[1844]: cali64a5904b0af: Link UP Jul 7 00:43:08.919560 systemd-networkd[1844]: cali64a5904b0af: Gained carrier Jul 7 00:43:08.925914 containerd[1927]: 2025-07-07 00:43:08.878 [INFO][5291] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:43:08.925914 containerd[1927]: 2025-07-07 00:43:08.884 [INFO][5291] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0 calico-kube-controllers-668b8d6659- calico-system 523f9ddf-3435-4ff2-8804-18d2521954b7 798 0 2025-07-07 00:42:50 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:668b8d6659 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.1-a-7d9f698c61 calico-kube-controllers-668b8d6659-jwwj5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali64a5904b0af [] [] }} ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Namespace="calico-system" Pod="calico-kube-controllers-668b8d6659-jwwj5" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-" Jul 7 00:43:08.925914 containerd[1927]: 2025-07-07 00:43:08.884 [INFO][5291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Namespace="calico-system" Pod="calico-kube-controllers-668b8d6659-jwwj5" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" Jul 7 00:43:08.925914 containerd[1927]: 2025-07-07 00:43:08.897 [INFO][5314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" HandleID="k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.897 [INFO][5314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" HandleID="k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-7d9f698c61", "pod":"calico-kube-controllers-668b8d6659-jwwj5", "timestamp":"2025-07-07 00:43:08.897233968 +0000 UTC"}, Hostname:"ci-4344.1.1-a-7d9f698c61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.897 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.897 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.897 [INFO][5314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-7d9f698c61' Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.902 [INFO][5314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.906 [INFO][5314] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.910 [INFO][5314] ipam/ipam.go 511: Trying affinity for 192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.911 [INFO][5314] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926084 containerd[1927]: 2025-07-07 00:43:08.912 [INFO][5314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926297 containerd[1927]: 2025-07-07 00:43:08.912 [INFO][5314] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.128/26 handle="k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926297 containerd[1927]: 2025-07-07 00:43:08.913 [INFO][5314] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec Jul 7 00:43:08.926297 containerd[1927]: 2025-07-07 00:43:08.914 [INFO][5314] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.128/26 handle="k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926297 containerd[1927]: 2025-07-07 00:43:08.917 [INFO][5314] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.130/26] block=192.168.108.128/26 handle="k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926297 containerd[1927]: 2025-07-07 00:43:08.917 [INFO][5314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.130/26] handle="k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:08.926297 containerd[1927]: 2025-07-07 00:43:08.917 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
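[Annotation] The IPAM entries above show Calico trying the block affinity 192.168.108.128/26 for this host and claiming 192.168.108.130 from it for calico-kube-controllers-668b8d6659-jwwj5. Below is a minimal sketch of the block arithmetic visible in these entries, using only Go's net/netip; it is an illustration of the /26 math, not Calico's actual allocator.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affine block logged above: 64 addresses, 192.168.108.128 .. 192.168.108.191.
	block := netip.MustParsePrefix("192.168.108.128/26")

	// The address claimed for calico-kube-controllers-668b8d6659-jwwj5.
	claimed := netip.MustParseAddr("192.168.108.130")

	fmt.Println("block contains claimed address:", block.Contains(claimed)) // true

	// Walk the block the way a sequential allocator would, starting after the
	// block's first address; .129 already went to an earlier endpoint, so
	// .130 is the next free slot.
	for addr := block.Addr().Next(); block.Contains(addr); addr = addr.Next() {
		fmt.Println("candidate:", addr)
		if addr == claimed {
			break
		}
	}
}
```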
Jul 7 00:43:08.926297 containerd[1927]: 2025-07-07 00:43:08.917 [INFO][5314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.130/26] IPv6=[] ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" HandleID="k8s-pod-network.70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" Jul 7 00:43:08.926437 containerd[1927]: 2025-07-07 00:43:08.918 [INFO][5291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Namespace="calico-system" Pod="calico-kube-controllers-668b8d6659-jwwj5" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0", GenerateName:"calico-kube-controllers-668b8d6659-", Namespace:"calico-system", SelfLink:"", UID:"523f9ddf-3435-4ff2-8804-18d2521954b7", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668b8d6659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"", Pod:"calico-kube-controllers-668b8d6659-jwwj5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali64a5904b0af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:08.926490 containerd[1927]: 2025-07-07 00:43:08.918 [INFO][5291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.130/32] ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Namespace="calico-system" Pod="calico-kube-controllers-668b8d6659-jwwj5" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" Jul 7 00:43:08.926490 containerd[1927]: 2025-07-07 00:43:08.918 [INFO][5291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64a5904b0af ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Namespace="calico-system" Pod="calico-kube-controllers-668b8d6659-jwwj5" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" Jul 7 00:43:08.926490 containerd[1927]: 2025-07-07 00:43:08.919 [INFO][5291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Namespace="calico-system" Pod="calico-kube-controllers-668b8d6659-jwwj5" 
WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" Jul 7 00:43:08.926553 containerd[1927]: 2025-07-07 00:43:08.919 [INFO][5291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Namespace="calico-system" Pod="calico-kube-controllers-668b8d6659-jwwj5" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0", GenerateName:"calico-kube-controllers-668b8d6659-", Namespace:"calico-system", SelfLink:"", UID:"523f9ddf-3435-4ff2-8804-18d2521954b7", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"668b8d6659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec", Pod:"calico-kube-controllers-668b8d6659-jwwj5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali64a5904b0af", MAC:"46:82:1b:22:ef:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:08.926603 containerd[1927]: 2025-07-07 00:43:08.924 [INFO][5291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" Namespace="calico-system" Pod="calico-kube-controllers-668b8d6659-jwwj5" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--kube--controllers--668b8d6659--jwwj5-eth0" Jul 7 00:43:08.934409 containerd[1927]: time="2025-07-07T00:43:08.934386660Z" level=info msg="connecting to shim 70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec" address="unix:///run/containerd/s/577f08b3f0d105011d04600175e3ef1933af226430b6676fca5dfeb8c8ec2bf4" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:43:08.963358 systemd[1]: Started cri-containerd-70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec.scope - libcontainer container 70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec. 
Jul 7 00:43:09.002922 containerd[1927]: time="2025-07-07T00:43:09.002901756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-668b8d6659-jwwj5,Uid:523f9ddf-3435-4ff2-8804-18d2521954b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec\"" Jul 7 00:43:09.003572 containerd[1927]: time="2025-07-07T00:43:09.003533023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:43:09.865645 containerd[1927]: time="2025-07-07T00:43:09.865621667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd87677cd-smbqf,Uid:d360f177-d589-4eae-9ea1-90a4124152c3,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:43:09.865892 containerd[1927]: time="2025-07-07T00:43:09.865621696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xgnbs,Uid:7d88211e-caac-49a1-a5e3-5c00be80871c,Namespace:calico-system,Attempt:0,}" Jul 7 00:43:09.927580 systemd-networkd[1844]: cali96a8d595e2e: Link UP Jul 7 00:43:09.927741 systemd-networkd[1844]: cali96a8d595e2e: Gained carrier Jul 7 00:43:09.948922 containerd[1927]: 2025-07-07 00:43:09.895 [INFO][5468] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0 goldmane-768f4c5c69- calico-system 7d88211e-caac-49a1-a5e3-5c00be80871c 797 0 2025-07-07 00:42:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.1-a-7d9f698c61 goldmane-768f4c5c69-xgnbs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali96a8d595e2e [] [] }} ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Namespace="calico-system" Pod="goldmane-768f4c5c69-xgnbs" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-" Jul 7 00:43:09.948922 containerd[1927]: 2025-07-07 00:43:09.895 [INFO][5468] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Namespace="calico-system" Pod="goldmane-768f4c5c69-xgnbs" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" Jul 7 00:43:09.948922 containerd[1927]: 2025-07-07 00:43:09.907 [INFO][5532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" HandleID="k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Workload="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.907 [INFO][5532] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" HandleID="k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Workload="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005976d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-7d9f698c61", "pod":"goldmane-768f4c5c69-xgnbs", "timestamp":"2025-07-07 00:43:09.907030787 +0000 UTC"}, Hostname:"ci-4344.1.1-a-7d9f698c61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.907 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.907 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.907 [INFO][5532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-7d9f698c61' Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.912 [INFO][5532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.915 [INFO][5532] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.918 [INFO][5532] ipam/ipam.go 511: Trying affinity for 192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.920 [INFO][5532] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949046 containerd[1927]: 2025-07-07 00:43:09.921 [INFO][5532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949214 containerd[1927]: 2025-07-07 00:43:09.921 [INFO][5532] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.128/26 handle="k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949214 containerd[1927]: 2025-07-07 00:43:09.922 [INFO][5532] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3 Jul 7 00:43:09.949214 containerd[1927]: 2025-07-07 00:43:09.924 [INFO][5532] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.128/26 handle="k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949214 containerd[1927]: 2025-07-07 00:43:09.925 [INFO][5532] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.131/26] block=192.168.108.128/26 handle="k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949214 containerd[1927]: 2025-07-07 00:43:09.925 [INFO][5532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.131/26] handle="k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:09.949214 containerd[1927]: 2025-07-07 00:43:09.925 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
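[Annotation] The MAC that Calico attached to cali64a5904b0af earlier (46:82:1b:22:ef:f0), like the ones that follow for the other endpoints, has the locally-administered bit set and the multicast bit clear, i.e. it is a randomly generated address rather than a vendor-assigned one. A small check of those two bits with the standard library; this illustrates the addressing convention, not Calico's own MAC-selection code.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// MACs Calico reported for veth endpoints in these entries.
	for _, s := range []string{"46:82:1b:22:ef:f0", "3e:89:c1:34:d1:f1"} {
		mac, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		local := mac[0]&0x02 != 0     // locally-administered bit
		multicast := mac[0]&0x01 != 0 // multicast bit
		fmt.Printf("%s locally-administered=%t multicast=%t\n", mac, local, multicast)
	}
}
```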
Jul 7 00:43:09.949214 containerd[1927]: 2025-07-07 00:43:09.925 [INFO][5532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.131/26] IPv6=[] ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" HandleID="k8s-pod-network.4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Workload="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" Jul 7 00:43:09.949318 containerd[1927]: 2025-07-07 00:43:09.926 [INFO][5468] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Namespace="calico-system" Pod="goldmane-768f4c5c69-xgnbs" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7d88211e-caac-49a1-a5e3-5c00be80871c", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"", Pod:"goldmane-768f4c5c69-xgnbs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a8d595e2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:09.949318 containerd[1927]: 2025-07-07 00:43:09.926 [INFO][5468] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.131/32] ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Namespace="calico-system" Pod="goldmane-768f4c5c69-xgnbs" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" Jul 7 00:43:09.949375 containerd[1927]: 2025-07-07 00:43:09.926 [INFO][5468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96a8d595e2e ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Namespace="calico-system" Pod="goldmane-768f4c5c69-xgnbs" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" Jul 7 00:43:09.949375 containerd[1927]: 2025-07-07 00:43:09.927 [INFO][5468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Namespace="calico-system" Pod="goldmane-768f4c5c69-xgnbs" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" Jul 7 00:43:09.949412 containerd[1927]: 2025-07-07 00:43:09.927 [INFO][5468] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-xgnbs" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7d88211e-caac-49a1-a5e3-5c00be80871c", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3", Pod:"goldmane-768f4c5c69-xgnbs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a8d595e2e", MAC:"3e:89:c1:34:d1:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:09.949450 containerd[1927]: 2025-07-07 00:43:09.947 [INFO][5468] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" Namespace="calico-system" Pod="goldmane-768f4c5c69-xgnbs" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-goldmane--768f4c5c69--xgnbs-eth0" Jul 7 00:43:09.956540 containerd[1927]: time="2025-07-07T00:43:09.956518418Z" level=info msg="connecting to shim 4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3" address="unix:///run/containerd/s/604a0b76bb1bd0c0ed4bb538dfa708f5b7f40af26154e790d963cd95dad01559" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:43:09.977290 systemd[1]: Started cri-containerd-4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3.scope - libcontainer container 4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3. 
Jul 7 00:43:10.002744 containerd[1927]: time="2025-07-07T00:43:10.002721805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xgnbs,Uid:7d88211e-caac-49a1-a5e3-5c00be80871c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3\"" Jul 7 00:43:10.026613 systemd-networkd[1844]: cali2edfa2af41b: Link UP Jul 7 00:43:10.026747 systemd-networkd[1844]: cali2edfa2af41b: Gained carrier Jul 7 00:43:10.031619 containerd[1927]: 2025-07-07 00:43:09.894 [INFO][5462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0 calico-apiserver-6cd87677cd- calico-apiserver d360f177-d589-4eae-9ea1-90a4124152c3 794 0 2025-07-07 00:42:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd87677cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-a-7d9f698c61 calico-apiserver-6cd87677cd-smbqf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2edfa2af41b [] [] }} ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-smbqf" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-" Jul 7 00:43:10.031619 containerd[1927]: 2025-07-07 00:43:09.894 [INFO][5462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-smbqf" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" Jul 7 00:43:10.031619 containerd[1927]: 2025-07-07 00:43:09.907 [INFO][5530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" HandleID="k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:09.907 [INFO][5530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" HandleID="k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-a-7d9f698c61", "pod":"calico-apiserver-6cd87677cd-smbqf", "timestamp":"2025-07-07 00:43:09.907030229 +0000 UTC"}, Hostname:"ci-4344.1.1-a-7d9f698c61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:09.907 [INFO][5530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:09.925 [INFO][5530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:09.926 [INFO][5530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-7d9f698c61' Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:10.012 [INFO][5530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:10.015 [INFO][5530] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:10.018 [INFO][5530] ipam/ipam.go 511: Trying affinity for 192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:10.019 [INFO][5530] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031740 containerd[1927]: 2025-07-07 00:43:10.020 [INFO][5530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031894 containerd[1927]: 2025-07-07 00:43:10.020 [INFO][5530] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.128/26 handle="k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031894 containerd[1927]: 2025-07-07 00:43:10.021 [INFO][5530] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488 Jul 7 00:43:10.031894 containerd[1927]: 2025-07-07 00:43:10.022 [INFO][5530] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.128/26 handle="k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031894 containerd[1927]: 2025-07-07 00:43:10.024 [INFO][5530] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.132/26] block=192.168.108.128/26 handle="k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031894 containerd[1927]: 2025-07-07 00:43:10.024 [INFO][5530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.132/26] handle="k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:10.031894 containerd[1927]: 2025-07-07 00:43:10.024 [INFO][5530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
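[Annotation] Two CNI ADDs run almost concurrently here ([5532] for goldmane and [5530] for calico-apiserver), and the timestamps show the second waiting on the host-wide IPAM lock until the first releases it, so addresses from the shared block are handed out one at a time. The sketch below is a bare-bones, in-memory model of that mutex-guarded allocation pattern; Calico's real lock is backed by its datastore, so this is only an illustration.

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator hands out addresses from one affine block, one caller at a time.
// Toy model of the "host-wide IPAM lock" behaviour seen in the log, not Calico code.
type blockAllocator struct {
	mu    sync.Mutex
	block netip.Prefix
	next  netip.Addr
}

func newBlockAllocator(block netip.Prefix) *blockAllocator {
	return &blockAllocator{block: block, next: block.Addr().Next()}
}

func (a *blockAllocator) assign() (netip.Addr, bool) {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	if !a.block.Contains(a.next) {
		return netip.Addr{}, false
	}
	addr := a.next
	a.next = a.next.Next()
	return addr, true
}

func main() {
	alloc := newBlockAllocator(netip.MustParsePrefix("192.168.108.128/26"))
	var wg sync.WaitGroup
	for _, pod := range []string{"goldmane-768f4c5c69-xgnbs", "calico-apiserver-6cd87677cd-smbqf"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			if addr, ok := alloc.assign(); ok {
				fmt.Println(pod, "->", addr)
			}
		}(pod)
	}
	wg.Wait()
}
```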
Jul 7 00:43:10.031894 containerd[1927]: 2025-07-07 00:43:10.024 [INFO][5530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.132/26] IPv6=[] ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" HandleID="k8s-pod-network.9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" Jul 7 00:43:10.032000 containerd[1927]: 2025-07-07 00:43:10.025 [INFO][5462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-smbqf" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0", GenerateName:"calico-apiserver-6cd87677cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d360f177-d589-4eae-9ea1-90a4124152c3", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd87677cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"", Pod:"calico-apiserver-6cd87677cd-smbqf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2edfa2af41b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:10.032037 containerd[1927]: 2025-07-07 00:43:10.025 [INFO][5462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.132/32] ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-smbqf" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" Jul 7 00:43:10.032037 containerd[1927]: 2025-07-07 00:43:10.025 [INFO][5462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2edfa2af41b ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-smbqf" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" Jul 7 00:43:10.032037 containerd[1927]: 2025-07-07 00:43:10.026 [INFO][5462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-smbqf" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" Jul 7 00:43:10.032087 containerd[1927]: 2025-07-07 00:43:10.026 [INFO][5462] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-smbqf" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0", GenerateName:"calico-apiserver-6cd87677cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"d360f177-d589-4eae-9ea1-90a4124152c3", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd87677cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488", Pod:"calico-apiserver-6cd87677cd-smbqf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2edfa2af41b", MAC:"ca:59:7c:a5:fb:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:10.032122 containerd[1927]: 2025-07-07 00:43:10.030 [INFO][5462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-smbqf" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--smbqf-eth0" Jul 7 00:43:10.040549 containerd[1927]: time="2025-07-07T00:43:10.040519116Z" level=info msg="connecting to shim 9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488" address="unix:///run/containerd/s/86bcb540f95ecd7238ea00d636775a12707371229f1ff46fad927a1078914da7" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:43:10.058261 systemd[1]: Started cri-containerd-9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488.scope - libcontainer container 9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488. 
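[Annotation] The PullImage, CreateContainer, and StartContainer sequence that repeats through these entries is driven by the kubelet via containerd's CRI plugin, but the same steps can be reproduced against this containerd instance with its Go client. A rough sketch under the containerd 1.x client API follows (import paths differ for containerd 2.x); the container name is made up, and the image reference is the one pulled in the log.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Roughly the PullImage step in the entries above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Roughly the CreateContainer step ("whisker-demo" is a made-up ID).
	container, err := client.NewContainer(ctx, "whisker-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("whisker-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Roughly the StartContainer step: create a task on the shim and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```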
Jul 7 00:43:10.066979 systemd-networkd[1844]: vxlan.calico: Link UP Jul 7 00:43:10.066981 systemd-networkd[1844]: vxlan.calico: Gained carrier Jul 7 00:43:10.087027 containerd[1927]: time="2025-07-07T00:43:10.087007855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd87677cd-smbqf,Uid:d360f177-d589-4eae-9ea1-90a4124152c3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488\"" Jul 7 00:43:10.128202 systemd-networkd[1844]: cali64a5904b0af: Gained IPv6LL Jul 7 00:43:11.728231 systemd-networkd[1844]: cali96a8d595e2e: Gained IPv6LL Jul 7 00:43:11.728455 systemd-networkd[1844]: cali2edfa2af41b: Gained IPv6LL Jul 7 00:43:11.866277 containerd[1927]: time="2025-07-07T00:43:11.866226114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwbvc,Uid:0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816,Namespace:calico-system,Attempt:0,}" Jul 7 00:43:11.866491 containerd[1927]: time="2025-07-07T00:43:11.866226187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-757b8,Uid:8a042587-3358-4c70-a680-4a64a2dac953,Namespace:kube-system,Attempt:0,}" Jul 7 00:43:11.880249 containerd[1927]: time="2025-07-07T00:43:11.880216135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:11.880466 containerd[1927]: time="2025-07-07T00:43:11.880436393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:43:11.880918 containerd[1927]: time="2025-07-07T00:43:11.880903709Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:11.881764 containerd[1927]: time="2025-07-07T00:43:11.881749293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:11.882119 containerd[1927]: time="2025-07-07T00:43:11.882108675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.878560678s" Jul 7 00:43:11.882152 containerd[1927]: time="2025-07-07T00:43:11.882128123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:43:11.882631 containerd[1927]: time="2025-07-07T00:43:11.882620005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:43:11.886733 containerd[1927]: time="2025-07-07T00:43:11.886708631Z" level=info msg="CreateContainer within sandbox \"70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:43:11.889483 containerd[1927]: time="2025-07-07T00:43:11.889463563Z" level=info msg="Container a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:11.892669 
containerd[1927]: time="2025-07-07T00:43:11.892627201Z" level=info msg="CreateContainer within sandbox \"70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\"" Jul 7 00:43:11.892931 containerd[1927]: time="2025-07-07T00:43:11.892896185Z" level=info msg="StartContainer for \"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\"" Jul 7 00:43:11.893506 containerd[1927]: time="2025-07-07T00:43:11.893470992Z" level=info msg="connecting to shim a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4" address="unix:///run/containerd/s/577f08b3f0d105011d04600175e3ef1933af226430b6676fca5dfeb8c8ec2bf4" protocol=ttrpc version=3 Jul 7 00:43:11.910293 systemd[1]: Started cri-containerd-a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4.scope - libcontainer container a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4. Jul 7 00:43:11.932838 systemd-networkd[1844]: cali585c4c70e95: Link UP Jul 7 00:43:11.932968 systemd-networkd[1844]: cali585c4c70e95: Gained carrier Jul 7 00:43:11.938784 containerd[1927]: 2025-07-07 00:43:11.896 [INFO][5776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0 coredns-674b8bbfcf- kube-system 8a042587-3358-4c70-a680-4a64a2dac953 802 0 2025-07-07 00:42:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-a-7d9f698c61 coredns-674b8bbfcf-757b8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali585c4c70e95 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Namespace="kube-system" Pod="coredns-674b8bbfcf-757b8" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-" Jul 7 00:43:11.938784 containerd[1927]: 2025-07-07 00:43:11.896 [INFO][5776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Namespace="kube-system" Pod="coredns-674b8bbfcf-757b8" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" Jul 7 00:43:11.938784 containerd[1927]: 2025-07-07 00:43:11.908 [INFO][5832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" HandleID="k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Workload="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.908 [INFO][5832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" HandleID="k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Workload="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-a-7d9f698c61", "pod":"coredns-674b8bbfcf-757b8", "timestamp":"2025-07-07 00:43:11.908613776 +0000 UTC"}, Hostname:"ci-4344.1.1-a-7d9f698c61", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.908 [INFO][5832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.908 [INFO][5832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.908 [INFO][5832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-7d9f698c61' Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.914 [INFO][5832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.918 [INFO][5832] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.922 [INFO][5832] ipam/ipam.go 511: Trying affinity for 192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.923 [INFO][5832] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.938905 containerd[1927]: 2025-07-07 00:43:11.925 [INFO][5832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.939063 containerd[1927]: 2025-07-07 00:43:11.925 [INFO][5832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.128/26 handle="k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.939063 containerd[1927]: 2025-07-07 00:43:11.926 [INFO][5832] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527 Jul 7 00:43:11.939063 containerd[1927]: 2025-07-07 00:43:11.928 [INFO][5832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.128/26 handle="k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.939063 containerd[1927]: 2025-07-07 00:43:11.931 [INFO][5832] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.133/26] block=192.168.108.128/26 handle="k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.939063 containerd[1927]: 2025-07-07 00:43:11.931 [INFO][5832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.133/26] handle="k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:11.939063 containerd[1927]: 2025-07-07 00:43:11.931 [INFO][5832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
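[Annotation] The three image pulls in this stretch report their byte counts and durations (most recently 51,276,688 bytes read in 2.878560678s for kube-controllers), so rough transfer rates fall straight out of the log. A small calculation follows, treating "bytes read" as the approximate volume transferred over the registry connection.

```go
package main

import "fmt"

func main() {
	// Figures taken from the "stop pulling image" / "Pulled image" entries above.
	pulls := []struct {
		image   string
		bytes   float64
		seconds float64
	}{
		{"whisker:v3.30.2", 4661207, 1.553591202},
		{"whisker-backend:v3.30.2", 33083477, 2.222066233},
		{"kube-controllers:v3.30.2", 51276688, 2.878560678},
	}
	for _, p := range pulls {
		mib := p.bytes / (1024 * 1024)
		fmt.Printf("%-26s %6.1f MiB in %.2fs  ~= %5.1f MiB/s\n", p.image, mib, p.seconds, mib/p.seconds)
	}
}
```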
Jul 7 00:43:11.939063 containerd[1927]: 2025-07-07 00:43:11.931 [INFO][5832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.133/26] IPv6=[] ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" HandleID="k8s-pod-network.e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Workload="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" Jul 7 00:43:11.939180 containerd[1927]: 2025-07-07 00:43:11.932 [INFO][5776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Namespace="kube-system" Pod="coredns-674b8bbfcf-757b8" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a042587-3358-4c70-a680-4a64a2dac953", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"", Pod:"coredns-674b8bbfcf-757b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali585c4c70e95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:11.939180 containerd[1927]: 2025-07-07 00:43:11.932 [INFO][5776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.133/32] ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Namespace="kube-system" Pod="coredns-674b8bbfcf-757b8" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" Jul 7 00:43:11.939180 containerd[1927]: 2025-07-07 00:43:11.932 [INFO][5776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali585c4c70e95 ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Namespace="kube-system" Pod="coredns-674b8bbfcf-757b8" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" Jul 7 00:43:11.939180 containerd[1927]: 2025-07-07 00:43:11.932 [INFO][5776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-757b8" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" Jul 7 00:43:11.939180 containerd[1927]: 2025-07-07 00:43:11.933 [INFO][5776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Namespace="kube-system" Pod="coredns-674b8bbfcf-757b8" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a042587-3358-4c70-a680-4a64a2dac953", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527", Pod:"coredns-674b8bbfcf-757b8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali585c4c70e95", MAC:"9a:37:11:05:d9:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:11.939180 containerd[1927]: 2025-07-07 00:43:11.937 [INFO][5776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" Namespace="kube-system" Pod="coredns-674b8bbfcf-757b8" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--757b8-eth0" Jul 7 00:43:11.941608 containerd[1927]: time="2025-07-07T00:43:11.941587510Z" level=info msg="StartContainer for \"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" returns successfully" Jul 7 00:43:11.947230 containerd[1927]: time="2025-07-07T00:43:11.947198137Z" level=info msg="connecting to shim e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527" address="unix:///run/containerd/s/d443564b80227584ad07798f74cb6ab471c3f96d781661227aa0937b497e1edc" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:43:11.963278 systemd[1]: Started cri-containerd-e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527.scope - libcontainer container e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527. 
Jul 7 00:43:11.984310 systemd-networkd[1844]: vxlan.calico: Gained IPv6LL Jul 7 00:43:11.989798 containerd[1927]: time="2025-07-07T00:43:11.989777943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-757b8,Uid:8a042587-3358-4c70-a680-4a64a2dac953,Namespace:kube-system,Attempt:0,} returns sandbox id \"e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527\"" Jul 7 00:43:11.991648 containerd[1927]: time="2025-07-07T00:43:11.991610181Z" level=info msg="CreateContainer within sandbox \"e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:43:11.994582 containerd[1927]: time="2025-07-07T00:43:11.994541014Z" level=info msg="Container 276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:11.997348 containerd[1927]: time="2025-07-07T00:43:11.997307892Z" level=info msg="CreateContainer within sandbox \"e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f\"" Jul 7 00:43:11.997533 containerd[1927]: time="2025-07-07T00:43:11.997493173Z" level=info msg="StartContainer for \"276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f\"" Jul 7 00:43:11.997935 containerd[1927]: time="2025-07-07T00:43:11.997889554Z" level=info msg="connecting to shim 276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f" address="unix:///run/containerd/s/d443564b80227584ad07798f74cb6ab471c3f96d781661227aa0937b497e1edc" protocol=ttrpc version=3 Jul 7 00:43:12.008057 kubelet[3285]: I0707 00:43:12.008027 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-668b8d6659-jwwj5" podStartSLOduration=19.128903686 podStartE2EDuration="22.008016738s" podCreationTimestamp="2025-07-07 00:42:50 +0000 UTC" firstStartedPulling="2025-07-07 00:43:09.003412937 +0000 UTC m=+33.182458047" lastFinishedPulling="2025-07-07 00:43:11.882525988 +0000 UTC m=+36.061571099" observedRunningTime="2025-07-07 00:43:12.007626493 +0000 UTC m=+36.186671622" watchObservedRunningTime="2025-07-07 00:43:12.008016738 +0000 UTC m=+36.187061848" Jul 7 00:43:12.025261 systemd[1]: Started cri-containerd-276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f.scope - libcontainer container 276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f. 
Jul 7 00:43:12.032880 systemd-networkd[1844]: cali45d0bd8d6d2: Link UP Jul 7 00:43:12.033021 systemd-networkd[1844]: cali45d0bd8d6d2: Gained carrier Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:11.897 [INFO][5773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0 csi-node-driver- calico-system 0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816 686 0 2025-07-07 00:42:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.1-a-7d9f698c61 csi-node-driver-gwbvc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali45d0bd8d6d2 [] [] }} ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Namespace="calico-system" Pod="csi-node-driver-gwbvc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:11.897 [INFO][5773] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Namespace="calico-system" Pod="csi-node-driver-gwbvc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:11.909 [INFO][5838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" HandleID="k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Workload="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:11.909 [INFO][5838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" HandleID="k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Workload="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00061f890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-7d9f698c61", "pod":"csi-node-driver-gwbvc", "timestamp":"2025-07-07 00:43:11.909004422 +0000 UTC"}, Hostname:"ci-4344.1.1-a-7d9f698c61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:11.909 [INFO][5838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:11.931 [INFO][5838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:11.931 [INFO][5838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-7d9f698c61' Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.014 [INFO][5838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.018 [INFO][5838] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.022 [INFO][5838] ipam/ipam.go 511: Trying affinity for 192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.023 [INFO][5838] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.025 [INFO][5838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.025 [INFO][5838] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.128/26 handle="k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.026 [INFO][5838] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944 Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.028 [INFO][5838] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.128/26 handle="k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.031 [INFO][5838] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.134/26] block=192.168.108.128/26 handle="k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.031 [INFO][5838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.134/26] handle="k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.031 [INFO][5838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:43:12.038953 containerd[1927]: 2025-07-07 00:43:12.031 [INFO][5838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.134/26] IPv6=[] ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" HandleID="k8s-pod-network.3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Workload="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" Jul 7 00:43:12.039444 containerd[1927]: 2025-07-07 00:43:12.032 [INFO][5773] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Namespace="calico-system" Pod="csi-node-driver-gwbvc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"", Pod:"csi-node-driver-gwbvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali45d0bd8d6d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:12.039444 containerd[1927]: 2025-07-07 00:43:12.032 [INFO][5773] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.134/32] ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Namespace="calico-system" Pod="csi-node-driver-gwbvc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" Jul 7 00:43:12.039444 containerd[1927]: 2025-07-07 00:43:12.032 [INFO][5773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45d0bd8d6d2 ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Namespace="calico-system" Pod="csi-node-driver-gwbvc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" Jul 7 00:43:12.039444 containerd[1927]: 2025-07-07 00:43:12.033 [INFO][5773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Namespace="calico-system" Pod="csi-node-driver-gwbvc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" Jul 7 00:43:12.039444 containerd[1927]: 2025-07-07 00:43:12.033 [INFO][5773] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Namespace="calico-system" Pod="csi-node-driver-gwbvc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944", Pod:"csi-node-driver-gwbvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali45d0bd8d6d2", MAC:"ee:4f:c8:18:6d:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:12.039444 containerd[1927]: 2025-07-07 00:43:12.037 [INFO][5773] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" Namespace="calico-system" Pod="csi-node-driver-gwbvc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-csi--node--driver--gwbvc-eth0" Jul 7 00:43:12.039729 containerd[1927]: time="2025-07-07T00:43:12.039706586Z" level=info msg="StartContainer for \"276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f\" returns successfully" Jul 7 00:43:12.046428 containerd[1927]: time="2025-07-07T00:43:12.046397863Z" level=info msg="connecting to shim 3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944" address="unix:///run/containerd/s/eab8f2c0ea81aa992f2d8eea6b88fd39cdbf8b9160f96fde4af8908ec7a61e7c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:43:12.061256 systemd[1]: Started cri-containerd-3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944.scope - libcontainer container 3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944. 
Jul 7 00:43:12.072596 containerd[1927]: time="2025-07-07T00:43:12.072577072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gwbvc,Uid:0fa52e8a-71d9-43fe-a8ae-8d8b2dcee816,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944\"" Jul 7 00:43:12.866072 containerd[1927]: time="2025-07-07T00:43:12.866050605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd87677cd-wzhxw,Uid:6c6d49bf-06c0-4afe-841d-9180c456f4c7,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:43:12.926350 systemd-networkd[1844]: cali6ef00118642: Link UP Jul 7 00:43:12.926659 systemd-networkd[1844]: cali6ef00118642: Gained carrier Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.884 [INFO][6067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0 calico-apiserver-6cd87677cd- calico-apiserver 6c6d49bf-06c0-4afe-841d-9180c456f4c7 800 0 2025-07-07 00:42:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd87677cd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-a-7d9f698c61 calico-apiserver-6cd87677cd-wzhxw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6ef00118642 [] [] }} ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-wzhxw" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.884 [INFO][6067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-wzhxw" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.897 [INFO][6091] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" HandleID="k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.898 [INFO][6091] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" HandleID="k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000604b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-a-7d9f698c61", "pod":"calico-apiserver-6cd87677cd-wzhxw", "timestamp":"2025-07-07 00:43:12.897951605 +0000 UTC"}, Hostname:"ci-4344.1.1-a-7d9f698c61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.898 [INFO][6091] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.898 [INFO][6091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.898 [INFO][6091] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-7d9f698c61' Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.903 [INFO][6091] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.907 [INFO][6091] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.911 [INFO][6091] ipam/ipam.go 511: Trying affinity for 192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.912 [INFO][6091] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.914 [INFO][6091] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.914 [INFO][6091] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.128/26 handle="k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.916 [INFO][6091] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.919 [INFO][6091] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.128/26 handle="k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.922 [INFO][6091] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.135/26] block=192.168.108.128/26 handle="k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.922 [INFO][6091] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.135/26] handle="k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.922 [INFO][6091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:43:12.931765 containerd[1927]: 2025-07-07 00:43:12.923 [INFO][6091] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.135/26] IPv6=[] ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" HandleID="k8s-pod-network.1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Workload="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" Jul 7 00:43:12.932303 containerd[1927]: 2025-07-07 00:43:12.924 [INFO][6067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-wzhxw" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0", GenerateName:"calico-apiserver-6cd87677cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c6d49bf-06c0-4afe-841d-9180c456f4c7", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd87677cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"", Pod:"calico-apiserver-6cd87677cd-wzhxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ef00118642", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:12.932303 containerd[1927]: 2025-07-07 00:43:12.924 [INFO][6067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.135/32] ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-wzhxw" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" Jul 7 00:43:12.932303 containerd[1927]: 2025-07-07 00:43:12.925 [INFO][6067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ef00118642 ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-wzhxw" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" Jul 7 00:43:12.932303 containerd[1927]: 2025-07-07 00:43:12.926 [INFO][6067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-wzhxw" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" Jul 7 00:43:12.932303 containerd[1927]: 2025-07-07 00:43:12.926 [INFO][6067] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-wzhxw" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0", GenerateName:"calico-apiserver-6cd87677cd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c6d49bf-06c0-4afe-841d-9180c456f4c7", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd87677cd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c", Pod:"calico-apiserver-6cd87677cd-wzhxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ef00118642", MAC:"8e:ad:bc:c6:c3:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:12.932303 containerd[1927]: 2025-07-07 00:43:12.930 [INFO][6067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd87677cd-wzhxw" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-calico--apiserver--6cd87677cd--wzhxw-eth0" Jul 7 00:43:12.939240 containerd[1927]: time="2025-07-07T00:43:12.939211114Z" level=info msg="connecting to shim 1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c" address="unix:///run/containerd/s/a2587da3d1cab11d521a35c123a0058d5f6796bf8faac5bbd38f46729e7a84c3" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:43:12.967548 systemd[1]: Started cri-containerd-1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c.scope - libcontainer container 1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c. 
Jul 7 00:43:13.010430 kubelet[3285]: I0707 00:43:13.010330 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:13.036514 kubelet[3285]: I0707 00:43:13.036463 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-757b8" podStartSLOduration=32.036442239 podStartE2EDuration="32.036442239s" podCreationTimestamp="2025-07-07 00:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:43:13.026829161 +0000 UTC m=+37.205874320" watchObservedRunningTime="2025-07-07 00:43:13.036442239 +0000 UTC m=+37.215487360" Jul 7 00:43:13.050942 containerd[1927]: time="2025-07-07T00:43:13.050920350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd87677cd-wzhxw,Uid:6c6d49bf-06c0-4afe-841d-9180c456f4c7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c\"" Jul 7 00:43:13.328365 systemd-networkd[1844]: cali45d0bd8d6d2: Gained IPv6LL Jul 7 00:43:13.841247 systemd-networkd[1844]: cali585c4c70e95: Gained IPv6LL Jul 7 00:43:13.875945 containerd[1927]: time="2025-07-07T00:43:13.875830264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5xlc,Uid:1c87a2da-0164-48e5-8bac-03563358ac5b,Namespace:kube-system,Attempt:0,}" Jul 7 00:43:13.926267 systemd-networkd[1844]: califa5e7284106: Link UP Jul 7 00:43:13.926393 systemd-networkd[1844]: califa5e7284106: Gained carrier Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.895 [INFO][6163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0 coredns-674b8bbfcf- kube-system 1c87a2da-0164-48e5-8bac-03563358ac5b 795 0 2025-07-07 00:42:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-a-7d9f698c61 coredns-674b8bbfcf-r5xlc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa5e7284106 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5xlc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.895 [INFO][6163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5xlc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.907 [INFO][6187] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" HandleID="k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Workload="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.907 [INFO][6187] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" 
HandleID="k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Workload="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00062aaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-a-7d9f698c61", "pod":"coredns-674b8bbfcf-r5xlc", "timestamp":"2025-07-07 00:43:13.907622861 +0000 UTC"}, Hostname:"ci-4344.1.1-a-7d9f698c61", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.907 [INFO][6187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.907 [INFO][6187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.907 [INFO][6187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-7d9f698c61' Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.911 [INFO][6187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.914 [INFO][6187] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.917 [INFO][6187] ipam/ipam.go 511: Trying affinity for 192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.918 [INFO][6187] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.919 [INFO][6187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.128/26 host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.919 [INFO][6187] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.128/26 handle="k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.919 [INFO][6187] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2 Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.921 [INFO][6187] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.128/26 handle="k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.924 [INFO][6187] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.136/26] block=192.168.108.128/26 handle="k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.924 [INFO][6187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.136/26] handle="k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" host="ci-4344.1.1-a-7d9f698c61" Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.924 [INFO][6187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:43:13.932897 containerd[1927]: 2025-07-07 00:43:13.924 [INFO][6187] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.136/26] IPv6=[] ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" HandleID="k8s-pod-network.0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Workload="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" Jul 7 00:43:13.933448 containerd[1927]: 2025-07-07 00:43:13.925 [INFO][6163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5xlc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1c87a2da-0164-48e5-8bac-03563358ac5b", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"", Pod:"coredns-674b8bbfcf-r5xlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa5e7284106", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:13.933448 containerd[1927]: 2025-07-07 00:43:13.925 [INFO][6163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.136/32] ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5xlc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" Jul 7 00:43:13.933448 containerd[1927]: 2025-07-07 00:43:13.925 [INFO][6163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa5e7284106 ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5xlc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" Jul 7 00:43:13.933448 containerd[1927]: 2025-07-07 00:43:13.926 [INFO][6163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-r5xlc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" Jul 7 00:43:13.933448 containerd[1927]: 2025-07-07 00:43:13.926 [INFO][6163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5xlc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1c87a2da-0164-48e5-8bac-03563358ac5b", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-7d9f698c61", ContainerID:"0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2", Pod:"coredns-674b8bbfcf-r5xlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa5e7284106", MAC:"be:cf:dd:95:81:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:43:13.933448 containerd[1927]: 2025-07-07 00:43:13.931 [INFO][6163] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5xlc" WorkloadEndpoint="ci--4344.1.1--a--7d9f698c61-k8s-coredns--674b8bbfcf--r5xlc-eth0" Jul 7 00:43:14.079494 containerd[1927]: time="2025-07-07T00:43:14.079469419Z" level=info msg="connecting to shim 0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2" address="unix:///run/containerd/s/db15e712c5d187740aaf359a91e55f64214b2a57823640324e2a5eedca6ab82d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:43:14.100214 systemd[1]: Started cri-containerd-0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2.scope - libcontainer container 0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2. 
Jul 7 00:43:14.125948 containerd[1927]: time="2025-07-07T00:43:14.125929273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5xlc,Uid:1c87a2da-0164-48e5-8bac-03563358ac5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2\"" Jul 7 00:43:14.127794 containerd[1927]: time="2025-07-07T00:43:14.127778955Z" level=info msg="CreateContainer within sandbox \"0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:43:14.130716 containerd[1927]: time="2025-07-07T00:43:14.130701726Z" level=info msg="Container 58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:14.132994 containerd[1927]: time="2025-07-07T00:43:14.132955438Z" level=info msg="CreateContainer within sandbox \"0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343\"" Jul 7 00:43:14.133220 containerd[1927]: time="2025-07-07T00:43:14.133176162Z" level=info msg="StartContainer for \"58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343\"" Jul 7 00:43:14.133722 containerd[1927]: time="2025-07-07T00:43:14.133680320Z" level=info msg="connecting to shim 58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343" address="unix:///run/containerd/s/db15e712c5d187740aaf359a91e55f64214b2a57823640324e2a5eedca6ab82d" protocol=ttrpc version=3 Jul 7 00:43:14.148225 systemd[1]: Started cri-containerd-58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343.scope - libcontainer container 58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343. 
Jul 7 00:43:14.161696 containerd[1927]: time="2025-07-07T00:43:14.161676556Z" level=info msg="StartContainer for \"58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343\" returns successfully" Jul 7 00:43:14.494797 containerd[1927]: time="2025-07-07T00:43:14.494745685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:14.494980 containerd[1927]: time="2025-07-07T00:43:14.494938380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:43:14.495349 containerd[1927]: time="2025-07-07T00:43:14.495305237Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:14.496256 containerd[1927]: time="2025-07-07T00:43:14.496212800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:14.496639 containerd[1927]: time="2025-07-07T00:43:14.496598871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.613964568s" Jul 7 00:43:14.496639 containerd[1927]: time="2025-07-07T00:43:14.496614017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:43:14.497076 containerd[1927]: time="2025-07-07T00:43:14.497064082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:43:14.498102 containerd[1927]: time="2025-07-07T00:43:14.498088803Z" level=info msg="CreateContainer within sandbox \"4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:43:14.500818 containerd[1927]: time="2025-07-07T00:43:14.500806123Z" level=info msg="Container 4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:14.503507 containerd[1927]: time="2025-07-07T00:43:14.503467538Z" level=info msg="CreateContainer within sandbox \"4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\"" Jul 7 00:43:14.503710 containerd[1927]: time="2025-07-07T00:43:14.503669371Z" level=info msg="StartContainer for \"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\"" Jul 7 00:43:14.504240 containerd[1927]: time="2025-07-07T00:43:14.504191448Z" level=info msg="connecting to shim 4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930" address="unix:///run/containerd/s/604a0b76bb1bd0c0ed4bb538dfa708f5b7f40af26154e790d963cd95dad01559" protocol=ttrpc version=3 Jul 7 00:43:14.526561 systemd[1]: Started cri-containerd-4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930.scope - libcontainer container 4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930. 
Jul 7 00:43:14.597853 containerd[1927]: time="2025-07-07T00:43:14.597803308Z" level=info msg="StartContainer for \"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" returns successfully" Jul 7 00:43:14.884656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537824352.mount: Deactivated successfully. Jul 7 00:43:14.992479 systemd-networkd[1844]: cali6ef00118642: Gained IPv6LL Jul 7 00:43:15.044413 kubelet[3285]: I0707 00:43:15.044281 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-xgnbs" podStartSLOduration=20.550464029 podStartE2EDuration="25.044243506s" podCreationTimestamp="2025-07-07 00:42:50 +0000 UTC" firstStartedPulling="2025-07-07 00:43:10.003226302 +0000 UTC m=+34.182271415" lastFinishedPulling="2025-07-07 00:43:14.49700578 +0000 UTC m=+38.676050892" observedRunningTime="2025-07-07 00:43:15.043211239 +0000 UTC m=+39.222256429" watchObservedRunningTime="2025-07-07 00:43:15.044243506 +0000 UTC m=+39.223288667" Jul 7 00:43:15.062556 kubelet[3285]: I0707 00:43:15.062470 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r5xlc" podStartSLOduration=34.062440958 podStartE2EDuration="34.062440958s" podCreationTimestamp="2025-07-07 00:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:43:15.061587127 +0000 UTC m=+39.240632296" watchObservedRunningTime="2025-07-07 00:43:15.062440958 +0000 UTC m=+39.241486096" Jul 7 00:43:15.118860 containerd[1927]: time="2025-07-07T00:43:15.118836359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"530fd9f729c53ec1769972df74dd30d3b2f0d2a7d221949eed5f4143872a24ff\" pid:6374 exit_status:1 exited_at:{seconds:1751848995 nanos:118629962}" Jul 7 00:43:15.824513 systemd-networkd[1844]: califa5e7284106: Gained IPv6LL Jul 7 00:43:16.123347 containerd[1927]: time="2025-07-07T00:43:16.123282117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"d2e3a4bb951c780290312f3c3ff3331a8a483b92c8221378814edbcc34a0ffdf\" pid:6420 exited_at:{seconds:1751848996 nanos:123103838}" Jul 7 00:43:17.114072 containerd[1927]: time="2025-07-07T00:43:17.114015747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:17.114296 containerd[1927]: time="2025-07-07T00:43:17.114253691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:43:17.114630 containerd[1927]: time="2025-07-07T00:43:17.114589177Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:17.115450 containerd[1927]: time="2025-07-07T00:43:17.115406979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:17.115830 containerd[1927]: time="2025-07-07T00:43:17.115788097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.618708855s" Jul 7 00:43:17.115830 containerd[1927]: time="2025-07-07T00:43:17.115805340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:43:17.116257 containerd[1927]: time="2025-07-07T00:43:17.116209795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:43:17.117184 containerd[1927]: time="2025-07-07T00:43:17.117173365Z" level=info msg="CreateContainer within sandbox \"9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:43:17.119624 containerd[1927]: time="2025-07-07T00:43:17.119585354Z" level=info msg="Container dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:17.122191 containerd[1927]: time="2025-07-07T00:43:17.122153762Z" level=info msg="CreateContainer within sandbox \"9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097\"" Jul 7 00:43:17.122420 containerd[1927]: time="2025-07-07T00:43:17.122378187Z" level=info msg="StartContainer for \"dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097\"" Jul 7 00:43:17.122902 containerd[1927]: time="2025-07-07T00:43:17.122865608Z" level=info msg="connecting to shim dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097" address="unix:///run/containerd/s/86bcb540f95ecd7238ea00d636775a12707371229f1ff46fad927a1078914da7" protocol=ttrpc version=3 Jul 7 00:43:17.143393 systemd[1]: Started cri-containerd-dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097.scope - libcontainer container dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097. 
Jul 7 00:43:17.173708 containerd[1927]: time="2025-07-07T00:43:17.173682906Z" level=info msg="StartContainer for \"dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097\" returns successfully" Jul 7 00:43:18.055931 kubelet[3285]: I0707 00:43:18.055815 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cd87677cd-smbqf" podStartSLOduration=24.027192975 podStartE2EDuration="31.055780374s" podCreationTimestamp="2025-07-07 00:42:47 +0000 UTC" firstStartedPulling="2025-07-07 00:43:10.087573125 +0000 UTC m=+34.266618235" lastFinishedPulling="2025-07-07 00:43:17.116160525 +0000 UTC m=+41.295205634" observedRunningTime="2025-07-07 00:43:18.055248488 +0000 UTC m=+42.234293714" watchObservedRunningTime="2025-07-07 00:43:18.055780374 +0000 UTC m=+42.234825533" Jul 7 00:43:18.492088 containerd[1927]: time="2025-07-07T00:43:18.492061935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:18.492364 containerd[1927]: time="2025-07-07T00:43:18.492320874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:43:18.492705 containerd[1927]: time="2025-07-07T00:43:18.492691826Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:18.493709 containerd[1927]: time="2025-07-07T00:43:18.493698134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:18.493972 containerd[1927]: time="2025-07-07T00:43:18.493959407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.377735s" Jul 7 00:43:18.494004 containerd[1927]: time="2025-07-07T00:43:18.493974741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:43:18.494390 containerd[1927]: time="2025-07-07T00:43:18.494377465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:43:18.495407 containerd[1927]: time="2025-07-07T00:43:18.495395257Z" level=info msg="CreateContainer within sandbox \"3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:43:18.499064 containerd[1927]: time="2025-07-07T00:43:18.499022326Z" level=info msg="Container e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:18.502722 containerd[1927]: time="2025-07-07T00:43:18.502681474Z" level=info msg="CreateContainer within sandbox \"3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26\"" Jul 7 00:43:18.502913 containerd[1927]: time="2025-07-07T00:43:18.502902899Z" level=info msg="StartContainer for 
\"e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26\"" Jul 7 00:43:18.503705 containerd[1927]: time="2025-07-07T00:43:18.503665431Z" level=info msg="connecting to shim e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26" address="unix:///run/containerd/s/eab8f2c0ea81aa992f2d8eea6b88fd39cdbf8b9160f96fde4af8908ec7a61e7c" protocol=ttrpc version=3 Jul 7 00:43:18.528368 systemd[1]: Started cri-containerd-e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26.scope - libcontainer container e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26. Jul 7 00:43:18.549890 containerd[1927]: time="2025-07-07T00:43:18.549865184Z" level=info msg="StartContainer for \"e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26\" returns successfully" Jul 7 00:43:18.874074 containerd[1927]: time="2025-07-07T00:43:18.873990501Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:18.874137 containerd[1927]: time="2025-07-07T00:43:18.874118155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:43:18.875250 containerd[1927]: time="2025-07-07T00:43:18.875208451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 380.817165ms" Jul 7 00:43:18.875250 containerd[1927]: time="2025-07-07T00:43:18.875225224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:43:18.875643 containerd[1927]: time="2025-07-07T00:43:18.875631184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:43:18.876727 containerd[1927]: time="2025-07-07T00:43:18.876714853Z" level=info msg="CreateContainer within sandbox \"1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:43:18.879276 containerd[1927]: time="2025-07-07T00:43:18.879236672Z" level=info msg="Container a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:18.882117 containerd[1927]: time="2025-07-07T00:43:18.882103639Z" level=info msg="CreateContainer within sandbox \"1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b\"" Jul 7 00:43:18.882329 containerd[1927]: time="2025-07-07T00:43:18.882315486Z" level=info msg="StartContainer for \"a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b\"" Jul 7 00:43:18.883211 containerd[1927]: time="2025-07-07T00:43:18.883186178Z" level=info msg="connecting to shim a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b" address="unix:///run/containerd/s/a2587da3d1cab11d521a35c123a0058d5f6796bf8faac5bbd38f46729e7a84c3" protocol=ttrpc version=3 Jul 7 00:43:18.905595 systemd[1]: Started cri-containerd-a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b.scope - libcontainer container 
a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b. Jul 7 00:43:18.979683 containerd[1927]: time="2025-07-07T00:43:18.979627378Z" level=info msg="StartContainer for \"a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b\" returns successfully" Jul 7 00:43:19.038624 kubelet[3285]: I0707 00:43:19.038609 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:19.044899 kubelet[3285]: I0707 00:43:19.044863 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cd87677cd-wzhxw" podStartSLOduration=26.22076112 podStartE2EDuration="32.044849669s" podCreationTimestamp="2025-07-07 00:42:47 +0000 UTC" firstStartedPulling="2025-07-07 00:43:13.051488963 +0000 UTC m=+37.230534073" lastFinishedPulling="2025-07-07 00:43:18.875577512 +0000 UTC m=+43.054622622" observedRunningTime="2025-07-07 00:43:19.044660997 +0000 UTC m=+43.223706118" watchObservedRunningTime="2025-07-07 00:43:19.044849669 +0000 UTC m=+43.223894778" Jul 7 00:43:19.087188 kubelet[3285]: I0707 00:43:19.087163 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:19.124196 containerd[1927]: time="2025-07-07T00:43:19.124141921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"7aef7ed79a7c67d415a70bf4c361e2636379130298d168723829663e2571b759\" pid:6596 exited_at:{seconds:1751848999 nanos:123990675}" Jul 7 00:43:19.160246 containerd[1927]: time="2025-07-07T00:43:19.160222532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"796d8c610b93c6e0f4c101e57c64804990b2ccce909f04c124dcd6e68efaa928\" pid:6619 exited_at:{seconds:1751848999 nanos:160113415}" Jul 7 00:43:20.040500 kubelet[3285]: I0707 00:43:20.040463 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:20.322875 containerd[1927]: time="2025-07-07T00:43:20.322812429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:20.323173 containerd[1927]: time="2025-07-07T00:43:20.323043027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:43:20.323445 containerd[1927]: time="2025-07-07T00:43:20.323429389Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:20.324379 containerd[1927]: time="2025-07-07T00:43:20.324341412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:43:20.324618 containerd[1927]: time="2025-07-07T00:43:20.324582372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.448938212s" Jul 7 00:43:20.324618 containerd[1927]: 
time="2025-07-07T00:43:20.324598995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:43:20.326074 containerd[1927]: time="2025-07-07T00:43:20.326061006Z" level=info msg="CreateContainer within sandbox \"3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:43:20.329281 containerd[1927]: time="2025-07-07T00:43:20.329267671Z" level=info msg="Container 28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:43:20.332869 containerd[1927]: time="2025-07-07T00:43:20.332832389Z" level=info msg="CreateContainer within sandbox \"3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914\"" Jul 7 00:43:20.333033 containerd[1927]: time="2025-07-07T00:43:20.333022018Z" level=info msg="StartContainer for \"28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914\"" Jul 7 00:43:20.333759 containerd[1927]: time="2025-07-07T00:43:20.333746472Z" level=info msg="connecting to shim 28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914" address="unix:///run/containerd/s/eab8f2c0ea81aa992f2d8eea6b88fd39cdbf8b9160f96fde4af8908ec7a61e7c" protocol=ttrpc version=3 Jul 7 00:43:20.353409 systemd[1]: Started cri-containerd-28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914.scope - libcontainer container 28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914. Jul 7 00:43:20.372819 containerd[1927]: time="2025-07-07T00:43:20.372793002Z" level=info msg="StartContainer for \"28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914\" returns successfully" Jul 7 00:43:20.905495 kubelet[3285]: I0707 00:43:20.905405 3285 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:43:20.905495 kubelet[3285]: I0707 00:43:20.905483 3285 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:43:21.085711 kubelet[3285]: I0707 00:43:21.085569 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gwbvc" podStartSLOduration=22.833714776 podStartE2EDuration="31.085527892s" podCreationTimestamp="2025-07-07 00:42:50 +0000 UTC" firstStartedPulling="2025-07-07 00:43:12.07309418 +0000 UTC m=+36.252139289" lastFinishedPulling="2025-07-07 00:43:20.324907295 +0000 UTC m=+44.503952405" observedRunningTime="2025-07-07 00:43:21.084414663 +0000 UTC m=+45.263459863" watchObservedRunningTime="2025-07-07 00:43:21.085527892 +0000 UTC m=+45.264573051" Jul 7 00:43:25.056994 kubelet[3285]: I0707 00:43:25.056922 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:25.766878 kubelet[3285]: I0707 00:43:25.766791 3285 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:43:36.315074 containerd[1927]: time="2025-07-07T00:43:36.314998741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" 
id:\"e3962e425b0ad51025d25260b417743d308bb595aeef3b5e0d06f119ecae3a6b\" pid:6709 exited_at:{seconds:1751849016 nanos:314723380}" Jul 7 00:43:38.719812 containerd[1927]: time="2025-07-07T00:43:38.719782357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"91348ac487998cee3eff13a9f109c8bc7f0402e219ffe7bd5231e70bee9483c1\" pid:6744 exited_at:{seconds:1751849018 nanos:719549408}" Jul 7 00:43:46.138785 containerd[1927]: time="2025-07-07T00:43:46.138758878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"adde51335e05a75a7329c6406dc65b96e918f6ad8857eba1820fa43583c1be46\" pid:6784 exited_at:{seconds:1751849026 nanos:138582472}" Jul 7 00:43:49.212236 containerd[1927]: time="2025-07-07T00:43:49.212177872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"a09ae0d4a6b8715f7b6ea6700107143247dba6bc23225233065c18f193690c34\" pid:6819 exited_at:{seconds:1751849029 nanos:212043248}" Jul 7 00:43:49.471246 containerd[1927]: time="2025-07-07T00:43:49.471177619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"8b31f465b8519eb6cad40e4729185002fe813ea92716621219facdaa6f2fd23c\" pid:6840 exited_at:{seconds:1751849029 nanos:471021412}" Jul 7 00:44:08.719316 containerd[1927]: time="2025-07-07T00:44:08.719290406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"65e333f2832916664c18f0315db79d1b59e6f141052d6b1db8a90b9f9fbf155c\" pid:6872 exited_at:{seconds:1751849048 nanos:719089190}" Jul 7 00:44:16.103669 containerd[1927]: time="2025-07-07T00:44:16.103645257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"a1b042b47460925495233078e9c0e1c84332e6b0c816935df59653773b12a3a6\" pid:6911 exited_at:{seconds:1751849056 nanos:103473880}" Jul 7 00:44:19.237230 containerd[1927]: time="2025-07-07T00:44:19.237198508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"16651c7f4e686443c63de4e8808e6990ef73a8dee880f5e1f29426f55f7e1cdd\" pid:6942 exited_at:{seconds:1751849059 nanos:237049820}" Jul 7 00:44:36.312722 containerd[1927]: time="2025-07-07T00:44:36.312697820Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"ea0854dc17bfdb39c342d5c86ed17cdf159386ebda400eb8b90f870d11b97355\" pid:6973 exited_at:{seconds:1751849076 nanos:312522820}" Jul 7 00:44:38.667745 containerd[1927]: time="2025-07-07T00:44:38.667721410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"361027cd6e7ec00ee93fd6a1531b1adaa74e55293910664578224ca68bd3f44e\" pid:7008 exited_at:{seconds:1751849078 nanos:667516932}" Jul 7 00:44:46.135854 containerd[1927]: time="2025-07-07T00:44:46.135791671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"e32da7c16b000c3116e5026e11ee75c8986836334be76781a39e1b9b4e20d739\" pid:7068 exited_at:{seconds:1751849086 nanos:135504795}" Jul 7 
00:44:49.173777 containerd[1927]: time="2025-07-07T00:44:49.173753849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"38d0df5fd3dd09836391972e4b72e7ef164f879b8e53b9cdf43853544afc2204\" pid:7102 exited_at:{seconds:1751849089 nanos:173648327}" Jul 7 00:44:49.419804 containerd[1927]: time="2025-07-07T00:44:49.419753085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"5d1036def235772ed4d9e58218a038b7f58e54ba8a27445d564ebfc125622069\" pid:7124 exited_at:{seconds:1751849089 nanos:419634913}" Jul 7 00:45:08.679511 containerd[1927]: time="2025-07-07T00:45:08.679423958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"55339af9c6a278a50acceb6e2f06068f4d876a2a9c77db3a966e0ba3aa21b49e\" pid:7145 exited_at:{seconds:1751849108 nanos:679199327}" Jul 7 00:45:16.093390 containerd[1927]: time="2025-07-07T00:45:16.093329104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"c7a7bb52f0caa4a8aa2e2c970117ea06980807ded489b9fdc61c9d49e47ac6a0\" pid:7181 exited_at:{seconds:1751849116 nanos:93106025}" Jul 7 00:45:19.171789 containerd[1927]: time="2025-07-07T00:45:19.171758441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"5d78a372d071a6c11b4cadbe28d00138e65f95494d04155e941f687c34fdaae7\" pid:7212 exited_at:{seconds:1751849119 nanos:171620163}" Jul 7 00:45:36.352933 containerd[1927]: time="2025-07-07T00:45:36.352907364Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"4e716dfaf82b1a68a7cd242e5b60ebd12826ae1ee9ad34341038a70ce6572194\" pid:7236 exited_at:{seconds:1751849136 nanos:352700878}" Jul 7 00:45:38.720463 containerd[1927]: time="2025-07-07T00:45:38.720435406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"0efafb9f29a56046eb913541ebdd3d0e463a84ac25703335f1ba5fd03aa549cc\" pid:7269 exited_at:{seconds:1751849138 nanos:720233239}" Jul 7 00:45:46.134564 containerd[1927]: time="2025-07-07T00:45:46.134541288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"1229b0a2091b7a43a1b50a9b4317753bbaddde50fafb7092bf9bb06c437551df\" pid:7306 exited_at:{seconds:1751849146 nanos:134316466}" Jul 7 00:45:49.171219 containerd[1927]: time="2025-07-07T00:45:49.171194542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"79362e7613109c19b9740cf410c367fc8b830608c5d5327dace6cf3fac205aa9\" pid:7338 exited_at:{seconds:1751849149 nanos:170998213}" Jul 7 00:45:49.465301 containerd[1927]: time="2025-07-07T00:45:49.465273408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"67c8d31cc9ad5410b00331297d87b101ac3c7dcf00eb4f641191f7a8b63f3195\" pid:7361 exited_at:{seconds:1751849149 nanos:465106027}" Jul 7 00:46:08.725835 containerd[1927]: time="2025-07-07T00:46:08.725806036Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"472b8a67dc06dfd5197c5441be0b9973e84312759de3d0cd1c03a32c03e50acb\" pid:7391 exited_at:{seconds:1751849168 nanos:725596837}" Jul 7 00:46:16.140918 containerd[1927]: time="2025-07-07T00:46:16.140865421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"efc0258780d84192eba03142e03c83ae8c8c2b55ec2337a03d26e86f73a4397d\" pid:7431 exited_at:{seconds:1751849176 nanos:140687445}" Jul 7 00:46:19.172694 containerd[1927]: time="2025-07-07T00:46:19.172673347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"bb8abf2ffa310e4ca14a0b0cc7765f2f94895324f3aa9e381e883eb6a5ba9313\" pid:7476 exited_at:{seconds:1751849179 nanos:172553505}" Jul 7 00:46:36.322684 containerd[1927]: time="2025-07-07T00:46:36.322615959Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"586683cc41fc4125cb7c197f2bf317bd50df96cbab4b25bdfeb2c3d67993c685\" pid:7509 exited_at:{seconds:1751849196 nanos:322231022}" Jul 7 00:46:38.683498 containerd[1927]: time="2025-07-07T00:46:38.683470607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"11181061344d97e422553ec1fe17ec7af9784643215010c9d616e7816d7b7cdd\" pid:7540 exited_at:{seconds:1751849198 nanos:683182994}" Jul 7 00:46:46.139464 containerd[1927]: time="2025-07-07T00:46:46.139436376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"d077a24d8981703503f988f58ac26af001b0ed06af3cc06631000b46aa77b2c3\" pid:7577 exited_at:{seconds:1751849206 nanos:139252024}" Jul 7 00:46:49.174806 containerd[1927]: time="2025-07-07T00:46:49.174774374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"fd86bc3fd299797122bbb62832354c9c89a06759c6bbbf531e3296fe79b12bca\" pid:7611 exited_at:{seconds:1751849209 nanos:174627249}" Jul 7 00:46:49.438779 containerd[1927]: time="2025-07-07T00:46:49.438761692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"afacc71e42f32a1dea87e5b5bf5b6b3252d73bc3eaad2cf834abeaee40f60503\" pid:7632 exited_at:{seconds:1751849209 nanos:438666541}" Jul 7 00:47:08.738494 containerd[1927]: time="2025-07-07T00:47:08.738462335Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"fec4c98c069331405d86b2bb23b1fed699ecedeee6e9b32eab3f3d65dcf7f6f0\" pid:7657 exited_at:{seconds:1751849228 nanos:738192563}" Jul 7 00:47:16.136769 containerd[1927]: time="2025-07-07T00:47:16.136711822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"989298ca420a1d62c597350dab97401cfe4e178ea950c8340ae89a818dfb5de4\" pid:7695 exited_at:{seconds:1751849236 nanos:136464366}" Jul 7 00:47:19.167620 containerd[1927]: time="2025-07-07T00:47:19.167596893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" 
id:\"b12d5459baed793850b9e3becf430deb693c2a9e8984b3214f216e570fbe6d77\" pid:7729 exited_at:{seconds:1751849239 nanos:167489977}" Jul 7 00:47:32.269093 containerd[1927]: time="2025-07-07T00:47:32.268909014Z" level=warning msg="container event discarded" container=b59f3838ecaeeb417c41fd587ed75c59f5e7f4fe40a6950dfbeacd16019e7611 type=CONTAINER_CREATED_EVENT Jul 7 00:47:32.269093 containerd[1927]: time="2025-07-07T00:47:32.269026979Z" level=warning msg="container event discarded" container=b59f3838ecaeeb417c41fd587ed75c59f5e7f4fe40a6950dfbeacd16019e7611 type=CONTAINER_STARTED_EVENT Jul 7 00:47:32.287442 containerd[1927]: time="2025-07-07T00:47:32.287335954Z" level=warning msg="container event discarded" container=bb000f4e0ce01a68f4d2a5d56be98a0a04324de2456c9acf8defc5b44bdc1e06 type=CONTAINER_CREATED_EVENT Jul 7 00:47:32.287442 containerd[1927]: time="2025-07-07T00:47:32.287401783Z" level=warning msg="container event discarded" container=bb000f4e0ce01a68f4d2a5d56be98a0a04324de2456c9acf8defc5b44bdc1e06 type=CONTAINER_STARTED_EVENT Jul 7 00:47:32.287442 containerd[1927]: time="2025-07-07T00:47:32.287447104Z" level=warning msg="container event discarded" container=54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa type=CONTAINER_CREATED_EVENT Jul 7 00:47:32.287902 containerd[1927]: time="2025-07-07T00:47:32.287468577Z" level=warning msg="container event discarded" container=b9879df2d59dd194b785353d6a956b616abf953e3f543371e096c6b05056bef8 type=CONTAINER_CREATED_EVENT Jul 7 00:47:32.287902 containerd[1927]: time="2025-07-07T00:47:32.287488059Z" level=warning msg="container event discarded" container=b9879df2d59dd194b785353d6a956b616abf953e3f543371e096c6b05056bef8 type=CONTAINER_STARTED_EVENT Jul 7 00:47:32.287902 containerd[1927]: time="2025-07-07T00:47:32.287512354Z" level=warning msg="container event discarded" container=e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921 type=CONTAINER_CREATED_EVENT Jul 7 00:47:32.298942 containerd[1927]: time="2025-07-07T00:47:32.298801216Z" level=warning msg="container event discarded" container=6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3 type=CONTAINER_CREATED_EVENT Jul 7 00:47:32.382423 containerd[1927]: time="2025-07-07T00:47:32.382303223Z" level=warning msg="container event discarded" container=54fbb99fcfe800ecbea843fdebf5294d6804c3c1603fd4214aa1df9deaf57dfa type=CONTAINER_STARTED_EVENT Jul 7 00:47:32.382423 containerd[1927]: time="2025-07-07T00:47:32.382394561Z" level=warning msg="container event discarded" container=6503db8268ef049c2c5dc78eccffc46156a3a822db5757986bb2bdc30b96f0a3 type=CONTAINER_STARTED_EVENT Jul 7 00:47:32.382423 containerd[1927]: time="2025-07-07T00:47:32.382425195Z" level=warning msg="container event discarded" container=e485cc3762d06d0adbeafbf538a2d076366945308cd6423cf825eed81c2ec921 type=CONTAINER_STARTED_EVENT Jul 7 00:47:36.372373 containerd[1927]: time="2025-07-07T00:47:36.372343912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"d8ac1a55c6d21905da25389fb07594cc33075b0fb7e4090dda4e0c7aefab938b\" pid:7754 exited_at:{seconds:1751849256 nanos:372158374}" Jul 7 00:47:38.686774 containerd[1927]: time="2025-07-07T00:47:38.686743096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"ccfe6f6e51985aa9cfacc2e7993be83cf735f3f7247744cfd1af320e50518a7b\" pid:7785 exited_at:{seconds:1751849258 nanos:686432929}" Jul 7 
00:47:41.419871 containerd[1927]: time="2025-07-07T00:47:41.419696017Z" level=warning msg="container event discarded" container=80bba3a4c34ffa36826f4f0a52a7e77cfca0d58349930521b82ab254492a8f9d type=CONTAINER_CREATED_EVENT Jul 7 00:47:41.419871 containerd[1927]: time="2025-07-07T00:47:41.419810195Z" level=warning msg="container event discarded" container=80bba3a4c34ffa36826f4f0a52a7e77cfca0d58349930521b82ab254492a8f9d type=CONTAINER_STARTED_EVENT Jul 7 00:47:41.419871 containerd[1927]: time="2025-07-07T00:47:41.419836934Z" level=warning msg="container event discarded" container=2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0 type=CONTAINER_CREATED_EVENT Jul 7 00:47:41.463442 containerd[1927]: time="2025-07-07T00:47:41.463298376Z" level=warning msg="container event discarded" container=2f17283820d3ae686fa2fa70019ba5255a495775905b0d16dc7c51b3baf9a5e0 type=CONTAINER_STARTED_EVENT Jul 7 00:47:41.678115 containerd[1927]: time="2025-07-07T00:47:41.678030212Z" level=warning msg="container event discarded" container=131a9d5140792f47f9a56507f8c8cf4454956cf90f2751d8945c618e70583647 type=CONTAINER_CREATED_EVENT Jul 7 00:47:41.678340 containerd[1927]: time="2025-07-07T00:47:41.678110605Z" level=warning msg="container event discarded" container=131a9d5140792f47f9a56507f8c8cf4454956cf90f2751d8945c618e70583647 type=CONTAINER_STARTED_EVENT Jul 7 00:47:43.080442 containerd[1927]: time="2025-07-07T00:47:43.080311546Z" level=warning msg="container event discarded" container=0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763 type=CONTAINER_CREATED_EVENT Jul 7 00:47:43.113780 containerd[1927]: time="2025-07-07T00:47:43.113639239Z" level=warning msg="container event discarded" container=0658586d07f34897349a2efa9752884f212472fa191aa32298dd6fca63644763 type=CONTAINER_STARTED_EVENT Jul 7 00:47:46.077482 containerd[1927]: time="2025-07-07T00:47:46.077433924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"c2535f40bf252e4de330fedfd4bdbc936e4c7c7ff4435161087afe4e85c91f69\" pid:7823 exited_at:{seconds:1751849266 nanos:77250677}" Jul 7 00:47:49.173145 containerd[1927]: time="2025-07-07T00:47:49.173107331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"d7823a3d357879818a917d94c4545087cdace78330ce980d9dc3a57fa11ac548\" pid:7857 exited_at:{seconds:1751849269 nanos:172965954}" Jul 7 00:47:49.458363 containerd[1927]: time="2025-07-07T00:47:49.458335193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"c25f2e75d50d571c42d5fafdd517a3df5b138342daf2a1dff2043792d718c4a9\" pid:7879 exited_at:{seconds:1751849269 nanos:458183457}" Jul 7 00:47:50.236723 containerd[1927]: time="2025-07-07T00:47:50.236589964Z" level=warning msg="container event discarded" container=455ae5d0d7fc875f4788cb3a0157a4f9fef045bd4a8a322f9394ac7473187d5e type=CONTAINER_CREATED_EVENT Jul 7 00:47:50.236723 containerd[1927]: time="2025-07-07T00:47:50.236710328Z" level=warning msg="container event discarded" container=455ae5d0d7fc875f4788cb3a0157a4f9fef045bd4a8a322f9394ac7473187d5e type=CONTAINER_STARTED_EVENT Jul 7 00:47:50.429304 containerd[1927]: time="2025-07-07T00:47:50.429223332Z" level=warning msg="container event discarded" container=0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e type=CONTAINER_CREATED_EVENT Jul 7 00:47:50.429304 
containerd[1927]: time="2025-07-07T00:47:50.429268860Z" level=warning msg="container event discarded" container=0db42cc1d88dc33a65df34ab1dfe3577bbde222a07cf67c122a6f5a01a22119e type=CONTAINER_STARTED_EVENT Jul 7 00:47:52.360397 containerd[1927]: time="2025-07-07T00:47:52.360278541Z" level=warning msg="container event discarded" container=4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a type=CONTAINER_CREATED_EVENT Jul 7 00:47:52.412915 containerd[1927]: time="2025-07-07T00:47:52.412854565Z" level=warning msg="container event discarded" container=4dbe2ad8134da6dfd810203f52292e84dec0c1ff9852d6fab67ea42be194c09a type=CONTAINER_STARTED_EVENT Jul 7 00:47:53.744927 containerd[1927]: time="2025-07-07T00:47:53.744807388Z" level=warning msg="container event discarded" container=6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b type=CONTAINER_CREATED_EVENT Jul 7 00:47:53.800405 containerd[1927]: time="2025-07-07T00:47:53.800288402Z" level=warning msg="container event discarded" container=6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b type=CONTAINER_STARTED_EVENT Jul 7 00:47:54.749956 containerd[1927]: time="2025-07-07T00:47:54.749837280Z" level=warning msg="container event discarded" container=6503412d4120e22cc4a1e29cbf2a24471bb915f483f0dad4bcbce217352dde3b type=CONTAINER_STOPPED_EVENT Jul 7 00:47:57.124972 containerd[1927]: time="2025-07-07T00:47:57.124863036Z" level=warning msg="container event discarded" container=c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec type=CONTAINER_CREATED_EVENT Jul 7 00:47:57.161574 containerd[1927]: time="2025-07-07T00:47:57.161479886Z" level=warning msg="container event discarded" container=c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec type=CONTAINER_STARTED_EVENT Jul 7 00:47:58.175758 containerd[1927]: time="2025-07-07T00:47:58.175608234Z" level=warning msg="container event discarded" container=c86d750dc4eb1de15e2da6c310911810fb180a6d717c1ce27a3212fc7e0574ec type=CONTAINER_STOPPED_EVENT Jul 7 00:48:02.348536 containerd[1927]: time="2025-07-07T00:48:02.348300018Z" level=warning msg="container event discarded" container=04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be type=CONTAINER_CREATED_EVENT Jul 7 00:48:02.388698 containerd[1927]: time="2025-07-07T00:48:02.388640008Z" level=warning msg="container event discarded" container=04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be type=CONTAINER_STARTED_EVENT Jul 7 00:48:03.474954 containerd[1927]: time="2025-07-07T00:48:03.474889694Z" level=warning msg="container event discarded" container=26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e type=CONTAINER_CREATED_EVENT Jul 7 00:48:03.474954 containerd[1927]: time="2025-07-07T00:48:03.474915746Z" level=warning msg="container event discarded" container=26bf1a2c1a7cd232e48b9fd1a752790e93136f442aced1561696f157a03f5f0e type=CONTAINER_STARTED_EVENT Jul 7 00:48:05.037896 containerd[1927]: time="2025-07-07T00:48:05.037713444Z" level=warning msg="container event discarded" container=79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05 type=CONTAINER_CREATED_EVENT Jul 7 00:48:05.149569 containerd[1927]: time="2025-07-07T00:48:05.149433690Z" level=warning msg="container event discarded" container=79d48b6d5e4654d74e82700ace530a067ea29c7ab34df99d2cf10f6252b27c05 type=CONTAINER_STARTED_EVENT Jul 7 00:48:07.379870 containerd[1927]: time="2025-07-07T00:48:07.379732303Z" level=warning msg="container event discarded" 
container=6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074 type=CONTAINER_CREATED_EVENT Jul 7 00:48:07.431190 containerd[1927]: time="2025-07-07T00:48:07.431088351Z" level=warning msg="container event discarded" container=6af3ce7ca9bbd6987298df671864c55d046b975b3bb78b5c35b0d0ecf599e074 type=CONTAINER_STARTED_EVENT Jul 7 00:48:08.673580 containerd[1927]: time="2025-07-07T00:48:08.673553026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"21b1ed596994b7f8b6e59a1ca2b2915afc63ce131b550c0c7404da3cfcb89131\" pid:7924 exited_at:{seconds:1751849288 nanos:673381167}" Jul 7 00:48:09.013534 containerd[1927]: time="2025-07-07T00:48:09.013416113Z" level=warning msg="container event discarded" container=70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec type=CONTAINER_CREATED_EVENT Jul 7 00:48:09.013534 containerd[1927]: time="2025-07-07T00:48:09.013486881Z" level=warning msg="container event discarded" container=70ced8502e78c59a4c3819b542bd1c48644b8bea889bbc6bb466f3b3c5e45cec type=CONTAINER_STARTED_EVENT Jul 7 00:48:10.013076 containerd[1927]: time="2025-07-07T00:48:10.012911725Z" level=warning msg="container event discarded" container=4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3 type=CONTAINER_CREATED_EVENT Jul 7 00:48:10.013076 containerd[1927]: time="2025-07-07T00:48:10.013008917Z" level=warning msg="container event discarded" container=4bd802a4e1747317da264532d3bb1412c9aa0a1c94ce8bb05492ad275fd723e3 type=CONTAINER_STARTED_EVENT Jul 7 00:48:10.097864 containerd[1927]: time="2025-07-07T00:48:10.097690936Z" level=warning msg="container event discarded" container=9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488 type=CONTAINER_CREATED_EVENT Jul 7 00:48:10.097864 containerd[1927]: time="2025-07-07T00:48:10.097808213Z" level=warning msg="container event discarded" container=9eec9de5f9d6e18074c6ae7af29cb275dc26b4a051f72604ddc35e903e258488 type=CONTAINER_STARTED_EVENT Jul 7 00:48:11.902233 containerd[1927]: time="2025-07-07T00:48:11.902071455Z" level=warning msg="container event discarded" container=a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4 type=CONTAINER_CREATED_EVENT Jul 7 00:48:11.951651 containerd[1927]: time="2025-07-07T00:48:11.951520965Z" level=warning msg="container event discarded" container=a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4 type=CONTAINER_STARTED_EVENT Jul 7 00:48:11.999944 containerd[1927]: time="2025-07-07T00:48:11.999851019Z" level=warning msg="container event discarded" container=e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527 type=CONTAINER_CREATED_EVENT Jul 7 00:48:11.999944 containerd[1927]: time="2025-07-07T00:48:11.999928312Z" level=warning msg="container event discarded" container=e445e93d04b92a0cb44245cf6d0edb4a027fce1bb98ec044a8cdef5e60aa6527 type=CONTAINER_STARTED_EVENT Jul 7 00:48:12.000239 containerd[1927]: time="2025-07-07T00:48:11.999957731Z" level=warning msg="container event discarded" container=276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f type=CONTAINER_CREATED_EVENT Jul 7 00:48:12.050096 containerd[1927]: time="2025-07-07T00:48:12.050010417Z" level=warning msg="container event discarded" container=276b7893086d01ebcac6c1b75e81d9bd1c6c70b55e5066340aaf0cf43fac008f type=CONTAINER_STARTED_EVENT Jul 7 00:48:12.083010 containerd[1927]: time="2025-07-07T00:48:12.082910154Z" level=warning msg="container event discarded" 
container=3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944 type=CONTAINER_CREATED_EVENT Jul 7 00:48:12.083010 containerd[1927]: time="2025-07-07T00:48:12.082995580Z" level=warning msg="container event discarded" container=3ffd4073fd3d7719b77f69377d4ad9238d1828720ed97461874d3a32ee739944 type=CONTAINER_STARTED_EVENT Jul 7 00:48:13.061555 containerd[1927]: time="2025-07-07T00:48:13.061419727Z" level=warning msg="container event discarded" container=1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c type=CONTAINER_CREATED_EVENT Jul 7 00:48:13.061555 containerd[1927]: time="2025-07-07T00:48:13.061499830Z" level=warning msg="container event discarded" container=1271adcd2151596c3722418285c633febf3413cfbe95d7c5af510c26d995327c type=CONTAINER_STARTED_EVENT Jul 7 00:48:14.136398 containerd[1927]: time="2025-07-07T00:48:14.136265499Z" level=warning msg="container event discarded" container=0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2 type=CONTAINER_CREATED_EVENT Jul 7 00:48:14.136398 containerd[1927]: time="2025-07-07T00:48:14.136334843Z" level=warning msg="container event discarded" container=0d81d98177f6dfb31123485277e6a6e33d554d56097934fe1cb1b22db9d8abd2 type=CONTAINER_STARTED_EVENT Jul 7 00:48:14.136398 containerd[1927]: time="2025-07-07T00:48:14.136359960Z" level=warning msg="container event discarded" container=58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343 type=CONTAINER_CREATED_EVENT Jul 7 00:48:14.171865 containerd[1927]: time="2025-07-07T00:48:14.171784370Z" level=warning msg="container event discarded" container=58c1245a9ef06dc9ff9ca5620954509e9ddaff1b65d415b9b8122e6e14978343 type=CONTAINER_STARTED_EVENT Jul 7 00:48:14.513913 containerd[1927]: time="2025-07-07T00:48:14.513782327Z" level=warning msg="container event discarded" container=4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930 type=CONTAINER_CREATED_EVENT Jul 7 00:48:14.608284 containerd[1927]: time="2025-07-07T00:48:14.608240820Z" level=warning msg="container event discarded" container=4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930 type=CONTAINER_STARTED_EVENT Jul 7 00:48:16.134144 containerd[1927]: time="2025-07-07T00:48:16.134072796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"ba070a757af12bf55563bf86c6badfc7d9aa9a42d98198f0bd5bb20f80bfd160\" pid:7960 exited_at:{seconds:1751849296 nanos:133690315}" Jul 7 00:48:17.132824 containerd[1927]: time="2025-07-07T00:48:17.132633854Z" level=warning msg="container event discarded" container=dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097 type=CONTAINER_CREATED_EVENT Jul 7 00:48:17.183365 containerd[1927]: time="2025-07-07T00:48:17.183200569Z" level=warning msg="container event discarded" container=dce168cc4f5f26669162c63f5fd09c7e1059aa3dc147b254767e3d615eecd097 type=CONTAINER_STARTED_EVENT Jul 7 00:48:18.513018 containerd[1927]: time="2025-07-07T00:48:18.512866495Z" level=warning msg="container event discarded" container=e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26 type=CONTAINER_CREATED_EVENT Jul 7 00:48:18.559389 containerd[1927]: time="2025-07-07T00:48:18.559241251Z" level=warning msg="container event discarded" container=e2023a6baf6c44a49b188a7cfe5782bdddb1a80ffb02be54e5feaadca77cfd26 type=CONTAINER_STARTED_EVENT Jul 7 00:48:18.892205 containerd[1927]: time="2025-07-07T00:48:18.891923333Z" level=warning msg="container event discarded" 
container=a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b type=CONTAINER_CREATED_EVENT Jul 7 00:48:18.989926 containerd[1927]: time="2025-07-07T00:48:18.989769393Z" level=warning msg="container event discarded" container=a62f5cda7bbda862d4f01da19ffb2fee1488cc797788f464a75a0d061d0ed29b type=CONTAINER_STARTED_EVENT Jul 7 00:48:19.178702 containerd[1927]: time="2025-07-07T00:48:19.178678735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"9856aad9a5d6016a25209ad41309c25a8a99cceac25ebfbbde37bcd91ebf0bd9\" pid:7991 exited_at:{seconds:1751849299 nanos:178514580}" Jul 7 00:48:20.342653 containerd[1927]: time="2025-07-07T00:48:20.342515684Z" level=warning msg="container event discarded" container=28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914 type=CONTAINER_CREATED_EVENT Jul 7 00:48:20.383205 containerd[1927]: time="2025-07-07T00:48:20.383062587Z" level=warning msg="container event discarded" container=28c76bb1876e77a3ec692386bfe441ac25a1e1c670cbb93f975c3df5cdbe7914 type=CONTAINER_STARTED_EVENT Jul 7 00:48:36.369286 containerd[1927]: time="2025-07-07T00:48:36.369250418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"b88267fc9db04bb98d0a743f0d223f7931e64d069e990f5d3532e8c26de4ebf5\" pid:8023 exited_at:{seconds:1751849316 nanos:369032070}" Jul 7 00:48:38.662063 containerd[1927]: time="2025-07-07T00:48:38.662038308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"edec21421b290f605f9f23bdb9dcab3fd67683d6ea51f51104661eadc4a22b30\" pid:8055 exited_at:{seconds:1751849318 nanos:661855850}" Jul 7 00:48:46.137413 containerd[1927]: time="2025-07-07T00:48:46.137385132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"010fc2f693535f8a81c0e248a50e12ed87f664ae66d3009995059de76d1815a1\" pid:8092 exited_at:{seconds:1751849326 nanos:137187879}" Jul 7 00:48:49.174784 containerd[1927]: time="2025-07-07T00:48:49.174759698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"827f128378fb5489765bca403fd74b5c3ba694b3c0b54025c8b5aecc464ccb8f\" pid:8123 exited_at:{seconds:1751849329 nanos:174605872}" Jul 7 00:48:49.476060 containerd[1927]: time="2025-07-07T00:48:49.476038356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"c008154436e725637aa6b53015b1bc7126a887caefe761b7906c432154574fc9\" pid:8144 exited_at:{seconds:1751849329 nanos:475923814}" Jul 7 00:48:57.420061 systemd[1]: Started sshd@9-139.178.70.5:22-139.178.68.195:40266.service - OpenSSH per-connection server daemon (139.178.68.195:40266). Jul 7 00:48:57.474334 sshd[8159]: Accepted publickey for core from 139.178.68.195 port 40266 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:48:57.474914 sshd-session[8159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:48:57.477824 systemd-logind[1917]: New session 12 of user core. Jul 7 00:48:57.495245 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 7 00:48:57.607219 sshd[8161]: Connection closed by 139.178.68.195 port 40266 Jul 7 00:48:57.607358 sshd-session[8159]: pam_unix(sshd:session): session closed for user core Jul 7 00:48:57.609081 systemd[1]: sshd@9-139.178.70.5:22-139.178.68.195:40266.service: Deactivated successfully. Jul 7 00:48:57.610091 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:48:57.610824 systemd-logind[1917]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:48:57.611490 systemd-logind[1917]: Removed session 12. Jul 7 00:49:02.621844 systemd[1]: Started sshd@10-139.178.70.5:22-139.178.68.195:54656.service - OpenSSH per-connection server daemon (139.178.68.195:54656). Jul 7 00:49:02.671140 sshd[8192]: Accepted publickey for core from 139.178.68.195 port 54656 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:02.672235 sshd-session[8192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:02.675964 systemd-logind[1917]: New session 13 of user core. Jul 7 00:49:02.694255 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:49:02.840105 sshd[8194]: Connection closed by 139.178.68.195 port 54656 Jul 7 00:49:02.840279 sshd-session[8192]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:02.842080 systemd[1]: sshd@10-139.178.70.5:22-139.178.68.195:54656.service: Deactivated successfully. Jul 7 00:49:02.843114 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:49:02.843902 systemd-logind[1917]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:49:02.844735 systemd-logind[1917]: Removed session 13. Jul 7 00:49:07.860626 systemd[1]: Started sshd@11-139.178.70.5:22-139.178.68.195:54668.service - OpenSSH per-connection server daemon (139.178.68.195:54668). Jul 7 00:49:07.910200 sshd[8221]: Accepted publickey for core from 139.178.68.195 port 54668 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:07.910813 sshd-session[8221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:07.913390 systemd-logind[1917]: New session 14 of user core. Jul 7 00:49:07.929397 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:49:08.016261 sshd[8223]: Connection closed by 139.178.68.195 port 54668 Jul 7 00:49:08.016441 sshd-session[8221]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:08.032452 systemd[1]: sshd@11-139.178.70.5:22-139.178.68.195:54668.service: Deactivated successfully. Jul 7 00:49:08.033400 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:49:08.033965 systemd-logind[1917]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:49:08.035224 systemd[1]: Started sshd@12-139.178.70.5:22-139.178.68.195:54684.service - OpenSSH per-connection server daemon (139.178.68.195:54684). Jul 7 00:49:08.035912 systemd-logind[1917]: Removed session 14. Jul 7 00:49:08.082613 sshd[8250]: Accepted publickey for core from 139.178.68.195 port 54684 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:08.083394 sshd-session[8250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:08.086754 systemd-logind[1917]: New session 15 of user core. Jul 7 00:49:08.096327 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 7 00:49:08.200096 sshd[8252]: Connection closed by 139.178.68.195 port 54684 Jul 7 00:49:08.200284 sshd-session[8250]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:08.217857 systemd[1]: sshd@12-139.178.70.5:22-139.178.68.195:54684.service: Deactivated successfully. Jul 7 00:49:08.219704 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:49:08.220145 systemd-logind[1917]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:49:08.221386 systemd[1]: Started sshd@13-139.178.70.5:22-139.178.68.195:51974.service - OpenSSH per-connection server daemon (139.178.68.195:51974). Jul 7 00:49:08.222014 systemd-logind[1917]: Removed session 15. Jul 7 00:49:08.259297 sshd[8275]: Accepted publickey for core from 139.178.68.195 port 51974 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:08.259933 sshd-session[8275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:08.263046 systemd-logind[1917]: New session 16 of user core. Jul 7 00:49:08.274318 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:49:08.407653 sshd[8278]: Connection closed by 139.178.68.195 port 51974 Jul 7 00:49:08.407847 sshd-session[8275]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:08.409815 systemd[1]: sshd@13-139.178.70.5:22-139.178.68.195:51974.service: Deactivated successfully. Jul 7 00:49:08.410915 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:49:08.412001 systemd-logind[1917]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:49:08.412691 systemd-logind[1917]: Removed session 16. Jul 7 00:49:08.734769 containerd[1927]: time="2025-07-07T00:49:08.734741628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"320fc616180471b59e3b1c071841886202997f2998ae59e105e77aabc0b28adc\" pid:8314 exited_at:{seconds:1751849348 nanos:734484822}" Jul 7 00:49:13.424878 systemd[1]: Started sshd@14-139.178.70.5:22-139.178.68.195:51976.service - OpenSSH per-connection server daemon (139.178.68.195:51976). Jul 7 00:49:13.485118 sshd[8341]: Accepted publickey for core from 139.178.68.195 port 51976 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:13.485817 sshd-session[8341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:13.488445 systemd-logind[1917]: New session 17 of user core. Jul 7 00:49:13.494407 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:49:13.581423 sshd[8343]: Connection closed by 139.178.68.195 port 51976 Jul 7 00:49:13.581600 sshd-session[8341]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:13.583507 systemd[1]: sshd@14-139.178.70.5:22-139.178.68.195:51976.service: Deactivated successfully. Jul 7 00:49:13.584581 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:49:13.585389 systemd-logind[1917]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:49:13.586096 systemd-logind[1917]: Removed session 17. 
Jul 7 00:49:16.117706 containerd[1927]: time="2025-07-07T00:49:16.117643182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"edade65ec9bbfa5c545c0e846fee227f42d8fef37005168aa869120d36bdc058\" pid:8383 exited_at:{seconds:1751849356 nanos:117378557}" Jul 7 00:49:18.601746 systemd[1]: Started sshd@15-139.178.70.5:22-139.178.68.195:44956.service - OpenSSH per-connection server daemon (139.178.68.195:44956). Jul 7 00:49:18.639204 sshd[8406]: Accepted publickey for core from 139.178.68.195 port 44956 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:18.639894 sshd-session[8406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:18.642825 systemd-logind[1917]: New session 18 of user core. Jul 7 00:49:18.655308 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:49:18.744465 sshd[8408]: Connection closed by 139.178.68.195 port 44956 Jul 7 00:49:18.744605 sshd-session[8406]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:18.746295 systemd[1]: sshd@15-139.178.70.5:22-139.178.68.195:44956.service: Deactivated successfully. Jul 7 00:49:18.747294 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:49:18.748008 systemd-logind[1917]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:49:18.748671 systemd-logind[1917]: Removed session 18. Jul 7 00:49:19.222736 containerd[1927]: time="2025-07-07T00:49:19.222706975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a831b9077be89bceb30883d469b8213b24df260a29b21552350a595f889b41c4\" id:\"6b13d0c45f6c4916f929e0e1a061a472d687bfbe1c73fb8210087f02f8cf0790\" pid:8444 exited_at:{seconds:1751849359 nanos:222576076}" Jul 7 00:49:23.770062 systemd[1]: Started sshd@16-139.178.70.5:22-139.178.68.195:44960.service - OpenSSH per-connection server daemon (139.178.68.195:44960). Jul 7 00:49:23.841519 sshd[8456]: Accepted publickey for core from 139.178.68.195 port 44960 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:23.842455 sshd-session[8456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:23.845926 systemd-logind[1917]: New session 19 of user core. Jul 7 00:49:23.862329 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:49:23.991708 sshd[8458]: Connection closed by 139.178.68.195 port 44960 Jul 7 00:49:23.991872 sshd-session[8456]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:23.993980 systemd[1]: sshd@16-139.178.70.5:22-139.178.68.195:44960.service: Deactivated successfully. Jul 7 00:49:23.994882 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:49:23.995381 systemd-logind[1917]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:49:23.995977 systemd-logind[1917]: Removed session 19. Jul 7 00:49:29.008817 systemd[1]: Started sshd@17-139.178.70.5:22-139.178.68.195:43106.service - OpenSSH per-connection server daemon (139.178.68.195:43106). Jul 7 00:49:29.064842 sshd[8484]: Accepted publickey for core from 139.178.68.195 port 43106 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:29.065479 sshd-session[8484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:29.068388 systemd-logind[1917]: New session 20 of user core. Jul 7 00:49:29.082367 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 7 00:49:29.200039 sshd[8486]: Connection closed by 139.178.68.195 port 43106 Jul 7 00:49:29.200231 sshd-session[8484]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:29.224311 systemd[1]: sshd@17-139.178.70.5:22-139.178.68.195:43106.service: Deactivated successfully. Jul 7 00:49:29.225103 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:49:29.225617 systemd-logind[1917]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:49:29.226677 systemd[1]: Started sshd@18-139.178.70.5:22-139.178.68.195:43120.service - OpenSSH per-connection server daemon (139.178.68.195:43120). Jul 7 00:49:29.227327 systemd-logind[1917]: Removed session 20. Jul 7 00:49:29.264203 sshd[8511]: Accepted publickey for core from 139.178.68.195 port 43120 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:29.264868 sshd-session[8511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:29.267580 systemd-logind[1917]: New session 21 of user core. Jul 7 00:49:29.286296 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:49:29.466590 sshd[8515]: Connection closed by 139.178.68.195 port 43120 Jul 7 00:49:29.467275 sshd-session[8511]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:29.498386 systemd[1]: sshd@18-139.178.70.5:22-139.178.68.195:43120.service: Deactivated successfully. Jul 7 00:49:29.502212 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:49:29.504289 systemd-logind[1917]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:49:29.510515 systemd[1]: Started sshd@19-139.178.70.5:22-139.178.68.195:43132.service - OpenSSH per-connection server daemon (139.178.68.195:43132). Jul 7 00:49:29.512507 systemd-logind[1917]: Removed session 21. Jul 7 00:49:29.606962 sshd[8535]: Accepted publickey for core from 139.178.68.195 port 43132 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:29.607680 sshd-session[8535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:29.610834 systemd-logind[1917]: New session 22 of user core. Jul 7 00:49:29.625283 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:49:30.463090 sshd[8537]: Connection closed by 139.178.68.195 port 43132 Jul 7 00:49:30.463332 sshd-session[8535]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:30.481075 systemd[1]: sshd@19-139.178.70.5:22-139.178.68.195:43132.service: Deactivated successfully. Jul 7 00:49:30.482287 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:49:30.482935 systemd-logind[1917]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:49:30.484610 systemd[1]: Started sshd@20-139.178.70.5:22-139.178.68.195:43138.service - OpenSSH per-connection server daemon (139.178.68.195:43138). Jul 7 00:49:30.485157 systemd-logind[1917]: Removed session 22. Jul 7 00:49:30.577188 sshd[8584]: Accepted publickey for core from 139.178.68.195 port 43138 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:30.578443 sshd-session[8584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:30.583304 systemd-logind[1917]: New session 23 of user core. Jul 7 00:49:30.601312 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 7 00:49:30.794785 sshd[8586]: Connection closed by 139.178.68.195 port 43138 Jul 7 00:49:30.794924 sshd-session[8584]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:30.821381 systemd[1]: sshd@20-139.178.70.5:22-139.178.68.195:43138.service: Deactivated successfully. Jul 7 00:49:30.822810 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:49:30.823484 systemd-logind[1917]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:49:30.825746 systemd[1]: Started sshd@21-139.178.70.5:22-139.178.68.195:43152.service - OpenSSH per-connection server daemon (139.178.68.195:43152). Jul 7 00:49:30.826370 systemd-logind[1917]: Removed session 23. Jul 7 00:49:30.879815 sshd[8609]: Accepted publickey for core from 139.178.68.195 port 43152 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:30.880807 sshd-session[8609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:30.884250 systemd-logind[1917]: New session 24 of user core. Jul 7 00:49:30.900300 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:49:30.986228 sshd[8611]: Connection closed by 139.178.68.195 port 43152 Jul 7 00:49:30.986432 sshd-session[8609]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:30.988102 systemd[1]: sshd@21-139.178.70.5:22-139.178.68.195:43152.service: Deactivated successfully. Jul 7 00:49:30.989069 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:49:30.989776 systemd-logind[1917]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:49:30.990426 systemd-logind[1917]: Removed session 24. Jul 7 00:49:36.012268 systemd[1]: Started sshd@22-139.178.70.5:22-139.178.68.195:43158.service - OpenSSH per-connection server daemon (139.178.68.195:43158). Jul 7 00:49:36.105380 sshd[8647]: Accepted publickey for core from 139.178.68.195 port 43158 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:36.106100 sshd-session[8647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:36.108999 systemd-logind[1917]: New session 25 of user core. Jul 7 00:49:36.124228 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:49:36.206854 sshd[8649]: Connection closed by 139.178.68.195 port 43158 Jul 7 00:49:36.207026 sshd-session[8647]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:36.208774 systemd[1]: sshd@22-139.178.70.5:22-139.178.68.195:43158.service: Deactivated successfully. Jul 7 00:49:36.209756 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:49:36.210700 systemd-logind[1917]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:49:36.211360 systemd-logind[1917]: Removed session 25. 
Jul 7 00:49:36.353255 containerd[1927]: time="2025-07-07T00:49:36.353195651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4412a9f91771635f44286358ad343bc2056e5df585f9148df76d8a4075fe8930\" id:\"0c1d196b36c0fa531f49539b6b55ced2876accdf787b52721a0668de082b3457\" pid:8685 exited_at:{seconds:1751849376 nanos:353008233}" Jul 7 00:49:38.720065 containerd[1927]: time="2025-07-07T00:49:38.720012469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d77f822e697144d57f6d9da9d9c23b04ffad6b2351b6a7e5c2d85db54d15be\" id:\"a146cb1ed0c906a5ef4f96281d93c36ed3dd2eb2777259d54137380540bc7818\" pid:8715 exited_at:{seconds:1751849378 nanos:719834886}" Jul 7 00:49:41.240068 systemd[1]: Started sshd@23-139.178.70.5:22-139.178.68.195:38526.service - OpenSSH per-connection server daemon (139.178.68.195:38526). Jul 7 00:49:41.306424 sshd[8741]: Accepted publickey for core from 139.178.68.195 port 38526 ssh2: RSA SHA256:K1qJvhav5tVVb6ayUhwp/y2htnZy8CFX8zytprp0410 Jul 7 00:49:41.307251 sshd-session[8741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:49:41.310573 systemd-logind[1917]: New session 26 of user core. Jul 7 00:49:41.318372 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:49:41.408132 sshd[8743]: Connection closed by 139.178.68.195 port 38526 Jul 7 00:49:41.408307 sshd-session[8741]: pam_unix(sshd:session): session closed for user core Jul 7 00:49:41.410049 systemd[1]: sshd@23-139.178.70.5:22-139.178.68.195:38526.service: Deactivated successfully. Jul 7 00:49:41.411030 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:49:41.411793 systemd-logind[1917]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:49:41.412557 systemd-logind[1917]: Removed session 26.