Apr 30 13:56:41.481282 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025
Apr 30 13:56:41.481299 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 13:56:41.481306 kernel: BIOS-provided physical RAM map:
Apr 30 13:56:41.481312 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Apr 30 13:56:41.481316 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Apr 30 13:56:41.481320 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Apr 30 13:56:41.481326 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Apr 30 13:56:41.481330 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Apr 30 13:56:41.481335 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a73fff] usable
Apr 30 13:56:41.481339 kernel: BIOS-e820: [mem 0x0000000081a74000-0x0000000081a74fff] ACPI NVS
Apr 30 13:56:41.481344 kernel: BIOS-e820: [mem 0x0000000081a75000-0x0000000081a75fff] reserved
Apr 30 13:56:41.481348 kernel: BIOS-e820: [mem 0x0000000081a76000-0x000000008afcdfff] usable
Apr 30 13:56:41.481354 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Apr 30 13:56:41.481359 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Apr 30 13:56:41.481365 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Apr 30 13:56:41.481370 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Apr 30 13:56:41.481376 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Apr 30 13:56:41.481381 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Apr 30 13:56:41.481386 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 13:56:41.481391 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Apr 30 13:56:41.481396 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Apr 30 13:56:41.481401 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Apr 30 13:56:41.481406 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Apr 30 13:56:41.481411 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Apr 30 13:56:41.481416 kernel: NX (Execute Disable) protection: active
Apr 30 13:56:41.481421 kernel: APIC: Static calls initialized
Apr 30 13:56:41.481426 kernel: SMBIOS 3.2.1 present.
Apr 30 13:56:41.481432 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
Apr 30 13:56:41.481438 kernel: tsc: Detected 3400.000 MHz processor
Apr 30 13:56:41.481443 kernel: tsc: Detected 3399.906 MHz TSC
Apr 30 13:56:41.481448 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 13:56:41.481454 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 13:56:41.481459 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Apr 30 13:56:41.481465 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Apr 30 13:56:41.481470 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 13:56:41.481475 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Apr 30 13:56:41.481480 kernel: Using GB pages for direct mapping
Apr 30 13:56:41.481486 kernel: ACPI: Early table checksum verification disabled
Apr 30 13:56:41.481492 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Apr 30 13:56:41.481498 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Apr 30 13:56:41.481505 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Apr 30 13:56:41.481511 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Apr 30 13:56:41.481516 kernel: ACPI: FACS 0x000000008C66DF80 000040
Apr 30 13:56:41.481522 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Apr 30 13:56:41.481528 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Apr 30 13:56:41.481543 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Apr 30 13:56:41.481549 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Apr 30 13:56:41.481554 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Apr 30 13:56:41.481560 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Apr 30 13:56:41.481565 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Apr 30 13:56:41.481570 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Apr 30 13:56:41.481577 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 13:56:41.481583 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Apr 30 13:56:41.481589 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Apr 30 13:56:41.481594 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 13:56:41.481600 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 13:56:41.481605 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Apr 30 13:56:41.481611 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Apr 30 13:56:41.481616 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Apr 30 13:56:41.481622 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Apr 30 13:56:41.481629 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Apr 30 13:56:41.481634 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Apr 30 13:56:41.481640 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Apr 30 13:56:41.481645 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Apr 30 13:56:41.481651 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Apr 30 13:56:41.481657 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Apr 30 13:56:41.481662 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Apr 30 13:56:41.481668 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Apr 30 13:56:41.481674 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Apr 30 13:56:41.481680 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Apr 30 13:56:41.481685 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Apr 30 13:56:41.481691 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Apr 30 13:56:41.481696 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Apr 30 13:56:41.481702 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Apr 30 13:56:41.481707 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Apr 30 13:56:41.481713 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Apr 30 13:56:41.481718 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Apr 30 13:56:41.481725 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Apr 30 13:56:41.481730 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Apr 30 13:56:41.481736 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Apr 30 13:56:41.481741 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Apr 30 13:56:41.481747 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Apr 30 13:56:41.481752 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Apr 30 13:56:41.481758 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Apr 30 13:56:41.481763 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Apr 30 13:56:41.481768 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Apr 30 13:56:41.481775 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Apr 30 13:56:41.481781 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Apr 30 13:56:41.481786 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Apr 30 13:56:41.481792 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Apr 30 13:56:41.481797 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Apr 30 13:56:41.481803 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Apr 30 13:56:41.481808 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Apr 30 13:56:41.481814 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Apr 30 13:56:41.481819 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Apr 30 13:56:41.481825 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Apr 30 13:56:41.481831 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Apr 30 13:56:41.481837 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Apr 30 13:56:41.481842 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Apr 30 13:56:41.481848 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Apr 30 13:56:41.481853 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Apr 30 13:56:41.481859 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Apr 30 13:56:41.481864 kernel: No NUMA configuration found
Apr 30 13:56:41.481870 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Apr 30 13:56:41.481875 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Apr 30 13:56:41.481882 kernel: Zone ranges:
Apr 30 13:56:41.481888 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 13:56:41.481893 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 30 13:56:41.481899 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Apr 30 13:56:41.481904 kernel: Movable zone start for each node
Apr 30 13:56:41.481910 kernel: Early memory node ranges
Apr 30 13:56:41.481915 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Apr 30 13:56:41.481921 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Apr 30 13:56:41.481926 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff]
Apr 30 13:56:41.481933 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff]
Apr 30 13:56:41.481938 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Apr 30 13:56:41.481944 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Apr 30 13:56:41.481949 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Apr 30 13:56:41.481958 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Apr 30 13:56:41.481965 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 13:56:41.481971 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Apr 30 13:56:41.481977 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Apr 30 13:56:41.481984 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Apr 30 13:56:41.481990 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Apr 30 13:56:41.481996 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Apr 30 13:56:41.482002 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Apr 30 13:56:41.482008 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Apr 30 13:56:41.482014 kernel: ACPI: PM-Timer IO Port: 0x1808
Apr 30 13:56:41.482020 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Apr 30 13:56:41.482026 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Apr 30 13:56:41.482032 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Apr 30 13:56:41.482039 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Apr 30 13:56:41.482044 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Apr 30 13:56:41.482050 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Apr 30 13:56:41.482056 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Apr 30 13:56:41.482062 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Apr 30 13:56:41.482068 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Apr 30 13:56:41.482074 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Apr 30 13:56:41.482079 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Apr 30 13:56:41.482085 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Apr 30 13:56:41.482091 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Apr 30 13:56:41.482098 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Apr 30 13:56:41.482104 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Apr 30 13:56:41.482110 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Apr 30 13:56:41.482116 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Apr 30 13:56:41.482121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 13:56:41.482127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 13:56:41.482133 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 13:56:41.482139 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 13:56:41.482145 kernel: TSC deadline timer available
Apr 30 13:56:41.482152 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Apr 30 13:56:41.482158 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Apr 30 13:56:41.482164 kernel: Booting paravirtualized kernel on bare hardware
Apr 30 13:56:41.482170 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 13:56:41.482176 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Apr 30 13:56:41.482182 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Apr 30 13:56:41.482188 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Apr 30 13:56:41.482194 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Apr 30 13:56:41.482200 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 13:56:41.482208 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 13:56:41.482213 kernel: random: crng init done
Apr 30 13:56:41.482219 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Apr 30 13:56:41.482225 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Apr 30 13:56:41.482231 kernel: Fallback order for Node 0: 0
Apr 30 13:56:41.482237 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Apr 30 13:56:41.482243 kernel: Policy zone: Normal
Apr 30 13:56:41.482249 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 13:56:41.482256 kernel: software IO TLB: area num 16.
Apr 30 13:56:41.482262 kernel: Memory: 32718264K/33452984K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 734460K reserved, 0K cma-reserved)
Apr 30 13:56:41.482268 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Apr 30 13:56:41.482274 kernel: ftrace: allocating 37918 entries in 149 pages
Apr 30 13:56:41.482280 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 13:56:41.482286 kernel: Dynamic Preempt: voluntary
Apr 30 13:56:41.482292 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 13:56:41.482298 kernel: rcu: RCU event tracing is enabled.
Apr 30 13:56:41.482304 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Apr 30 13:56:41.482311 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 13:56:41.482317 kernel: Rude variant of Tasks RCU enabled.
Apr 30 13:56:41.482323 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 13:56:41.482329 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 13:56:41.482335 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Apr 30 13:56:41.482341 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Apr 30 13:56:41.482347 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 13:56:41.482353 kernel: Console: colour VGA+ 80x25
Apr 30 13:56:41.482359 kernel: printk: console [tty0] enabled
Apr 30 13:56:41.482366 kernel: printk: console [ttyS1] enabled
Apr 30 13:56:41.482372 kernel: ACPI: Core revision 20230628
Apr 30 13:56:41.482378 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Apr 30 13:56:41.482384 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 13:56:41.482390 kernel: DMAR: Host address width 39
Apr 30 13:56:41.482396 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Apr 30 13:56:41.482402 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Apr 30 13:56:41.482408 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Apr 30 13:56:41.482414 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Apr 30 13:56:41.482421 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Apr 30 13:56:41.482427 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Apr 30 13:56:41.482433 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Apr 30 13:56:41.482439 kernel: x2apic enabled
Apr 30 13:56:41.482445 kernel: APIC: Switched APIC routing to: cluster x2apic
Apr 30 13:56:41.482451 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Apr 30 13:56:41.482457 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Apr 30 13:56:41.482463 kernel: CPU0: Thermal monitoring enabled (TM1)
Apr 30 13:56:41.482469 kernel: process: using mwait in idle threads
Apr 30 13:56:41.482476 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 30 13:56:41.482481 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 30 13:56:41.482487 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 13:56:41.482493 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 30 13:56:41.482499 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 30 13:56:41.482505 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 30 13:56:41.482511 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 13:56:41.482516 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Apr 30 13:56:41.482522 kernel: RETBleed: Mitigation: Enhanced IBRS
Apr 30 13:56:41.482528 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 13:56:41.482536 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 13:56:41.482543 kernel: TAA: Mitigation: TSX disabled
Apr 30 13:56:41.482549 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Apr 30 13:56:41.482555 kernel: SRBDS: Mitigation: Microcode
Apr 30 13:56:41.482561 kernel: GDS: Mitigation: Microcode
Apr 30 13:56:41.482567 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 13:56:41.482572 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 13:56:41.482578 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 13:56:41.482584 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 30 13:56:41.482590 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 30 13:56:41.482596 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 13:56:41.482601 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 30 13:56:41.482608 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 30 13:56:41.482614 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Apr 30 13:56:41.482620 kernel: Freeing SMP alternatives memory: 32K
Apr 30 13:56:41.482626 kernel: pid_max: default: 32768 minimum: 301
Apr 30 13:56:41.482632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 13:56:41.482637 kernel: landlock: Up and running.
Apr 30 13:56:41.482643 kernel: SELinux: Initializing.
Apr 30 13:56:41.482649 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 13:56:41.482655 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 13:56:41.482661 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Apr 30 13:56:41.482667 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 13:56:41.482674 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 13:56:41.482680 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 30 13:56:41.482686 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Apr 30 13:56:41.482692 kernel: ... version: 4
Apr 30 13:56:41.482698 kernel: ... bit width: 48
Apr 30 13:56:41.482704 kernel: ... generic registers: 4
Apr 30 13:56:41.482710 kernel: ... value mask: 0000ffffffffffff
Apr 30 13:56:41.482715 kernel: ... max period: 00007fffffffffff
Apr 30 13:56:41.482721 kernel: ... fixed-purpose events: 3
Apr 30 13:56:41.482728 kernel: ... event mask: 000000070000000f
Apr 30 13:56:41.482734 kernel: signal: max sigframe size: 2032
Apr 30 13:56:41.482740 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Apr 30 13:56:41.482746 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 13:56:41.482752 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 13:56:41.482758 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Apr 30 13:56:41.482764 kernel: smp: Bringing up secondary CPUs ...
Apr 30 13:56:41.482769 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 13:56:41.482775 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Apr 30 13:56:41.482783 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 30 13:56:41.482789 kernel: smp: Brought up 1 node, 16 CPUs
Apr 30 13:56:41.482795 kernel: smpboot: Max logical packages: 1
Apr 30 13:56:41.482801 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Apr 30 13:56:41.482807 kernel: devtmpfs: initialized
Apr 30 13:56:41.482812 kernel: x86/mm: Memory block size: 128MB
Apr 30 13:56:41.482818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes)
Apr 30 13:56:41.482825 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Apr 30 13:56:41.482831 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 13:56:41.482838 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Apr 30 13:56:41.482844 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 13:56:41.482850 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 13:56:41.482856 kernel: audit: initializing netlink subsys (disabled)
Apr 30 13:56:41.482862 kernel: audit: type=2000 audit(1746021396.040:1): state=initialized audit_enabled=0 res=1
Apr 30 13:56:41.482867 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 13:56:41.482873 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 13:56:41.482879 kernel: cpuidle: using governor menu
Apr 30 13:56:41.482885 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 13:56:41.482892 kernel: dca service started, version 1.12.1
Apr 30 13:56:41.482898 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 30 13:56:41.482904 kernel: PCI: Using configuration type 1 for base access
Apr 30 13:56:41.482910 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Apr 30 13:56:41.482916 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 13:56:41.482922 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 13:56:41.482928 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 13:56:41.482933 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 13:56:41.482939 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 13:56:41.482946 kernel: ACPI: Added _OSI(Module Device)
Apr 30 13:56:41.482952 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 13:56:41.482958 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 13:56:41.482964 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 13:56:41.482970 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Apr 30 13:56:41.482976 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 13:56:41.482982 kernel: ACPI: SSDT 0xFFFF9B7600E38800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Apr 30 13:56:41.482988 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 13:56:41.482994 kernel: ACPI: SSDT 0xFFFF9B7601E08000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Apr 30 13:56:41.483001 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 13:56:41.483007 kernel: ACPI: SSDT 0xFFFF9B7600DE5C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Apr 30 13:56:41.483013 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 13:56:41.483019 kernel: ACPI: SSDT 0xFFFF9B7601E08800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Apr 30 13:56:41.483025 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 13:56:41.483030 kernel: ACPI: SSDT 0xFFFF9B7600E52000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Apr 30 13:56:41.483036 kernel: ACPI: Dynamic OEM Table Load:
Apr 30 13:56:41.483042 kernel: ACPI: SSDT 0xFFFF9B760154BC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Apr 30 13:56:41.483048 kernel: ACPI: _OSC evaluated successfully for all CPUs
Apr 30 13:56:41.483054 kernel: ACPI: Interpreter enabled
Apr 30 13:56:41.483061 kernel: ACPI: PM: (supports S0 S5)
Apr 30 13:56:41.483067 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 13:56:41.483073 kernel: HEST: Enabling Firmware First mode for corrected errors.
Apr 30 13:56:41.483079 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Apr 30 13:56:41.483084 kernel: HEST: Table parsing has been initialized.
Apr 30 13:56:41.483090 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Apr 30 13:56:41.483096 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 13:56:41.483102 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 30 13:56:41.483108 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Apr 30 13:56:41.483115 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Apr 30 13:56:41.483122 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Apr 30 13:56:41.483127 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Apr 30 13:56:41.483133 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Apr 30 13:56:41.483139 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Apr 30 13:56:41.483145 kernel: ACPI: \_TZ_.FN00: New power resource
Apr 30 13:56:41.483151 kernel: ACPI: \_TZ_.FN01: New power resource
Apr 30 13:56:41.483157 kernel: ACPI: \_TZ_.FN02: New power resource
Apr 30 13:56:41.483163 kernel: ACPI: \_TZ_.FN03: New power resource
Apr 30 13:56:41.483170 kernel: ACPI: \_TZ_.FN04: New power resource
Apr 30 13:56:41.483176 kernel: ACPI: \PIN_: New power resource
Apr 30 13:56:41.483182 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Apr 30 13:56:41.483263 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 13:56:41.483320 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Apr 30 13:56:41.483374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Apr 30 13:56:41.483382 kernel: PCI host bridge to bus 0000:00
Apr 30 13:56:41.483440 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 13:56:41.483490 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 13:56:41.483542 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 13:56:41.483590 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Apr 30 13:56:41.483638 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Apr 30 13:56:41.483685 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Apr 30 13:56:41.483751 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Apr 30 13:56:41.483820 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Apr 30 13:56:41.483878 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Apr 30 13:56:41.483936 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Apr 30 13:56:41.483991 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Apr 30 13:56:41.484049 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Apr 30 13:56:41.484104 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Apr 30 13:56:41.484166 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Apr 30 13:56:41.484220 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Apr 30 13:56:41.484275 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Apr 30 13:56:41.484333 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Apr 30 13:56:41.484387 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Apr 30 13:56:41.484440 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Apr 30 13:56:41.484501 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Apr 30 13:56:41.484560 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Apr 30 13:56:41.484621 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Apr 30 13:56:41.484675 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Apr 30 13:56:41.484733 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Apr 30 13:56:41.484788 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Apr 30 13:56:41.484843 kernel: pci 0000:00:16.0: PME# supported from D3hot
Apr 30 13:56:41.484902 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Apr 30 13:56:41.484964 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Apr 30 13:56:41.485020 kernel: pci 0000:00:16.1: PME# supported from D3hot
Apr 30 13:56:41.485079 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Apr 30 13:56:41.485134 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Apr 30 13:56:41.485188 kernel: pci 0000:00:16.4: PME# supported from D3hot
Apr 30 13:56:41.485247 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Apr 30 13:56:41.485301 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Apr 30 13:56:41.485354 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Apr 30 13:56:41.485408 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Apr 30 13:56:41.485461 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Apr 30 13:56:41.485514 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Apr 30 13:56:41.485573 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Apr 30 13:56:41.485629 kernel: pci 0000:00:17.0: PME# supported from D3hot
Apr 30 13:56:41.485692 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Apr 30 13:56:41.485746 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Apr 30 13:56:41.485809 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Apr 30 13:56:41.485867 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Apr 30 13:56:41.485927 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Apr 30 13:56:41.485982 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Apr 30 13:56:41.486040 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Apr 30 13:56:41.486095 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Apr 30 13:56:41.486159 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Apr 30 13:56:41.486214 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Apr 30 13:56:41.486272 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Apr 30 13:56:41.486327 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Apr 30 13:56:41.486386 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Apr 30 13:56:41.486443 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Apr 30 13:56:41.486498 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Apr 30 13:56:41.486557 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Apr 30 13:56:41.486618 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Apr 30 13:56:41.486671 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Apr 30 13:56:41.486732 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Apr 30 13:56:41.486789 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Apr 30 13:56:41.486844 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Apr 30 13:56:41.486903 kernel: pci 0000:01:00.0: PME# supported from D3cold
Apr 30 13:56:41.486957 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Apr 30 13:56:41.487014 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Apr 30 13:56:41.487076 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Apr 30 13:56:41.487132 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Apr 30 13:56:41.487188 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Apr 30 13:56:41.487242 kernel: pci 0000:01:00.1: PME# supported from D3cold
Apr 30 13:56:41.487300 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Apr 30 13:56:41.487354 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Apr 30 13:56:41.487412 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Apr 30 13:56:41.487467 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Apr 30 13:56:41.487522 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Apr 30 13:56:41.487580
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 13:56:41.487640 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Apr 30 13:56:41.487697 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Apr 30 13:56:41.487756 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Apr 30 13:56:41.487811 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Apr 30 13:56:41.487866 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Apr 30 13:56:41.487921 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Apr 30 13:56:41.487976 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 13:56:41.488029 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 13:56:41.488083 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 13:56:41.488148 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Apr 30 13:56:41.488203 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Apr 30 13:56:41.488259 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Apr 30 13:56:41.488314 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Apr 30 13:56:41.488370 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Apr 30 13:56:41.488426 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Apr 30 13:56:41.488483 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 13:56:41.488543 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 13:56:41.488598 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 13:56:41.488654 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 13:56:41.488714 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Apr 30 13:56:41.488771 kernel: pci 0000:06:00.0: enabling Extended Tags Apr 30 13:56:41.488827 kernel: pci 0000:06:00.0: supports D1 D2 Apr 30 13:56:41.488882 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 13:56:41.488937 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Apr 30 13:56:41.488993 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 13:56:41.489047 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:56:41.489108 kernel: pci_bus 0000:07: extended config space not accessible Apr 30 13:56:41.489174 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Apr 30 13:56:41.489233 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Apr 30 13:56:41.489290 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Apr 30 13:56:41.489348 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Apr 30 13:56:41.489409 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 13:56:41.489465 kernel: pci 0000:07:00.0: supports D1 D2 Apr 30 13:56:41.489523 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 13:56:41.489591 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Apr 30 13:56:41.489646 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 13:56:41.489702 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:56:41.489712 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Apr 30 13:56:41.489719 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Apr 30 13:56:41.489727 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Apr 30 13:56:41.489734 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Apr 30 13:56:41.489740 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Apr 30 13:56:41.489746 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Apr 30 13:56:41.489753 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Apr 30 13:56:41.489759 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Apr 30 13:56:41.489765 kernel: iommu: Default domain type: Translated Apr 30 13:56:41.489771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 13:56:41.489778 kernel: PCI: Using ACPI for IRQ 
routing Apr 30 13:56:41.489785 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 13:56:41.489791 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Apr 30 13:56:41.489797 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff] Apr 30 13:56:41.489804 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Apr 30 13:56:41.489810 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Apr 30 13:56:41.489816 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Apr 30 13:56:41.489822 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Apr 30 13:56:41.489878 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Apr 30 13:56:41.489937 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Apr 30 13:56:41.489998 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 13:56:41.490007 kernel: vgaarb: loaded Apr 30 13:56:41.490014 kernel: clocksource: Switched to clocksource tsc-early Apr 30 13:56:41.490020 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 13:56:41.490027 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 13:56:41.490033 kernel: pnp: PnP ACPI init Apr 30 13:56:41.490089 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Apr 30 13:56:41.490145 kernel: pnp 00:02: [dma 0 disabled] Apr 30 13:56:41.490200 kernel: pnp 00:03: [dma 0 disabled] Apr 30 13:56:41.490258 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Apr 30 13:56:41.490307 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Apr 30 13:56:41.490361 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Apr 30 13:56:41.490415 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Apr 30 13:56:41.490464 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Apr 30 13:56:41.490516 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Apr 30 13:56:41.490575 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has 
been reserved Apr 30 13:56:41.490626 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Apr 30 13:56:41.490675 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Apr 30 13:56:41.490725 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Apr 30 13:56:41.490774 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Apr 30 13:56:41.490827 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Apr 30 13:56:41.490880 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Apr 30 13:56:41.490928 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Apr 30 13:56:41.490977 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Apr 30 13:56:41.491025 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Apr 30 13:56:41.491078 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Apr 30 13:56:41.491126 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Apr 30 13:56:41.491180 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Apr 30 13:56:41.491191 kernel: pnp: PnP ACPI: found 10 devices Apr 30 13:56:41.491198 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 13:56:41.491204 kernel: NET: Registered PF_INET protocol family Apr 30 13:56:41.491211 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 13:56:41.491218 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 30 13:56:41.491224 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 13:56:41.491230 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 13:56:41.491237 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 13:56:41.491244 kernel: TCP: Hash tables configured (established 262144 bind 65536) Apr 30 13:56:41.491251 
kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 13:56:41.491257 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 13:56:41.491263 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 13:56:41.491270 kernel: NET: Registered PF_XDP protocol family Apr 30 13:56:41.491325 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Apr 30 13:56:41.491379 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Apr 30 13:56:41.491435 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Apr 30 13:56:41.491492 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 13:56:41.491558 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 13:56:41.491615 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 13:56:41.491671 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 13:56:41.491726 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 13:56:41.491781 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Apr 30 13:56:41.491835 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 13:56:41.491888 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Apr 30 13:56:41.491946 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Apr 30 13:56:41.491999 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 13:56:41.492053 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 13:56:41.492107 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Apr 30 13:56:41.492161 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 13:56:41.492217 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 13:56:41.492271 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Apr 30 13:56:41.492326 kernel: pci 0000:06:00.0: PCI bridge to [bus 
07] Apr 30 13:56:41.492381 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Apr 30 13:56:41.492437 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:56:41.492492 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Apr 30 13:56:41.492557 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Apr 30 13:56:41.492611 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:56:41.492661 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Apr 30 13:56:41.492712 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 13:56:41.492759 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 13:56:41.492808 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 13:56:41.492855 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Apr 30 13:56:41.492903 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Apr 30 13:56:41.492958 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Apr 30 13:56:41.493010 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 13:56:41.493070 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Apr 30 13:56:41.493120 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Apr 30 13:56:41.493175 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 30 13:56:41.493224 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Apr 30 13:56:41.493279 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Apr 30 13:56:41.493329 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Apr 30 13:56:41.493386 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Apr 30 13:56:41.493439 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Apr 30 13:56:41.493448 kernel: PCI: CLS 64 bytes, default 64 Apr 30 13:56:41.493455 kernel: DMAR: No ATSR found Apr 30 13:56:41.493461 kernel: DMAR: No SATC 
found Apr 30 13:56:41.493469 kernel: DMAR: dmar0: Using Queued invalidation Apr 30 13:56:41.493523 kernel: pci 0000:00:00.0: Adding to iommu group 0 Apr 30 13:56:41.493589 kernel: pci 0000:00:01.0: Adding to iommu group 1 Apr 30 13:56:41.493644 kernel: pci 0000:00:08.0: Adding to iommu group 2 Apr 30 13:56:41.493702 kernel: pci 0000:00:12.0: Adding to iommu group 3 Apr 30 13:56:41.493756 kernel: pci 0000:00:14.0: Adding to iommu group 4 Apr 30 13:56:41.493810 kernel: pci 0000:00:14.2: Adding to iommu group 4 Apr 30 13:56:41.493865 kernel: pci 0000:00:15.0: Adding to iommu group 5 Apr 30 13:56:41.493917 kernel: pci 0000:00:15.1: Adding to iommu group 5 Apr 30 13:56:41.493973 kernel: pci 0000:00:16.0: Adding to iommu group 6 Apr 30 13:56:41.494028 kernel: pci 0000:00:16.1: Adding to iommu group 6 Apr 30 13:56:41.494081 kernel: pci 0000:00:16.4: Adding to iommu group 6 Apr 30 13:56:41.494137 kernel: pci 0000:00:17.0: Adding to iommu group 7 Apr 30 13:56:41.494191 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Apr 30 13:56:41.494245 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Apr 30 13:56:41.494299 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Apr 30 13:56:41.494353 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Apr 30 13:56:41.494407 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Apr 30 13:56:41.494461 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Apr 30 13:56:41.494516 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Apr 30 13:56:41.494575 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Apr 30 13:56:41.494629 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Apr 30 13:56:41.494685 kernel: pci 0000:01:00.0: Adding to iommu group 1 Apr 30 13:56:41.494742 kernel: pci 0000:01:00.1: Adding to iommu group 1 Apr 30 13:56:41.494798 kernel: pci 0000:03:00.0: Adding to iommu group 15 Apr 30 13:56:41.494854 kernel: pci 0000:04:00.0: Adding to iommu group 16 Apr 30 13:56:41.494911 kernel: pci 0000:06:00.0: Adding to iommu group 17 Apr 30 
13:56:41.494967 kernel: pci 0000:07:00.0: Adding to iommu group 17 Apr 30 13:56:41.494979 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Apr 30 13:56:41.494986 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 13:56:41.494992 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Apr 30 13:56:41.494999 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Apr 30 13:56:41.495005 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Apr 30 13:56:41.495011 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Apr 30 13:56:41.495018 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Apr 30 13:56:41.495075 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Apr 30 13:56:41.495087 kernel: Initialise system trusted keyrings Apr 30 13:56:41.495094 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Apr 30 13:56:41.495100 kernel: Key type asymmetric registered Apr 30 13:56:41.495106 kernel: Asymmetric key parser 'x509' registered Apr 30 13:56:41.495112 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 13:56:41.495119 kernel: io scheduler mq-deadline registered Apr 30 13:56:41.495125 kernel: io scheduler kyber registered Apr 30 13:56:41.495131 kernel: io scheduler bfq registered Apr 30 13:56:41.495184 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Apr 30 13:56:41.495242 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Apr 30 13:56:41.495298 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Apr 30 13:56:41.495352 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Apr 30 13:56:41.495406 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Apr 30 13:56:41.495460 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Apr 30 13:56:41.495519 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Apr 30 13:56:41.495532 kernel: ACPI: thermal: Thermal 
Zone [TZ00] (28 C) Apr 30 13:56:41.495539 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Apr 30 13:56:41.495547 kernel: pstore: Using crash dump compression: deflate Apr 30 13:56:41.495554 kernel: pstore: Registered erst as persistent store backend Apr 30 13:56:41.495560 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 13:56:41.495567 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 13:56:41.495573 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 13:56:41.495580 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 13:56:41.495586 kernel: hpet_acpi_add: no address or irqs in _CRS Apr 30 13:56:41.495643 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Apr 30 13:56:41.495655 kernel: i8042: PNP: No PS/2 controller found. Apr 30 13:56:41.495706 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Apr 30 13:56:41.495757 kernel: rtc_cmos rtc_cmos: registered as rtc0 Apr 30 13:56:41.495807 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-04-30T13:56:40 UTC (1746021400) Apr 30 13:56:41.495857 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Apr 30 13:56:41.495866 kernel: intel_pstate: Intel P-state driver initializing Apr 30 13:56:41.495873 kernel: intel_pstate: Disabling energy efficiency optimization Apr 30 13:56:41.495879 kernel: intel_pstate: HWP enabled Apr 30 13:56:41.495888 kernel: NET: Registered PF_INET6 protocol family Apr 30 13:56:41.495894 kernel: Segment Routing with IPv6 Apr 30 13:56:41.495900 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 13:56:41.495907 kernel: NET: Registered PF_PACKET protocol family Apr 30 13:56:41.495914 kernel: Key type dns_resolver registered Apr 30 13:56:41.495920 kernel: microcode: Current revision: 0x00000102 Apr 30 13:56:41.495926 kernel: microcode: Microcode Update Driver: v2.2. 
Apr 30 13:56:41.495932 kernel: IPI shorthand broadcast: enabled Apr 30 13:56:41.495939 kernel: sched_clock: Marking stable (2496000683, 1442077230)->(4502036626, -563958713) Apr 30 13:56:41.495946 kernel: registered taskstats version 1 Apr 30 13:56:41.495953 kernel: Loading compiled-in X.509 certificates Apr 30 13:56:41.495959 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f' Apr 30 13:56:41.495966 kernel: Key type .fscrypt registered Apr 30 13:56:41.495972 kernel: Key type fscrypt-provisioning registered Apr 30 13:56:41.495978 kernel: ima: Allocated hash algorithm: sha1 Apr 30 13:56:41.495985 kernel: ima: No architecture policies found Apr 30 13:56:41.495991 kernel: clk: Disabling unused clocks Apr 30 13:56:41.495997 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 13:56:41.496005 kernel: Write protecting the kernel read-only data: 38912k Apr 30 13:56:41.496011 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 13:56:41.496017 kernel: Run /init as init process Apr 30 13:56:41.496024 kernel: with arguments: Apr 30 13:56:41.496030 kernel: /init Apr 30 13:56:41.496036 kernel: with environment: Apr 30 13:56:41.496043 kernel: HOME=/ Apr 30 13:56:41.496049 kernel: TERM=linux Apr 30 13:56:41.496055 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 13:56:41.496063 systemd[1]: Successfully made /usr/ read-only. Apr 30 13:56:41.496072 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 13:56:41.496079 systemd[1]: Detected architecture x86-64. Apr 30 13:56:41.496085 systemd[1]: Running in initrd. 
Apr 30 13:56:41.496091 systemd[1]: No hostname configured, using default hostname. Apr 30 13:56:41.496098 systemd[1]: Hostname set to . Apr 30 13:56:41.496104 systemd[1]: Initializing machine ID from random generator. Apr 30 13:56:41.496112 systemd[1]: Queued start job for default target initrd.target. Apr 30 13:56:41.496119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:56:41.496125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:56:41.496133 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 13:56:41.496139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 13:56:41.496146 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 13:56:41.496153 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 13:56:41.496161 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 13:56:41.496168 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 13:56:41.496175 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:56:41.496181 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:56:41.496188 systemd[1]: Reached target paths.target - Path Units. Apr 30 13:56:41.496195 systemd[1]: Reached target slices.target - Slice Units. Apr 30 13:56:41.496202 systemd[1]: Reached target swap.target - Swaps. Apr 30 13:56:41.496208 systemd[1]: Reached target timers.target - Timer Units. Apr 30 13:56:41.496216 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Apr 30 13:56:41.496223 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 13:56:41.496229 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 13:56:41.496236 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 13:56:41.496243 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:56:41.496249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 13:56:41.496256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:56:41.496263 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 13:56:41.496269 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Apr 30 13:56:41.496277 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Apr 30 13:56:41.496284 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 13:56:41.496290 kernel: clocksource: Switched to clocksource tsc Apr 30 13:56:41.496297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 13:56:41.496303 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 13:56:41.496310 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 13:56:41.496317 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 13:56:41.496336 systemd-journald[269]: Collecting audit messages is disabled. Apr 30 13:56:41.496353 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 13:56:41.496361 systemd-journald[269]: Journal started Apr 30 13:56:41.496377 systemd-journald[269]: Runtime Journal (/run/log/journal/9d966ccb2fd84e5ba6df9e14f201f915) is 8M, max 639.9M, 631.9M free. Apr 30 13:56:41.507113 systemd-modules-load[272]: Inserted module 'overlay' Apr 30 13:56:41.543318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 30 13:56:41.543330 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 13:56:41.543337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 13:56:41.544202 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 13:56:41.544291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:56:41.544402 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 13:56:41.549958 systemd-modules-load[272]: Inserted module 'br_netfilter' Apr 30 13:56:41.550534 kernel: Bridge firewalling registered Apr 30 13:56:41.555678 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 13:56:41.570014 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 13:56:41.632174 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 13:56:41.652516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:56:41.682070 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 13:56:41.702249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:56:41.743992 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:56:41.758057 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:56:41.760060 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 13:56:41.781747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:56:41.782417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 13:56:41.787356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 13:56:41.791213 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:56:41.805128 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 13:56:41.844976 systemd-resolved[309]: Positive Trust Anchors: Apr 30 13:56:41.844985 systemd-resolved[309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 13:56:41.845018 systemd-resolved[309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 13:56:41.940750 kernel: SCSI subsystem initialized Apr 30 13:56:41.940766 kernel: Loading iSCSI transport class v2.0-870. 
Apr 30 13:56:41.940774 kernel: iscsi: registered transport (tcp) Apr 30 13:56:41.940792 dracut-cmdline[311]: dracut-dracut-053 Apr 30 13:56:41.940792 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:56:42.012739 kernel: iscsi: registered transport (qla4xxx) Apr 30 13:56:42.012766 kernel: QLogic iSCSI HBA Driver Apr 30 13:56:41.847151 systemd-resolved[309]: Defaulting to hostname 'linux'. Apr 30 13:56:41.847899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 13:56:41.875661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:56:41.981003 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 13:56:42.033840 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 13:56:42.116392 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 13:56:42.116412 kernel: device-mapper: uevent: version 1.0.3 Apr 30 13:56:42.125206 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 13:56:42.160602 kernel: raid6: avx2x4 gen() 47145 MB/s Apr 30 13:56:42.181603 kernel: raid6: avx2x2 gen() 53693 MB/s Apr 30 13:56:42.207667 kernel: raid6: avx2x1 gen() 45133 MB/s Apr 30 13:56:42.207686 kernel: raid6: using algorithm avx2x2 gen() 53693 MB/s Apr 30 13:56:42.234720 kernel: raid6: .... 
xor() 32409 MB/s, rmw enabled Apr 30 13:56:42.234738 kernel: raid6: using avx2x2 recovery algorithm Apr 30 13:56:42.254562 kernel: xor: automatically using best checksumming function avx Apr 30 13:56:42.353579 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 13:56:42.359044 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 13:56:42.383869 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:56:42.391339 systemd-udevd[497]: Using default interface naming scheme 'v255'. Apr 30 13:56:42.394724 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:56:42.428810 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 13:56:42.479986 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Apr 30 13:56:42.504979 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 13:56:42.530831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 13:56:42.590961 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:56:42.635542 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 13:56:42.635560 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 13:56:42.635571 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 13:56:42.635579 kernel: PTP clock support registered Apr 30 13:56:42.635586 kernel: libata version 3.00 loaded. Apr 30 13:56:42.635593 kernel: ACPI: bus type USB registered Apr 30 13:56:42.636536 kernel: usbcore: registered new interface driver usbfs Apr 30 13:56:42.636553 kernel: usbcore: registered new interface driver hub Apr 30 13:56:42.636561 kernel: usbcore: registered new device driver usb Apr 30 13:56:42.652944 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 13:56:42.837576 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 30 13:56:42.837592 kernel: AES CTR mode by8 optimization enabled Apr 30 13:56:42.837600 kernel: ahci 0000:00:17.0: version 3.0 Apr 30 13:56:42.837695 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 13:56:42.842857 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Apr 30 13:56:42.842934 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Apr 30 13:56:42.843003 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Apr 30 13:56:42.843069 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Apr 30 13:56:42.843135 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 13:56:42.843199 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Apr 30 13:56:42.843263 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Apr 30 13:56:42.843325 kernel: hub 1-0:1.0: USB hub found Apr 30 13:56:42.843397 kernel: scsi host0: ahci Apr 30 13:56:42.843459 kernel: scsi host1: ahci Apr 30 13:56:42.843520 kernel: scsi host2: ahci Apr 30 13:56:42.843587 kernel: scsi host3: ahci Apr 30 13:56:42.843649 kernel: scsi host4: ahci Apr 30 13:56:42.843707 kernel: scsi host5: ahci Apr 30 13:56:42.843765 kernel: scsi host6: ahci Apr 30 13:56:42.843823 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Apr 30 13:56:42.843832 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Apr 30 13:56:42.843839 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Apr 30 13:56:42.843849 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Apr 30 13:56:42.843856 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Apr 30 13:56:42.843863 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Apr 30 13:56:42.843870 kernel: ata7: SATA max UDMA/133 abar 
m2048@0x95516000 port 0x95516400 irq 127 Apr 30 13:56:42.843877 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Apr 30 13:56:42.843884 kernel: hub 1-0:1.0: 16 ports detected Apr 30 13:56:42.843945 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Apr 30 13:56:42.843953 kernel: hub 2-0:1.0: USB hub found Apr 30 13:56:42.844020 kernel: hub 2-0:1.0: 10 ports detected Apr 30 13:56:42.684235 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 13:56:42.883373 kernel: igb 0000:03:00.0: added PHC on eth0 Apr 30 13:56:42.883502 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 13:56:42.883603 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:c4 Apr 30 13:56:42.883680 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Apr 30 13:56:42.883795 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Apr 30 13:56:42.684320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:56:42.969658 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Apr 30 13:56:43.454376 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 13:56:43.454458 kernel: igb 0000:04:00.0: added PHC on eth1 Apr 30 13:56:43.454542 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 13:56:43.454612 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:c5 Apr 30 13:56:43.454676 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Apr 30 13:56:43.454740 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Apr 30 13:56:43.454803 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 13:56:43.454811 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 13:56:43.454818 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 13:56:43.454825 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Apr 30 13:56:43.607271 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 13:56:43.607283 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 13:56:43.607291 kernel: ata7: SATA link down (SStatus 0 SControl 300) Apr 30 13:56:43.607298 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 13:56:43.607305 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 13:56:43.607313 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 13:56:43.607320 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 13:56:43.607327 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 13:56:43.607336 kernel: ata2.00: Features: NCQ-prio Apr 30 13:56:43.607343 kernel: ata1.00: Features: NCQ-prio Apr 30 13:56:43.607351 kernel: ata2.00: configured for UDMA/133 Apr 30 13:56:43.607358 kernel: ata1.00: configured for UDMA/133 Apr 30 13:56:43.607365 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 13:56:43.607452 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 13:56:43.607524 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 13:56:43.607606 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Apr 30 13:56:43.607679 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Apr 30 13:56:43.607745 kernel: hub 1-14:1.0: USB hub found Apr 30 13:56:43.607826 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Apr 30 13:56:43.607897 kernel: hub 1-14:1.0: 4 ports detected Apr 30 
13:56:43.607977 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:56:43.607985 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 13:56:43.607993 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 13:56:43.608056 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 13:56:43.608121 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 13:56:43.608184 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Apr 30 13:56:43.608245 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 13:56:43.608305 kernel: sd 1:0:0:0: [sdb] Write Protect is off Apr 30 13:56:43.608364 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Apr 30 13:56:43.608424 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Apr 30 13:56:43.608484 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 13:56:43.608552 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 13:56:43.608613 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Apr 30 13:56:43.608673 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Apr 30 13:56:43.608733 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:56:43.608741 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 13:56:43.608748 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Apr 30 13:56:43.608807 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 13:56:43.608816 kernel: GPT:9289727 != 937703087 Apr 30 13:56:43.608825 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 13:56:43.608835 kernel: GPT:9289727 != 937703087 Apr 30 13:56:43.608844 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 30 13:56:43.608851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 13:56:43.608858 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 13:56:43.608921 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (547) Apr 30 13:56:43.608929 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (576) Apr 30 13:56:43.608937 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 13:56:43.609006 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Apr 30 13:56:44.055177 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 13:56:44.055710 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Apr 30 13:56:44.056307 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:56:44.056352 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 13:56:44.056390 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 13:56:44.056426 kernel: usbcore: registered new interface driver usbhid Apr 30 13:56:44.056462 kernel: usbhid: USB HID core driver Apr 30 13:56:44.056517 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Apr 30 13:56:44.056577 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Apr 30 13:56:44.056979 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Apr 30 13:56:44.057022 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Apr 30 13:56:44.057421 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 13:56:44.057877 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Apr 30 13:56:44.058246 
kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 13:56:42.883311 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:56:44.083777 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Apr 30 13:56:44.083864 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Apr 30 13:56:42.883352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 13:56:42.883460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:56:42.958693 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:56:42.992703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:56:43.003046 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 13:56:43.003668 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 13:56:44.144682 disk-uuid[712]: Primary Header is updated. Apr 30 13:56:44.144682 disk-uuid[712]: Secondary Entries is updated. Apr 30 13:56:44.144682 disk-uuid[712]: Secondary Header is updated. Apr 30 13:56:43.003692 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 13:56:43.003708 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 13:56:43.004149 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 13:56:43.127642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:56:43.146730 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 13:56:43.191613 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:56:43.221672 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 13:56:43.392608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Apr 30 13:56:43.434169 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Apr 30 13:56:43.466014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Apr 30 13:56:43.487153 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 13:56:43.497613 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 13:56:43.524684 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 13:56:44.550217 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:56:44.558527 disk-uuid[713]: The operation has completed successfully. Apr 30 13:56:44.566740 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 13:56:44.598527 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 13:56:44.598607 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 13:56:44.653808 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 13:56:44.679668 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 13:56:44.679724 sh[745]: Success Apr 30 13:56:44.718058 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 13:56:44.744888 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 13:56:44.752888 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 13:56:44.795599 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 13:56:44.795613 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:56:44.805213 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 13:56:44.812233 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 13:56:44.818078 kernel: BTRFS info (device dm-0): using free space tree Apr 30 13:56:44.831537 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 13:56:44.834099 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 13:56:44.844035 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 13:56:44.854765 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 13:56:44.879442 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 13:56:44.953668 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:56:44.953682 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:56:44.953690 kernel: BTRFS info (device sda6): using free space tree Apr 30 13:56:44.953697 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 13:56:44.953705 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 13:56:44.953712 kernel: BTRFS info (device sda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:56:44.952554 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 13:56:44.964981 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 13:56:45.016169 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 30 13:56:45.042374 ignition[801]: Ignition 2.20.0 Apr 30 13:56:45.042378 ignition[801]: Stage: fetch-offline Apr 30 13:56:45.044662 unknown[801]: fetched base config from "system" Apr 30 13:56:45.042400 ignition[801]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:56:45.044666 unknown[801]: fetched user config from "system" Apr 30 13:56:45.042406 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:56:45.047728 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 13:56:45.042455 ignition[801]: parsed url from cmdline: "" Apr 30 13:56:45.059059 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 13:56:45.042456 ignition[801]: no config URL provided Apr 30 13:56:45.060635 systemd-networkd[925]: lo: Link UP Apr 30 13:56:45.042459 ignition[801]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 13:56:45.060637 systemd-networkd[925]: lo: Gained carrier Apr 30 13:56:45.042480 ignition[801]: parsing config with SHA512: 0796af879e43f8f1bb912df1428ca57f1e6b7ac3815a9e451c6a4f3f0724c05e49711d0b7452516656f2083ffd82a2b5ceaa0cbb45b8ef4a5bff27aa3c97cfc1 Apr 30 13:56:45.063232 systemd-networkd[925]: Enumeration completed Apr 30 13:56:45.044859 ignition[801]: fetch-offline: fetch-offline passed Apr 30 13:56:45.063921 systemd-networkd[925]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:56:45.044862 ignition[801]: POST message to Packet Timeline Apr 30 13:56:45.064750 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 13:56:45.044864 ignition[801]: POST Status error: resource requires networking Apr 30 13:56:45.090960 systemd[1]: Reached target network.target - Network. Apr 30 13:56:45.044902 ignition[801]: Ignition finished successfully Apr 30 13:56:45.091768 systemd-networkd[925]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 13:56:45.135213 ignition[938]: Ignition 2.20.0 Apr 30 13:56:45.106706 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 13:56:45.135226 ignition[938]: Stage: kargs Apr 30 13:56:45.117784 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 13:56:45.316726 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Apr 30 13:56:45.135460 ignition[938]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:56:45.120196 systemd-networkd[925]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:56:45.135475 ignition[938]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:56:45.308373 systemd-networkd[925]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:56:45.136710 ignition[938]: kargs: kargs passed Apr 30 13:56:45.136717 ignition[938]: POST message to Packet Timeline Apr 30 13:56:45.136740 ignition[938]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:56:45.137569 ignition[938]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43859->[::1]:53: read: connection refused Apr 30 13:56:45.338416 ignition[938]: GET https://metadata.packet.net/metadata: attempt #2 Apr 30 13:56:45.339378 ignition[938]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56951->[::1]:53: read: connection refused Apr 30 13:56:45.592613 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Apr 30 13:56:45.593226 systemd-networkd[925]: eno1: Link UP Apr 30 13:56:45.593350 systemd-networkd[925]: eno2: Link UP Apr 30 13:56:45.593465 systemd-networkd[925]: enp1s0f0np0: Link UP Apr 30 13:56:45.593614 systemd-networkd[925]: enp1s0f0np0: Gained carrier Apr 30 13:56:45.612847 systemd-networkd[925]: enp1s0f1np1: Link UP Apr 30 13:56:45.649754 
systemd-networkd[925]: enp1s0f0np0: DHCPv4 address 147.75.202.185/31, gateway 147.75.202.184 acquired from 145.40.83.140 Apr 30 13:56:45.739515 ignition[938]: GET https://metadata.packet.net/metadata: attempt #3 Apr 30 13:56:45.740650 ignition[938]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56921->[::1]:53: read: connection refused Apr 30 13:56:46.356302 systemd-networkd[925]: enp1s0f1np1: Gained carrier Apr 30 13:56:46.541165 ignition[938]: GET https://metadata.packet.net/metadata: attempt #4 Apr 30 13:56:46.542678 ignition[938]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56049->[::1]:53: read: connection refused Apr 30 13:56:46.676143 systemd-networkd[925]: enp1s0f0np0: Gained IPv6LL Apr 30 13:56:48.020036 systemd-networkd[925]: enp1s0f1np1: Gained IPv6LL Apr 30 13:56:48.143651 ignition[938]: GET https://metadata.packet.net/metadata: attempt #5 Apr 30 13:56:48.144772 ignition[938]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49035->[::1]:53: read: connection refused Apr 30 13:56:51.347250 ignition[938]: GET https://metadata.packet.net/metadata: attempt #6 Apr 30 13:56:52.318691 ignition[938]: GET result: OK Apr 30 13:56:52.717000 ignition[938]: Ignition finished successfully Apr 30 13:56:52.722045 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 13:56:52.746801 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 30 13:56:52.752907 ignition[956]: Ignition 2.20.0 Apr 30 13:56:52.752912 ignition[956]: Stage: disks Apr 30 13:56:52.753015 ignition[956]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:56:52.753022 ignition[956]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:56:52.753514 ignition[956]: disks: disks passed Apr 30 13:56:52.753517 ignition[956]: POST message to Packet Timeline Apr 30 13:56:52.753527 ignition[956]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:56:53.644706 ignition[956]: GET result: OK Apr 30 13:56:53.987456 ignition[956]: Ignition finished successfully Apr 30 13:56:53.989548 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 13:56:54.005716 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 13:56:54.025817 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 13:56:54.047859 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 13:56:54.068952 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 13:56:54.088846 systemd[1]: Reached target basic.target - Basic System. Apr 30 13:56:54.119825 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 13:56:54.150485 systemd-fsck[972]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 13:56:54.160959 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 13:56:54.184741 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 13:56:54.255436 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 13:56:54.271789 kernel: EXT4-fs (sda9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 13:56:54.265026 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 13:56:54.301882 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 13:56:54.311594 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 13:56:54.378752 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (981) Apr 30 13:56:54.378767 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:56:54.378775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:56:54.378783 kernel: BTRFS info (device sda6): using free space tree Apr 30 13:56:54.378790 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 13:56:54.378797 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 13:56:54.383066 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 13:56:54.395159 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Apr 30 13:56:54.405848 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 13:56:54.405878 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 13:56:54.453701 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 13:56:54.476760 coreos-metadata[998]: Apr 30 13:56:54.465 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:56:54.461761 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 13:56:54.507659 coreos-metadata[999]: Apr 30 13:56:54.465 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:56:54.501742 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 30 13:56:54.541750 initrd-setup-root[1013]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 13:56:54.551646 initrd-setup-root[1020]: cut: /sysroot/etc/group: No such file or directory Apr 30 13:56:54.561616 initrd-setup-root[1027]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 13:56:54.571619 initrd-setup-root[1034]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 13:56:54.589105 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 13:56:54.613794 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 13:56:54.641753 kernel: BTRFS info (device sda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:56:54.632190 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 13:56:54.650441 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 13:56:54.668746 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 13:56:54.672886 ignition[1102]: INFO : Ignition 2.20.0 Apr 30 13:56:54.672886 ignition[1102]: INFO : Stage: mount Apr 30 13:56:54.699707 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:56:54.699707 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:56:54.699707 ignition[1102]: INFO : mount: mount passed Apr 30 13:56:54.699707 ignition[1102]: INFO : POST message to Packet Timeline Apr 30 13:56:54.699707 ignition[1102]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:56:55.414644 coreos-metadata[999]: Apr 30 13:56:55.414 INFO Fetch successful Apr 30 13:56:55.493351 systemd[1]: flatcar-static-network.service: Deactivated successfully. Apr 30 13:56:55.493407 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. 
Apr 30 13:56:55.525621 coreos-metadata[998]: Apr 30 13:56:55.513 INFO Fetch successful Apr 30 13:56:55.539737 ignition[1102]: INFO : GET result: OK Apr 30 13:56:55.546779 coreos-metadata[998]: Apr 30 13:56:55.543 INFO wrote hostname ci-4230.1.1-a-07b90b6465 to /sysroot/etc/hostname Apr 30 13:56:55.544537 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 13:56:55.875768 ignition[1102]: INFO : Ignition finished successfully Apr 30 13:56:55.877915 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 13:56:55.912755 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 13:56:55.924333 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 13:56:55.969563 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1129) Apr 30 13:56:55.987178 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:56:55.987194 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:56:55.993088 kernel: BTRFS info (device sda6): using free space tree Apr 30 13:56:56.007385 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 13:56:56.007402 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 13:56:56.009201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 13:56:56.042420 ignition[1146]: INFO : Ignition 2.20.0 Apr 30 13:56:56.042420 ignition[1146]: INFO : Stage: files Apr 30 13:56:56.056782 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:56:56.056782 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:56:56.056782 ignition[1146]: DEBUG : files: compiled without relabeling support, skipping Apr 30 13:56:56.056782 ignition[1146]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 13:56:56.056782 ignition[1146]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 13:56:56.056782 ignition[1146]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 13:56:56.056782 ignition[1146]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 13:56:56.056782 ignition[1146]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 13:56:56.056782 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 13:56:56.056782 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 13:56:56.046462 unknown[1146]: wrote ssh authorized keys file for user: core Apr 30 13:56:56.514156 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 13:56:57.009215 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 13:56:57.009215 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 13:56:57.041846 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Apr 30 13:56:57.592791 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 30 13:56:58.116061 ignition[1146]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 13:56:58.116061 ignition[1146]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 30 13:56:58.146752 ignition[1146]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 13:56:58.146752 ignition[1146]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 13:56:58.146752 ignition[1146]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 30 13:56:58.146752 ignition[1146]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 30 13:56:58.146752 ignition[1146]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 13:56:58.146752 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 13:56:58.146752 ignition[1146]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 13:56:58.146752 ignition[1146]: INFO : files: files passed Apr 30 13:56:58.146752 ignition[1146]: INFO : POST message to Packet Timeline Apr 30 13:56:58.146752 ignition[1146]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:56:59.072122 ignition[1146]: INFO : GET result: OK Apr 30 13:56:59.622375 ignition[1146]: INFO : Ignition finished successfully Apr 30 13:56:59.625601 systemd[1]: Finished ignition-files.service - Ignition (files). 
Apr 30 13:56:59.665779 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 13:56:59.666256 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 13:56:59.697024 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 13:56:59.697109 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 13:56:59.749806 initrd-setup-root-after-ignition[1183]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 13:56:59.749806 initrd-setup-root-after-ignition[1183]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 13:56:59.720397 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 13:56:59.808762 initrd-setup-root-after-ignition[1187]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 13:56:59.741744 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 13:56:59.772802 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 13:56:59.837371 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 13:56:59.837541 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 13:56:59.857833 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 13:56:59.875803 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 13:56:59.896976 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 13:56:59.905794 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 13:56:59.984382 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 13:57:00.014964 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 13:57:00.047214 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 13:57:00.059155 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 13:57:00.080266 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 13:57:00.098213 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 13:57:00.098666 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 13:57:00.125368 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 13:57:00.147142 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 13:57:00.166168 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 13:57:00.185156 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 13:57:00.206258 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 13:57:00.227149 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 13:57:00.247157 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 13:57:00.268209 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 13:57:00.290179 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 13:57:00.310154 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 13:57:00.329056 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 13:57:00.329480 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 13:57:00.365018 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 13:57:00.375180 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 13:57:00.396043 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 13:57:00.396491 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 13:57:00.418043 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 13:57:00.418448 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 13:57:00.449141 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 13:57:00.449623 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 13:57:00.469356 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 13:57:00.487031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 13:57:00.487492 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 13:57:00.509165 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 13:57:00.527159 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 13:57:00.545136 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 13:57:00.545436 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 13:57:00.565279 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 13:57:00.565592 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 13:57:00.588275 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 13:57:00.588705 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 13:57:00.607245 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 13:57:00.727720 ignition[1209]: INFO : Ignition 2.20.0
Apr 30 13:57:00.727720 ignition[1209]: INFO : Stage: umount
Apr 30 13:57:00.727720 ignition[1209]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 13:57:00.727720 ignition[1209]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Apr 30 13:57:00.727720 ignition[1209]: INFO : umount: umount passed
Apr 30 13:57:00.727720 ignition[1209]: INFO : POST message to Packet Timeline
Apr 30 13:57:00.727720 ignition[1209]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Apr 30 13:57:00.607660 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 13:57:00.626246 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 30 13:57:00.626670 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 30 13:57:00.656768 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 13:57:00.679423 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 13:57:00.689870 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 13:57:00.689998 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 13:57:00.719881 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 13:57:00.719956 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 13:57:00.748869 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 13:57:00.749870 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 13:57:00.750042 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 13:57:00.787178 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 13:57:00.787423 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 13:57:01.640987 ignition[1209]: INFO : GET result: OK
Apr 30 13:57:02.025628 ignition[1209]: INFO : Ignition finished successfully
Apr 30 13:57:02.027659 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 13:57:02.027837 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 13:57:02.046860 systemd[1]: Stopped target network.target - Network.
Apr 30 13:57:02.061883 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 13:57:02.062059 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 13:57:02.079980 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 13:57:02.080127 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 13:57:02.098042 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 13:57:02.098211 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 13:57:02.115965 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 13:57:02.116135 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 13:57:02.134948 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 13:57:02.135123 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 13:57:02.153295 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 13:57:02.171072 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 13:57:02.190669 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 13:57:02.191016 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 13:57:02.213559 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 30 13:57:02.214155 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 13:57:02.214430 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 13:57:02.232009 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 30 13:57:02.234572 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 13:57:02.234741 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 13:57:02.260633 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 13:57:02.278861 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 13:57:02.278902 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 13:57:02.306890 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 13:57:02.306984 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 13:57:02.325296 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 13:57:02.325457 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 13:57:02.344916 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 13:57:02.345086 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 13:57:02.367153 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 13:57:02.389114 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 30 13:57:02.389316 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 30 13:57:02.390377 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 13:57:02.390749 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 13:57:02.419087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 13:57:02.419118 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 13:57:02.419872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 13:57:02.419893 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 13:57:02.437793 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 13:57:02.437828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 13:57:02.476908 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 13:57:02.477024 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 13:57:02.508031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 13:57:02.508213 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 13:57:02.548847 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 13:57:02.841612 systemd-journald[269]: Received SIGTERM from PID 1 (systemd).
Apr 30 13:57:02.575609 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 13:57:02.575647 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 13:57:02.596731 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 13:57:02.596782 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 13:57:02.618768 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 13:57:02.618868 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 13:57:02.640930 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 13:57:02.641077 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 13:57:02.666431 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 30 13:57:02.666720 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 30 13:57:02.668117 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 13:57:02.668351 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 13:57:02.686518 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 13:57:02.686921 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 13:57:02.704889 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 13:57:02.737952 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 13:57:02.785385 systemd[1]: Switching root.
Apr 30 13:57:02.953739 systemd-journald[269]: Journal stopped
Apr 30 13:57:04.683895 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 13:57:04.683912 kernel: SELinux: policy capability open_perms=1
Apr 30 13:57:04.683919 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 13:57:04.683924 kernel: SELinux: policy capability always_check_network=0
Apr 30 13:57:04.683931 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 13:57:04.683937 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 13:57:04.683943 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 13:57:04.683948 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 13:57:04.683954 kernel: audit: type=1403 audit(1746021423.051:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 13:57:04.683960 systemd[1]: Successfully loaded SELinux policy in 74.771ms.
Apr 30 13:57:04.683968 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.122ms.
Apr 30 13:57:04.683975 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 13:57:04.683982 systemd[1]: Detected architecture x86-64.
Apr 30 13:57:04.683987 systemd[1]: Detected first boot.
Apr 30 13:57:04.683994 systemd[1]: Hostname set to .
Apr 30 13:57:04.684002 systemd[1]: Initializing machine ID from random generator.
Apr 30 13:57:04.684008 zram_generator::config[1264]: No configuration found.
Apr 30 13:57:04.684015 systemd[1]: Populated /etc with preset unit settings.
Apr 30 13:57:04.684022 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 30 13:57:04.684028 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 13:57:04.684034 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 13:57:04.684040 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 13:57:04.684048 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 13:57:04.684054 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 13:57:04.684061 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 13:57:04.684067 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 13:57:04.684074 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 13:57:04.684080 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 13:57:04.684087 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 13:57:04.684095 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 13:57:04.684101 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 13:57:04.684108 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 13:57:04.684114 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 13:57:04.684120 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 13:57:04.684127 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 13:57:04.684134 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 13:57:04.684140 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Apr 30 13:57:04.684148 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 13:57:04.684155 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 13:57:04.684162 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 13:57:04.684170 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 13:57:04.684176 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 13:57:04.684183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 13:57:04.684190 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 13:57:04.684197 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 13:57:04.684204 systemd[1]: Reached target swap.target - Swaps.
Apr 30 13:57:04.684211 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 13:57:04.684217 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 13:57:04.684224 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 30 13:57:04.684231 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 13:57:04.684239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 13:57:04.684245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 13:57:04.684252 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 13:57:04.684259 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 13:57:04.684265 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 13:57:04.684272 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 13:57:04.684279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 13:57:04.684286 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 13:57:04.684293 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 13:57:04.684300 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 13:57:04.684307 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 13:57:04.684314 systemd[1]: Reached target machines.target - Containers.
Apr 30 13:57:04.684321 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 13:57:04.684327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 13:57:04.684334 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 13:57:04.684341 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 13:57:04.684348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 13:57:04.684355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 13:57:04.684362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 13:57:04.684368 kernel: ACPI: bus type drm_connector registered
Apr 30 13:57:04.684375 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 13:57:04.684381 kernel: fuse: init (API version 7.39)
Apr 30 13:57:04.684388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 13:57:04.684394 kernel: loop: module loaded
Apr 30 13:57:04.684401 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 13:57:04.684409 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 13:57:04.684416 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 13:57:04.684422 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 13:57:04.684429 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 13:57:04.684436 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 13:57:04.684443 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 13:57:04.684458 systemd-journald[1368]: Collecting audit messages is disabled.
Apr 30 13:57:04.684474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 13:57:04.684481 systemd-journald[1368]: Journal started
Apr 30 13:57:04.684495 systemd-journald[1368]: Runtime Journal (/run/log/journal/cdaa603d57b04c45bdc9e542f5bef466) is 8M, max 639.9M, 631.9M free.
Apr 30 13:57:03.497138 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 13:57:03.513408 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 30 13:57:03.513648 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 13:57:04.713584 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 13:57:04.724586 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 13:57:04.756610 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 30 13:57:04.784600 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 13:57:04.805694 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 13:57:04.805716 systemd[1]: Stopped verity-setup.service.
Apr 30 13:57:04.830581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 13:57:04.838578 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 13:57:04.848006 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 13:57:04.857818 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 13:57:04.867825 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 13:57:04.877822 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 13:57:04.887802 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 13:57:04.897779 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 13:57:04.907906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 13:57:04.919977 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 13:57:04.932040 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 13:57:04.932269 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 13:57:04.944438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 13:57:04.944932 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 13:57:04.957487 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 13:57:04.957988 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 13:57:04.969479 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 13:57:04.969972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 13:57:04.982472 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 13:57:04.982958 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 13:57:04.993511 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 13:57:04.994004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 13:57:05.004630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 13:57:05.015542 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 13:57:05.027523 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 13:57:05.040570 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 30 13:57:05.052572 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 13:57:05.087150 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 13:57:05.118956 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 13:57:05.131759 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 13:57:05.141825 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 13:57:05.141927 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 13:57:05.154919 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 30 13:57:05.177950 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 13:57:05.191597 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 13:57:05.202047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 13:57:05.204418 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 13:57:05.214149 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 13:57:05.224679 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 13:57:05.225308 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 13:57:05.228824 systemd-journald[1368]: Time spent on flushing to /var/log/journal/cdaa603d57b04c45bdc9e542f5bef466 is 12.497ms for 1369 entries.
Apr 30 13:57:05.228824 systemd-journald[1368]: System Journal (/var/log/journal/cdaa603d57b04c45bdc9e542f5bef466) is 8M, max 195.6M, 187.6M free.
Apr 30 13:57:05.252683 systemd-journald[1368]: Received client request to flush runtime journal.
Apr 30 13:57:05.242669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 13:57:05.257701 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 13:57:05.268385 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 13:57:05.280247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 13:57:05.292555 kernel: loop0: detected capacity change from 0 to 147912
Apr 30 13:57:05.297311 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 13:57:05.310108 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
Apr 30 13:57:05.310136 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
Apr 30 13:57:05.310923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 13:57:05.320563 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 13:57:05.327750 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 13:57:05.339785 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 13:57:05.350795 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 13:57:05.361770 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 13:57:05.372743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 13:57:05.380536 kernel: loop1: detected capacity change from 0 to 138176
Apr 30 13:57:05.387757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 13:57:05.401655 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 13:57:05.422818 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 30 13:57:05.435316 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 13:57:05.450539 kernel: loop2: detected capacity change from 0 to 8
Apr 30 13:57:05.450848 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 13:57:05.451411 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 30 13:57:05.464904 udevadm[1409]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 13:57:05.469257 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 13:57:05.488537 kernel: loop3: detected capacity change from 0 to 205544
Apr 30 13:57:05.493705 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 13:57:05.501320 systemd-tmpfiles[1427]: ACLs are not supported, ignoring.
Apr 30 13:57:05.501330 systemd-tmpfiles[1427]: ACLs are not supported, ignoring.
Apr 30 13:57:05.506414 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 13:57:05.549566 kernel: loop4: detected capacity change from 0 to 147912
Apr 30 13:57:05.573545 kernel: loop5: detected capacity change from 0 to 138176
Apr 30 13:57:05.580625 ldconfig[1399]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 13:57:05.582047 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 13:57:05.596573 kernel: loop6: detected capacity change from 0 to 8
Apr 30 13:57:05.603595 kernel: loop7: detected capacity change from 0 to 205544
Apr 30 13:57:05.616742 (sd-merge)[1431]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Apr 30 13:57:05.617012 (sd-merge)[1431]: Merged extensions into '/usr'.
Apr 30 13:57:05.619462 systemd[1]: Reload requested from client PID 1407 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 13:57:05.619469 systemd[1]: Reloading...
Apr 30 13:57:05.647619 zram_generator::config[1458]: No configuration found.
Apr 30 13:57:05.719559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 13:57:05.772031 systemd[1]: Reloading finished in 152 ms.
Apr 30 13:57:05.794311 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 13:57:05.805925 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 13:57:05.829677 systemd[1]: Starting ensure-sysext.service...
Apr 30 13:57:05.837599 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 13:57:05.850001 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 13:57:05.866675 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 13:57:05.866886 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 13:57:05.867535 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 13:57:05.867758 systemd-tmpfiles[1516]: ACLs are not supported, ignoring.
Apr 30 13:57:05.867810 systemd-tmpfiles[1516]: ACLs are not supported, ignoring.
Apr 30 13:57:05.869272 systemd[1]: Reload requested from client PID 1515 ('systemctl') (unit ensure-sysext.service)...
Apr 30 13:57:05.869279 systemd[1]: Reloading...
Apr 30 13:57:05.870080 systemd-tmpfiles[1516]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 13:57:05.870084 systemd-tmpfiles[1516]: Skipping /boot
Apr 30 13:57:05.875395 systemd-tmpfiles[1516]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 13:57:05.875399 systemd-tmpfiles[1516]: Skipping /boot
Apr 30 13:57:05.880317 systemd-udevd[1517]: Using default interface naming scheme 'v255'.
Apr 30 13:57:05.898562 zram_generator::config[1546]: No configuration found.
Apr 30 13:57:05.940391 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Apr 30 13:57:05.940451 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1643)
Apr 30 13:57:05.940464 kernel: ACPI: button: Sleep Button [SLPB]
Apr 30 13:57:05.946542 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 30 13:57:05.961586 kernel: IPMI message handler: version 39.2
Apr 30 13:57:05.961636 kernel: mousedev: PS/2 mouse device common for all mice
Apr 30 13:57:05.974541 kernel: ACPI: button: Power Button [PWRF]
Apr 30 13:57:05.986540 kernel: ipmi device interface
Apr 30 13:57:05.986596 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Apr 30 13:57:05.992576 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Apr 30 13:57:06.000513 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Apr 30 13:57:06.020605 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Apr 30 13:57:06.020756 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Apr 30 13:57:06.022450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 13:57:06.079555 kernel: ipmi_si: IPMI System Interface driver
Apr 30 13:57:06.079614 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Apr 30 13:57:06.092063 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Apr 30 13:57:06.092090 kernel: iTCO_vendor_support: vendor-support=0
Apr 30 13:57:06.092107 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Apr 30 13:57:06.092125 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Apr 30 13:57:06.115849 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Apr 30 13:57:06.115948 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Apr 30 13:57:06.116044 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Apr 30 13:57:06.116061 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Apr 30 13:57:06.111987 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Apr 30 13:57:06.112348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Apr 30 13:57:06.154512 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Apr 30 13:57:06.154990 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Apr 30 13:57:06.157854 systemd[1]: Reloading finished in 288 ms.
Apr 30 13:57:06.174009 kernel: intel_rapl_common: Found RAPL domain package
Apr 30 13:57:06.174049 kernel: intel_rapl_common: Found RAPL domain core
Apr 30 13:57:06.179338 kernel: intel_rapl_common: Found RAPL domain dram
Apr 30 13:57:06.183362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 13:57:06.214537 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Apr 30 13:57:06.217294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 13:57:06.239107 systemd[1]: Finished ensure-sysext.service.
Apr 30 13:57:06.256580 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Apr 30 13:57:06.276600 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Apr 30 13:57:06.286634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 13:57:06.299689 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 13:57:06.308483 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 13:57:06.318417 augenrules[1719]: No rules
Apr 30 13:57:06.320738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 13:57:06.321440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 13:57:06.331233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 13:57:06.341158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 13:57:06.349603 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Apr 30 13:57:06.358537 kernel: ipmi_ssif: IPMI SSIF Interface driver
Apr 30 13:57:06.372891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 13:57:06.383641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 13:57:06.384178 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 13:57:06.395562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 13:57:06.396128 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 13:57:06.408391 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 13:57:06.409325 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 13:57:06.410144 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 13:57:06.439642 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 13:57:06.452083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 13:57:06.462565 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 13:57:06.463175 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 13:57:06.475671 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 13:57:06.475773 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 13:57:06.475958 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 13:57:06.476094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 13:57:06.476181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 13:57:06.476321 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 13:57:06.476403 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 13:57:06.476543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 13:57:06.476627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 13:57:06.476766 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 13:57:06.476846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 13:57:06.476998 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 13:57:06.477154 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 13:57:06.481916 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 13:57:06.494655 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 13:57:06.494687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 13:57:06.494719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 13:57:06.495342 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 13:57:06.496296 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 13:57:06.496319 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 13:57:06.500766 lvm[1748]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 13:57:06.503063 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 13:57:06.519218 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 13:57:06.575793 systemd-resolved[1732]: Positive Trust Anchors:
Apr 30 13:57:06.575801 systemd-resolved[1732]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 13:57:06.575829 systemd-resolved[1732]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 13:57:06.578508 systemd-resolved[1732]: Using system hostname 'ci-4230.1.1-a-07b90b6465'.
Apr 30 13:57:06.580632 systemd-networkd[1731]: lo: Link UP
Apr 30 13:57:06.580636 systemd-networkd[1731]: lo: Gained carrier
Apr 30 13:57:06.583277 systemd-networkd[1731]: bond0: netdev ready
Apr 30 13:57:06.584270 systemd-networkd[1731]: Enumeration completed
Apr 30 13:57:06.588485 systemd-networkd[1731]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:dc:28.network.
Apr 30 13:57:06.598868 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 13:57:06.609828 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 13:57:06.619638 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 13:57:06.629771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 13:57:06.640744 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 13:57:06.652750 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 13:57:06.662619 systemd[1]: Reached target network.target - Network.
Apr 30 13:57:06.670603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 13:57:06.681621 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 13:57:06.691650 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 13:57:06.702619 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 13:57:06.713608 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 13:57:06.724609 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 13:57:06.724625 systemd[1]: Reached target paths.target - Path Units.
Apr 30 13:57:06.732603 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 13:57:06.742686 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 13:57:06.752652 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 13:57:06.763599 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 13:57:06.772307 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 13:57:06.782219 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 13:57:06.791518 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 30 13:57:06.812013 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 13:57:06.821780 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 30 13:57:06.844721 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 13:57:06.846398 lvm[1774]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 13:57:06.856376 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 30 13:57:06.876669 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 13:57:06.888027 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 13:57:06.897770 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 13:57:06.909004 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 13:57:06.918661 systemd[1]: Reached target basic.target - Basic System.
Apr 30 13:57:06.926680 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 13:57:06.926700 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 13:57:06.936669 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 13:57:06.947353 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 13:57:06.957159 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 13:57:06.966132 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 13:57:06.969653 coreos-metadata[1779]: Apr 30 13:57:06.969 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Apr 30 13:57:06.970472 coreos-metadata[1779]: Apr 30 13:57:06.970 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Apr 30 13:57:06.976388 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 13:57:06.977440 dbus-daemon[1780]: [system] SELinux support is enabled
Apr 30 13:57:06.977971 jq[1783]: false
Apr 30 13:57:06.985621 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 13:57:06.986224 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 13:57:06.994063 extend-filesystems[1785]: Found loop4
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found loop5
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found loop6
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found loop7
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found sda
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found sda1
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found sda2
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found sda3
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found usr
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found sda4
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found sda6
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found sda7
Apr 30 13:57:06.995671 extend-filesystems[1785]: Found sda9
Apr 30 13:57:06.995671 extend-filesystems[1785]: Checking size of /dev/sda9
Apr 30 13:57:07.135680 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Apr 30 13:57:07.135708 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1663)
Apr 30 13:57:06.996185 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 13:57:07.135817 extend-filesystems[1785]: Resized partition /dev/sda9
Apr 30 13:57:07.027369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 13:57:07.143875 extend-filesystems[1793]: resize2fs 1.47.1 (20-May-2024)
Apr 30 13:57:07.066113 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 13:57:07.075043 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 13:57:07.110690 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Apr 30 13:57:07.116875 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 13:57:07.117183 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 13:57:07.142103 systemd-logind[1805]: Watching system buttons on /dev/input/event3 (Power Button)
Apr 30 13:57:07.142115 systemd-logind[1805]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 30 13:57:07.142125 systemd-logind[1805]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Apr 30 13:57:07.142284 systemd-logind[1805]: New seat seat0.
Apr 30 13:57:07.151610 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 13:57:07.152018 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 13:57:07.158558 update_engine[1810]: I20250430 13:57:07.158517 1810 main.cc:92] Flatcar Update Engine starting
Apr 30 13:57:07.159340 update_engine[1810]: I20250430 13:57:07.159322 1810 update_check_scheduler.cc:74] Next update check in 5m17s
Apr 30 13:57:07.169602 jq[1811]: true
Apr 30 13:57:07.178126 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 13:57:07.198759 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 13:57:07.198865 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 13:57:07.199052 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 13:57:07.199150 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 13:57:07.209106 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 13:57:07.209207 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 13:57:07.218050 sshd_keygen[1808]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 13:57:07.233902 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 13:57:07.235452 jq[1818]: true
Apr 30 13:57:07.246337 (ntainerd)[1823]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 13:57:07.250468 tar[1813]: linux-amd64/helm
Apr 30 13:57:07.251162 dbus-daemon[1780]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 13:57:07.253353 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Apr 30 13:57:07.253466 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Apr 30 13:57:07.265519 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 13:57:07.288699 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 13:57:07.292109 bash[1850]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 13:57:07.296637 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 13:57:07.296780 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 13:57:07.307653 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 13:57:07.307788 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 13:57:07.329749 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 13:57:07.344461 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 13:57:07.351977 locksmithd[1860]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 13:57:07.355851 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 13:57:07.355969 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 13:57:07.371730 systemd[1]: Starting sshkeys.service...
Apr 30 13:57:07.379441 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 13:57:07.393672 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 13:57:07.409538 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Apr 30 13:57:07.417086 containerd[1823]: time="2025-04-30T13:57:07.417011138Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 13:57:07.423732 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Apr 30 13:57:07.423057 systemd-networkd[1731]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:dc:29.network.
Apr 30 13:57:07.423810 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 13:57:07.430465 containerd[1823]: time="2025-04-30T13:57:07.430445367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431211 containerd[1823]: time="2025-04-30T13:57:07.431196386Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431235 containerd[1823]: time="2025-04-30T13:57:07.431211567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 13:57:07.431235 containerd[1823]: time="2025-04-30T13:57:07.431220784Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 13:57:07.431309 containerd[1823]: time="2025-04-30T13:57:07.431301212Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 13:57:07.431333 containerd[1823]: time="2025-04-30T13:57:07.431312445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431353 containerd[1823]: time="2025-04-30T13:57:07.431344055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431370 containerd[1823]: time="2025-04-30T13:57:07.431353381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431486 containerd[1823]: time="2025-04-30T13:57:07.431477033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431506 containerd[1823]: time="2025-04-30T13:57:07.431485791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431506 containerd[1823]: time="2025-04-30T13:57:07.431493188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431506 containerd[1823]: time="2025-04-30T13:57:07.431498841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431555 containerd[1823]: time="2025-04-30T13:57:07.431547929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431667 containerd[1823]: time="2025-04-30T13:57:07.431658968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431733 containerd[1823]: time="2025-04-30T13:57:07.431724032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 13:57:07.431733 containerd[1823]: time="2025-04-30T13:57:07.431732277Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 13:57:07.431780 containerd[1823]: time="2025-04-30T13:57:07.431773557Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 13:57:07.431807 containerd[1823]: time="2025-04-30T13:57:07.431800550Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 13:57:07.434603 coreos-metadata[1874]: Apr 30 13:57:07.434 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Apr 30 13:57:07.435216 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 13:57:07.435341 coreos-metadata[1874]: Apr 30 13:57:07.435 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Apr 30 13:57:07.442695 containerd[1823]: time="2025-04-30T13:57:07.442679514Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 13:57:07.442746 containerd[1823]: time="2025-04-30T13:57:07.442707890Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 13:57:07.442746 containerd[1823]: time="2025-04-30T13:57:07.442717751Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 13:57:07.442746 containerd[1823]: time="2025-04-30T13:57:07.442727851Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 13:57:07.442746 containerd[1823]: time="2025-04-30T13:57:07.442735600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 13:57:07.442834 containerd[1823]: time="2025-04-30T13:57:07.442810173Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 13:57:07.442936 containerd[1823]: time="2025-04-30T13:57:07.442926838Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 13:57:07.442988 containerd[1823]: time="2025-04-30T13:57:07.442979875Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 13:57:07.443014 containerd[1823]: time="2025-04-30T13:57:07.442989277Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 13:57:07.443014 containerd[1823]: time="2025-04-30T13:57:07.442996909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 13:57:07.443014 containerd[1823]: time="2025-04-30T13:57:07.443004146Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 13:57:07.443014 containerd[1823]: time="2025-04-30T13:57:07.443011223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443017891Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443025202Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443034222Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443041372Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443050071Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443056480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443068215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443076387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443083594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443090901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443098 containerd[1823]: time="2025-04-30T13:57:07.443097758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443104816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443110940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443117418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443124952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443132892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443139722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443145731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443152054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443160194Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443171349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443179078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443340 containerd[1823]: time="2025-04-30T13:57:07.443185551Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 13:57:07.443631 containerd[1823]: time="2025-04-30T13:57:07.443563640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 13:57:07.443631 containerd[1823]: time="2025-04-30T13:57:07.443576537Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 13:57:07.443631 containerd[1823]: time="2025-04-30T13:57:07.443583098Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 13:57:07.443631 containerd[1823]: time="2025-04-30T13:57:07.443589927Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 13:57:07.443631 containerd[1823]: time="2025-04-30T13:57:07.443595953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 13:57:07.443631 containerd[1823]: time="2025-04-30T13:57:07.443603454Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 13:57:07.443631 containerd[1823]: time="2025-04-30T13:57:07.443609042Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 13:57:07.443631 containerd[1823]: time="2025-04-30T13:57:07.443614492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Apr 30 13:57:07.443818 containerd[1823]: time="2025-04-30T13:57:07.443775153Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 13:57:07.443818 containerd[1823]: time="2025-04-30T13:57:07.443802312Z" level=info msg="Connect containerd service" Apr 30 13:57:07.443818 containerd[1823]: time="2025-04-30T13:57:07.443818729Z" level=info msg="using legacy CRI server" Apr 30 13:57:07.443968 containerd[1823]: time="2025-04-30T13:57:07.443823093Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 13:57:07.443968 containerd[1823]: time="2025-04-30T13:57:07.443886082Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 13:57:07.444191 containerd[1823]: time="2025-04-30T13:57:07.444179704Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 13:57:07.444319 containerd[1823]: time="2025-04-30T13:57:07.444297099Z" level=info msg="Start subscribing containerd event" Apr 30 13:57:07.444349 containerd[1823]: time="2025-04-30T13:57:07.444315082Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 30 13:57:07.444349 containerd[1823]: time="2025-04-30T13:57:07.444329239Z" level=info msg="Start recovering state" Apr 30 13:57:07.444349 containerd[1823]: time="2025-04-30T13:57:07.444340539Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 13:57:07.444413 containerd[1823]: time="2025-04-30T13:57:07.444363835Z" level=info msg="Start event monitor" Apr 30 13:57:07.444413 containerd[1823]: time="2025-04-30T13:57:07.444370296Z" level=info msg="Start snapshots syncer" Apr 30 13:57:07.444413 containerd[1823]: time="2025-04-30T13:57:07.444376692Z" level=info msg="Start cni network conf syncer for default" Apr 30 13:57:07.444413 containerd[1823]: time="2025-04-30T13:57:07.444381624Z" level=info msg="Start streaming server" Apr 30 13:57:07.444501 containerd[1823]: time="2025-04-30T13:57:07.444414145Z" level=info msg="containerd successfully booted in 0.027940s" Apr 30 13:57:07.446080 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 13:57:07.473898 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 13:57:07.483669 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Apr 30 13:57:07.494799 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 13:57:07.512430 tar[1813]: linux-amd64/LICENSE Apr 30 13:57:07.512491 tar[1813]: linux-amd64/README.md Apr 30 13:57:07.525590 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Apr 30 13:57:07.552750 extend-filesystems[1793]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 13:57:07.552750 extend-filesystems[1793]: old_desc_blocks = 1, new_desc_blocks = 56 Apr 30 13:57:07.552750 extend-filesystems[1793]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. 
Apr 30 13:57:07.596983 extend-filesystems[1785]: Resized filesystem in /dev/sda9
Apr 30 13:57:07.596983 extend-filesystems[1785]: Found sdb
Apr 30 13:57:07.627744 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Apr 30 13:57:07.628198 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Apr 30 13:57:07.628237 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Apr 30 13:57:07.553232 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 13:57:07.553351 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 13:57:07.610598 systemd-networkd[1731]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Apr 30 13:57:07.612152 systemd-networkd[1731]: enp1s0f0np0: Link UP
Apr 30 13:57:07.612552 systemd-networkd[1731]: enp1s0f0np0: Gained carrier
Apr 30 13:57:07.621721 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 30 13:57:07.640585 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 13:57:07.641698 systemd-networkd[1731]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:dc:28.network.
Apr 30 13:57:07.642577 systemd-networkd[1731]: enp1s0f1np1: Link UP
Apr 30 13:57:07.643318 systemd-networkd[1731]: enp1s0f1np1: Gained carrier
Apr 30 13:57:07.662248 systemd-networkd[1731]: bond0: Link UP
Apr 30 13:57:07.663104 systemd-networkd[1731]: bond0: Gained carrier
Apr 30 13:57:07.663778 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:07.665285 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:07.666170 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:07.666661 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:07.729697 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Apr 30 13:57:07.729848 kernel: bond0: active interface up!
Apr 30 13:57:07.757857 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 13:57:07.777733 systemd[1]: Started sshd@0-147.75.202.185:22-147.75.109.163:35384.service - OpenSSH per-connection server daemon (147.75.109.163:35384).
Apr 30 13:57:07.817311 sshd[1900]: Accepted publickey for core from 147.75.109.163 port 35384 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:07.818159 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:07.825259 systemd-logind[1805]: New session 1 of user core.
Apr 30 13:57:07.826057 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 13:57:07.846582 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Apr 30 13:57:07.846725 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 13:57:07.860573 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 13:57:07.883903 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 13:57:07.900836 (systemd)[1904]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 13:57:07.902217 systemd-logind[1805]: New session c1 of user core.
Apr 30 13:57:07.970650 coreos-metadata[1779]: Apr 30 13:57:07.970 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Apr 30 13:57:08.001177 systemd[1904]: Queued start job for default target default.target.
Apr 30 13:57:08.011069 systemd[1904]: Created slice app.slice - User Application Slice.
Apr 30 13:57:08.011103 systemd[1904]: Reached target paths.target - Paths.
Apr 30 13:57:08.011124 systemd[1904]: Reached target timers.target - Timers.
Apr 30 13:57:08.011736 systemd[1904]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 13:57:08.017193 systemd[1904]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 13:57:08.017221 systemd[1904]: Reached target sockets.target - Sockets.
Apr 30 13:57:08.017245 systemd[1904]: Reached target basic.target - Basic System.
Apr 30 13:57:08.017266 systemd[1904]: Reached target default.target - Main User Target.
Apr 30 13:57:08.017280 systemd[1904]: Startup finished in 111ms.
Apr 30 13:57:08.017334 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 13:57:08.027638 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 13:57:08.435405 coreos-metadata[1874]: Apr 30 13:57:08.435 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Apr 30 13:57:08.819660 systemd-networkd[1731]: bond0: Gained IPv6LL
Apr 30 13:57:08.819963 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:09.651995 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:09.652113 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:09.653072 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 13:57:09.666353 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 13:57:09.702014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:57:09.712395 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 13:57:09.731083 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 13:57:10.397109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:57:10.408191 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 13:57:10.856105 kubelet[1930]: E0430 13:57:10.855992 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 13:57:10.857096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 13:57:10.857175 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 13:57:10.857371 systemd[1]: kubelet.service: Consumed 545ms CPU time, 242.1M memory peak.
Apr 30 13:57:11.286316 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2
Apr 30 13:57:11.287028 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity
Apr 30 13:57:11.405113 systemd[1]: Started sshd@1-147.75.202.185:22-147.75.109.163:35398.service - OpenSSH per-connection server daemon (147.75.109.163:35398).
Apr 30 13:57:11.443596 sshd[1953]: Accepted publickey for core from 147.75.109.163 port 35398 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:11.444265 sshd-session[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:11.446964 systemd-logind[1805]: New session 2 of user core.
Apr 30 13:57:11.456859 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 13:57:11.513066 sshd[1955]: Connection closed by 147.75.109.163 port 35398
Apr 30 13:57:11.513172 sshd-session[1953]: pam_unix(sshd:session): session closed for user core
Apr 30 13:57:11.522864 systemd[1]: sshd@1-147.75.202.185:22-147.75.109.163:35398.service: Deactivated successfully.
Apr 30 13:57:11.523658 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 13:57:11.524283 systemd-logind[1805]: Session 2 logged out. Waiting for processes to exit.
Apr 30 13:57:11.525041 systemd[1]: Started sshd@2-147.75.202.185:22-147.75.109.163:35402.service - OpenSSH per-connection server daemon (147.75.109.163:35402).
Apr 30 13:57:11.536400 systemd-logind[1805]: Removed session 2.
Apr 30 13:57:11.561455 sshd[1960]: Accepted publickey for core from 147.75.109.163 port 35402 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:11.562089 sshd-session[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:11.564524 systemd-logind[1805]: New session 3 of user core.
Apr 30 13:57:11.579919 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 13:57:11.636013 sshd[1963]: Connection closed by 147.75.109.163 port 35402
Apr 30 13:57:11.636118 sshd-session[1960]: pam_unix(sshd:session): session closed for user core
Apr 30 13:57:11.637317 systemd[1]: sshd@2-147.75.202.185:22-147.75.109.163:35402.service: Deactivated successfully.
Apr 30 13:57:11.638173 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 13:57:11.638920 systemd-logind[1805]: Session 3 logged out. Waiting for processes to exit.
Apr 30 13:57:11.639455 systemd-logind[1805]: Removed session 3.
Apr 30 13:57:12.576079 login[1891]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Apr 30 13:57:12.607063 login[1892]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 30 13:57:12.611070 systemd-logind[1805]: New session 4 of user core.
Apr 30 13:57:12.620842 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 13:57:13.577203 login[1891]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 30 13:57:13.589269 systemd-logind[1805]: New session 5 of user core.
Apr 30 13:57:13.612171 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 13:57:14.090084 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:14.207253 coreos-metadata[1874]: Apr 30 13:57:14.207 INFO Fetch successful
Apr 30 13:57:14.246116 unknown[1874]: wrote ssh authorized keys file for user: core
Apr 30 13:57:14.269092 update-ssh-keys[1992]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 13:57:14.269468 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 13:57:14.270414 systemd[1]: Finished sshkeys.service.
Apr 30 13:57:14.751128 coreos-metadata[1779]: Apr 30 13:57:14.751 INFO Fetch successful
Apr 30 13:57:14.800567 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 13:57:14.801652 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
Apr 30 13:57:15.994999 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Apr 30 13:57:15.997804 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 13:57:15.998504 systemd[1]: Startup finished in 2.687s (kernel) + 22.203s (initrd) + 13.020s (userspace) = 37.911s.
Apr 30 13:57:21.108505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 13:57:21.117833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 13:57:21.359148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 13:57:21.361247 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 13:57:21.381479 kubelet[2011]: E0430 13:57:21.381425 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 13:57:21.383325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 13:57:21.383405 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 13:57:21.383614 systemd[1]: kubelet.service: Consumed 120ms CPU time, 108.4M memory peak.
Apr 30 13:57:21.661515 systemd[1]: Started sshd@3-147.75.202.185:22-147.75.109.163:39792.service - OpenSSH per-connection server daemon (147.75.109.163:39792).
Apr 30 13:57:21.689021 sshd[2027]: Accepted publickey for core from 147.75.109.163 port 39792 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:21.689704 sshd-session[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:21.692581 systemd-logind[1805]: New session 6 of user core.
Apr 30 13:57:21.700820 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 13:57:21.760682 sshd[2029]: Connection closed by 147.75.109.163 port 39792
Apr 30 13:57:21.761451 sshd-session[2027]: pam_unix(sshd:session): session closed for user core
Apr 30 13:57:21.792483 systemd[1]: sshd@3-147.75.202.185:22-147.75.109.163:39792.service: Deactivated successfully.
Apr 30 13:57:21.793272 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 13:57:21.793769 systemd-logind[1805]: Session 6 logged out. Waiting for processes to exit.
Apr 30 13:57:21.794537 systemd[1]: Started sshd@4-147.75.202.185:22-147.75.109.163:39806.service - OpenSSH per-connection server daemon (147.75.109.163:39806).
Apr 30 13:57:21.795115 systemd-logind[1805]: Removed session 6.
Apr 30 13:57:21.823052 sshd[2034]: Accepted publickey for core from 147.75.109.163 port 39806 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:21.823745 sshd-session[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:21.826984 systemd-logind[1805]: New session 7 of user core.
Apr 30 13:57:21.836782 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 13:57:21.886714 sshd[2038]: Connection closed by 147.75.109.163 port 39806
Apr 30 13:57:21.886893 sshd-session[2034]: pam_unix(sshd:session): session closed for user core
Apr 30 13:57:21.898711 systemd[1]: sshd@4-147.75.202.185:22-147.75.109.163:39806.service: Deactivated successfully.
Apr 30 13:57:21.899557 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 13:57:21.900144 systemd-logind[1805]: Session 7 logged out. Waiting for processes to exit.
Apr 30 13:57:21.901121 systemd[1]: Started sshd@5-147.75.202.185:22-147.75.109.163:39820.service - OpenSSH per-connection server daemon (147.75.109.163:39820).
Apr 30 13:57:21.901676 systemd-logind[1805]: Removed session 7.
Apr 30 13:57:21.930122 sshd[2043]: Accepted publickey for core from 147.75.109.163 port 39820 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:21.930870 sshd-session[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:21.934232 systemd-logind[1805]: New session 8 of user core.
Apr 30 13:57:21.958952 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 13:57:22.014544 sshd[2046]: Connection closed by 147.75.109.163 port 39820
Apr 30 13:57:22.014711 sshd-session[2043]: pam_unix(sshd:session): session closed for user core
Apr 30 13:57:22.032326 systemd[1]: sshd@5-147.75.202.185:22-147.75.109.163:39820.service: Deactivated successfully.
Apr 30 13:57:22.033260 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 13:57:22.033838 systemd-logind[1805]: Session 8 logged out. Waiting for processes to exit.
Apr 30 13:57:22.034897 systemd[1]: Started sshd@6-147.75.202.185:22-147.75.109.163:39836.service - OpenSSH per-connection server daemon (147.75.109.163:39836).
Apr 30 13:57:22.035625 systemd-logind[1805]: Removed session 8.
Apr 30 13:57:22.069135 sshd[2051]: Accepted publickey for core from 147.75.109.163 port 39836 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:22.070047 sshd-session[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:22.073762 systemd-logind[1805]: New session 9 of user core.
Apr 30 13:57:22.088853 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 13:57:22.155065 sudo[2055]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 13:57:22.155213 sudo[2055]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 13:57:22.169254 sudo[2055]: pam_unix(sudo:session): session closed for user root
Apr 30 13:57:22.170107 sshd[2054]: Connection closed by 147.75.109.163 port 39836
Apr 30 13:57:22.170289 sshd-session[2051]: pam_unix(sshd:session): session closed for user core
Apr 30 13:57:22.185479 systemd[1]: sshd@6-147.75.202.185:22-147.75.109.163:39836.service: Deactivated successfully.
Apr 30 13:57:22.186560 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 13:57:22.187198 systemd-logind[1805]: Session 9 logged out. Waiting for processes to exit.
Apr 30 13:57:22.188521 systemd[1]: Started sshd@7-147.75.202.185:22-147.75.109.163:39840.service - OpenSSH per-connection server daemon (147.75.109.163:39840).
Apr 30 13:57:22.189303 systemd-logind[1805]: Removed session 9.
Apr 30 13:57:22.227178 sshd[2060]: Accepted publickey for core from 147.75.109.163 port 39840 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:22.228313 sshd-session[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:22.232664 systemd-logind[1805]: New session 10 of user core.
Apr 30 13:57:22.246865 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 13:57:22.301655 sudo[2065]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 13:57:22.301809 sudo[2065]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 13:57:22.303837 sudo[2065]: pam_unix(sudo:session): session closed for user root
Apr 30 13:57:22.306587 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 13:57:22.306742 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 13:57:22.321951 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 13:57:22.344451 augenrules[2087]: No rules
Apr 30 13:57:22.344813 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 13:57:22.344923 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 13:57:22.345403 sudo[2064]: pam_unix(sudo:session): session closed for user root
Apr 30 13:57:22.346168 sshd[2063]: Connection closed by 147.75.109.163 port 39840
Apr 30 13:57:22.346337 sshd-session[2060]: pam_unix(sshd:session): session closed for user core
Apr 30 13:57:22.364041 systemd[1]: sshd@7-147.75.202.185:22-147.75.109.163:39840.service: Deactivated successfully.
Apr 30 13:57:22.364927 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 13:57:22.365470 systemd-logind[1805]: Session 10 logged out. Waiting for processes to exit.
Apr 30 13:57:22.366581 systemd[1]: Started sshd@8-147.75.202.185:22-147.75.109.163:39850.service - OpenSSH per-connection server daemon (147.75.109.163:39850).
Apr 30 13:57:22.367247 systemd-logind[1805]: Removed session 10.
Apr 30 13:57:22.401177 sshd[2095]: Accepted publickey for core from 147.75.109.163 port 39850 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290
Apr 30 13:57:22.402090 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 13:57:22.405825 systemd-logind[1805]: New session 11 of user core.
Apr 30 13:57:22.415783 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 13:57:22.469353 sudo[2099]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 13:57:22.469499 sudo[2099]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 13:57:22.766917 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 13:57:22.766968 (dockerd)[2126]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 13:57:23.034934 dockerd[2126]: time="2025-04-30T13:57:23.034867908Z" level=info msg="Starting up"
Apr 30 13:57:23.111705 dockerd[2126]: time="2025-04-30T13:57:23.111659073Z" level=info msg="Loading containers: start."
Apr 30 13:57:23.252607 kernel: Initializing XFRM netlink socket
Apr 30 13:57:23.268417 systemd-timesyncd[1733]: Network configuration changed, trying to establish connection.
Apr 30 13:57:23.340508 systemd-networkd[1731]: docker0: Link UP
Apr 30 13:57:23.379832 dockerd[2126]: time="2025-04-30T13:57:23.379774392Z" level=info msg="Loading containers: done."
Apr 30 13:57:23.390935 dockerd[2126]: time="2025-04-30T13:57:23.390910818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 13:57:23.391020 dockerd[2126]: time="2025-04-30T13:57:23.390953095Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Apr 30 13:57:23.391020 dockerd[2126]: time="2025-04-30T13:57:23.391007812Z" level=info msg="Daemon has completed initialization"
Apr 30 13:57:23.391178 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck919386006-merged.mount: Deactivated successfully.
Apr 30 13:57:23.405252 dockerd[2126]: time="2025-04-30T13:57:23.405199491Z" level=info msg="API listen on /run/docker.sock"
Apr 30 13:57:23.405300 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 13:57:24.246842 systemd-resolved[1732]: Clock change detected. Flushing caches.
Apr 30 13:57:24.246900 systemd-timesyncd[1733]: Contacted time server [2607:b500:410:7700::1]:123 (2.flatcar.pool.ntp.org).
Apr 30 13:57:24.246934 systemd-timesyncd[1733]: Initial clock synchronization to Wed 2025-04-30 13:57:24.246752 UTC.
Apr 30 13:57:24.876212 containerd[1823]: time="2025-04-30T13:57:24.876073360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
Apr 30 13:57:25.461870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355428781.mount: Deactivated successfully.
Apr 30 13:57:26.712324 containerd[1823]: time="2025-04-30T13:57:26.712275455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:26.712531 containerd[1823]: time="2025-04-30T13:57:26.712462415Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
Apr 30 13:57:26.712866 containerd[1823]: time="2025-04-30T13:57:26.712827168Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:26.714450 containerd[1823]: time="2025-04-30T13:57:26.714409545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:26.715017 containerd[1823]: time="2025-04-30T13:57:26.714976662Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.838831333s"
Apr 30 13:57:26.715017 containerd[1823]: time="2025-04-30T13:57:26.714993677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
Apr 30 13:57:26.716062 containerd[1823]: time="2025-04-30T13:57:26.716019326Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
Apr 30 13:57:28.294429 containerd[1823]: time="2025-04-30T13:57:28.294405370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:28.294707 containerd[1823]: time="2025-04-30T13:57:28.294595751Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
Apr 30 13:57:28.295013 containerd[1823]: time="2025-04-30T13:57:28.295002331Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:28.296642 containerd[1823]: time="2025-04-30T13:57:28.296630985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:28.297372 containerd[1823]: time="2025-04-30T13:57:28.297356284Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.581316401s"
Apr 30 13:57:28.297397 containerd[1823]: time="2025-04-30T13:57:28.297378444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
Apr 30 13:57:28.297752 containerd[1823]: time="2025-04-30T13:57:28.297712258Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
Apr 30 13:57:29.372912 containerd[1823]: time="2025-04-30T13:57:29.372889348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:29.373125 containerd[1823]: time="2025-04-30T13:57:29.373031331Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
Apr 30 13:57:29.373478 containerd[1823]: time="2025-04-30T13:57:29.373468026Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:29.375062 containerd[1823]: time="2025-04-30T13:57:29.375020083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:57:29.375737 containerd[1823]: time="2025-04-30T13:57:29.375693952Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.077965639s"
Apr 30 13:57:29.375737 containerd[1823]: time="2025-04-30T13:57:29.375711099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
Apr 30 13:57:29.376000 containerd[1823]: time="2025-04-30T13:57:29.375985213Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
Apr 30 13:57:30.133128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046251329.mount: Deactivated successfully.
Apr 30 13:57:30.398551 containerd[1823]: time="2025-04-30T13:57:30.398459079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:30.398743 containerd[1823]: time="2025-04-30T13:57:30.398671671Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" Apr 30 13:57:30.398995 containerd[1823]: time="2025-04-30T13:57:30.398955772Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:30.399980 containerd[1823]: time="2025-04-30T13:57:30.399936727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:30.400366 containerd[1823]: time="2025-04-30T13:57:30.400325614Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.024322942s" Apr 30 13:57:30.400366 containerd[1823]: time="2025-04-30T13:57:30.400341239Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Apr 30 13:57:30.400625 containerd[1823]: time="2025-04-30T13:57:30.400613673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 13:57:30.837389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613836476.mount: Deactivated successfully. 
Apr 30 13:57:31.412507 containerd[1823]: time="2025-04-30T13:57:31.412482400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:31.412733 containerd[1823]: time="2025-04-30T13:57:31.412704815Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 13:57:31.413060 containerd[1823]: time="2025-04-30T13:57:31.413050111Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:31.415491 containerd[1823]: time="2025-04-30T13:57:31.415449889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:31.416032 containerd[1823]: time="2025-04-30T13:57:31.415986153Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.015357609s" Apr 30 13:57:31.416032 containerd[1823]: time="2025-04-30T13:57:31.416001670Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 13:57:31.416402 containerd[1823]: time="2025-04-30T13:57:31.416368229Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 13:57:31.831493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1865367240.mount: Deactivated successfully. 
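Each "Pulled image" entry above records the tag alongside the content-addressed repo digest, which is what containerd actually pins. A small sketch for extracting that digest from such a message with standard tools (the log line below is abbreviated from the coredns entry above):

```shell
# Pull the sha256 repo digest out of a containerd "Pulled image" log message.
# The sample line is abbreviated from the coredns/coredns:v1.11.1 entry above.
line='Pulled image "registry.k8s.io/coredns/coredns:v1.11.1" ... repo digest "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"'
# grep -o prints only the matching token; head -n1 guards against lines that
# happen to contain more than one sha256 reference.
digest=$(printf '%s\n' "$line" | grep -o 'sha256:[0-9a-f]\{64\}' | head -n1)
echo "$digest"
```

The same pattern works against a full journal capture (`journalctl -u containerd | grep -o ...`), though that invocation is an assumption about how this log was collected.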
Apr 30 13:57:31.832973 containerd[1823]: time="2025-04-30T13:57:31.832957048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:31.833130 containerd[1823]: time="2025-04-30T13:57:31.833110505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 30 13:57:31.833611 containerd[1823]: time="2025-04-30T13:57:31.833600695Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:31.834671 containerd[1823]: time="2025-04-30T13:57:31.834660614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:31.835191 containerd[1823]: time="2025-04-30T13:57:31.835180441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 418.775604ms" Apr 30 13:57:31.835213 containerd[1823]: time="2025-04-30T13:57:31.835195450Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 13:57:31.835647 containerd[1823]: time="2025-04-30T13:57:31.835622815Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Apr 30 13:57:32.306442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 13:57:32.320386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 13:57:32.321436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163405044.mount: Deactivated successfully. Apr 30 13:57:32.546296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:57:32.548556 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:57:32.572207 kubelet[2498]: E0430 13:57:32.572136 2498 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:57:32.573756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:57:32.573852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:57:32.574060 systemd[1]: kubelet.service: Consumed 92ms CPU time, 101.7M memory peak. 
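The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is generated later by `kubeadm init`/`kubeadm join`, and systemd simply keeps restarting the unit until then. For orientation only, a minimal hand-written stand-in might look like the sketch below (field names are from the real kubelet.config.k8s.io/v1beta1 API, but the values are illustrative, not what kubeadm actually wrote on this host):

```yaml
# Illustrative sketch of /var/lib/kubelet/config.yaml -- on this host the real
# file is generated by kubeadm, not written by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches "CgroupDriver":"systemd" logged later in this boot
staticPodPath: /etc/kubernetes/manifests
failSwapOn: false                # the kubelet later logs "Swap is on" for this node
```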
Apr 30 13:57:33.547612 containerd[1823]: time="2025-04-30T13:57:33.547584697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:33.547836 containerd[1823]: time="2025-04-30T13:57:33.547730497Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Apr 30 13:57:33.548317 containerd[1823]: time="2025-04-30T13:57:33.548305412Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:33.550272 containerd[1823]: time="2025-04-30T13:57:33.550258364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:33.550852 containerd[1823]: time="2025-04-30T13:57:33.550842159Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.71519076s" Apr 30 13:57:33.550873 containerd[1823]: time="2025-04-30T13:57:33.550856429Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Apr 30 13:57:35.455212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:57:35.455369 systemd[1]: kubelet.service: Consumed 92ms CPU time, 101.7M memory peak. Apr 30 13:57:35.475628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:57:35.489526 systemd[1]: Reload requested from client PID 2594 ('systemctl') (unit session-11.scope)... 
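Each completed pull logs both the byte count ("bytes read=...") and the wall time ("in 1.71519076s" for etcd just above), so rough transfer rates can be read straight off the log. A hedged sketch using the etcd figures:

```shell
# Approximate pull throughput from the figures containerd logs: "bytes read"
# from the "stop pulling image" entry and the duration from "Pulled image".
# Both values below are copied from the etcd:3.5.15-0 pull above.
bytes=56780013
seconds=1.71519076
awk -v b="$bytes" -v s="$seconds" \
    'BEGIN { printf "%.1f MiB/s\n", b / s / 1048576 }'   # prints "31.6 MiB/s"
```

Note the logged byte count is compressed transfer, so this understates the rate at which image content lands on disk.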
Apr 30 13:57:35.489534 systemd[1]: Reloading... Apr 30 13:57:35.530272 zram_generator::config[2640]: No configuration found. Apr 30 13:57:35.600762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:57:35.684117 systemd[1]: Reloading finished in 194 ms. Apr 30 13:57:35.723988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:57:35.725613 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:57:35.726098 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 13:57:35.726208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:57:35.726227 systemd[1]: kubelet.service: Consumed 48ms CPU time, 83.5M memory peak. Apr 30 13:57:35.727118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:57:35.930529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:57:35.933472 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:57:35.953065 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:57:35.953065 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 13:57:35.953065 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
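The reload above also surfaces systemd's warning that docker.socket still listens below the legacy /var/run/ directory. One way to silence it without editing the vendor unit is a drop-in that re-points the socket at /run; a sketch, with the drop-in path and filename assumed rather than taken from this host:

```ini
# Hypothetical drop-in: /etc/systemd/system/docker.socket.d/listen-run.conf
[Socket]
# An empty assignment clears the inherited ListenStream list,
# then the modern path is added back.
ListenStream=
ListenStream=/run/docker.sock
```

After `systemctl daemon-reload`, the warning should no longer appear on subsequent reloads.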
Apr 30 13:57:35.953946 kubelet[2709]: I0430 13:57:35.953900 2709 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 13:57:36.089915 kubelet[2709]: I0430 13:57:36.089875 2709 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 13:57:36.089915 kubelet[2709]: I0430 13:57:36.089886 2709 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 13:57:36.090013 kubelet[2709]: I0430 13:57:36.090001 2709 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 13:57:36.112422 kubelet[2709]: I0430 13:57:36.112368 2709 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 13:57:36.112996 kubelet[2709]: E0430 13:57:36.112938 2709 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.202.185:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.202.185:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:57:36.119040 kubelet[2709]: E0430 13:57:36.118965 2709 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 13:57:36.119040 kubelet[2709]: I0430 13:57:36.119009 2709 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 13:57:36.129560 kubelet[2709]: I0430 13:57:36.129523 2709 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 13:57:36.130439 kubelet[2709]: I0430 13:57:36.130402 2709 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 13:57:36.130485 kubelet[2709]: I0430 13:57:36.130471 2709 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 13:57:36.130602 kubelet[2709]: I0430 13:57:36.130482 2709 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-07b90b6465","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none",
"TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 13:57:36.130602 kubelet[2709]: I0430 13:57:36.130581 2709 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 13:57:36.130602 kubelet[2709]: I0430 13:57:36.130587 2709 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 13:57:36.130709 kubelet[2709]: I0430 13:57:36.130642 2709 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:57:36.132181 kubelet[2709]: I0430 13:57:36.132173 2709 kubelet.go:408] "Attempting to sync node with API server" Apr 30 13:57:36.132181 kubelet[2709]: I0430 13:57:36.132182 2709 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 13:57:36.132290 kubelet[2709]: I0430 13:57:36.132197 2709 kubelet.go:314] "Adding apiserver pod source" Apr 30 13:57:36.132290 kubelet[2709]: I0430 13:57:36.132203 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 13:57:36.135527 kubelet[2709]: W0430 13:57:36.135484 2709 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.202.185:6443: connect: connection refused Apr 30 13:57:36.135580 kubelet[2709]: E0430 13:57:36.135547 2709 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.202.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.202.185:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:57:36.137109 kubelet[2709]: I0430 13:57:36.137096 2709 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 13:57:36.138426 kubelet[2709]: I0430 13:57:36.138416 2709 kubelet.go:837] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode" Apr 30 13:57:36.138466 kubelet[2709]: W0430 13:57:36.138419 2709 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.202.185:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-07b90b6465&limit=500&resourceVersion=0": dial tcp 147.75.202.185:6443: connect: connection refused Apr 30 13:57:36.138466 kubelet[2709]: E0430 13:57:36.138454 2709 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.202.185:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-07b90b6465&limit=500&resourceVersion=0\": dial tcp 147.75.202.185:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:57:36.138995 kubelet[2709]: W0430 13:57:36.138984 2709 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 13:57:36.139397 kubelet[2709]: I0430 13:57:36.139367 2709 server.go:1269] "Started kubelet" Apr 30 13:57:36.139446 kubelet[2709]: I0430 13:57:36.139407 2709 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 13:57:36.139487 kubelet[2709]: I0430 13:57:36.139423 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 13:57:36.139671 kubelet[2709]: I0430 13:57:36.139632 2709 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 13:57:36.140287 kubelet[2709]: I0430 13:57:36.140280 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 13:57:36.140336 kubelet[2709]: I0430 13:57:36.140326 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 13:57:36.140475 kubelet[2709]: I0430 13:57:36.140346 2709 server.go:460] "Adding debug 
handlers to kubelet server" Apr 30 13:57:36.140475 kubelet[2709]: I0430 13:57:36.140410 2709 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 13:57:36.140475 kubelet[2709]: E0430 13:57:36.140416 2709 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-07b90b6465\" not found" Apr 30 13:57:36.140475 kubelet[2709]: I0430 13:57:36.140439 2709 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 13:57:36.140475 kubelet[2709]: I0430 13:57:36.140469 2709 reconciler.go:26] "Reconciler: start to sync state" Apr 30 13:57:36.140627 kubelet[2709]: E0430 13:57:36.140574 2709 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-07b90b6465?timeout=10s\": dial tcp 147.75.202.185:6443: connect: connection refused" interval="200ms" Apr 30 13:57:36.140660 kubelet[2709]: E0430 13:57:36.140636 2709 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 13:57:36.140660 kubelet[2709]: W0430 13:57:36.140622 2709 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.185:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.185:6443: connect: connection refused Apr 30 13:57:36.140660 kubelet[2709]: E0430 13:57:36.140656 2709 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.202.185:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.202.185:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:57:36.140753 kubelet[2709]: I0430 13:57:36.140660 2709 factory.go:221] Registration of the systemd container factory successfully Apr 30 13:57:36.140753 kubelet[2709]: I0430 13:57:36.140722 2709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 13:57:36.141160 kubelet[2709]: I0430 13:57:36.141150 2709 factory.go:221] Registration of the containerd container factory successfully Apr 30 13:57:36.146570 kubelet[2709]: E0430 13:57:36.144500 2709 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.202.185:6443/api/v1/namespaces/default/events\": dial tcp 147.75.202.185:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-a-07b90b6465.183b1d4377658ad9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-a-07b90b6465,UID:ci-4230.1.1-a-07b90b6465,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-a-07b90b6465,},FirstTimestamp:2025-04-30 13:57:36.139356889 +0000 UTC m=+0.203645025,LastTimestamp:2025-04-30 13:57:36.139356889 +0000 UTC m=+0.203645025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-a-07b90b6465,}" Apr 30 13:57:36.149403 kubelet[2709]: I0430 13:57:36.149376 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 13:57:36.149745 kubelet[2709]: I0430 13:57:36.149735 2709 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 13:57:36.149745 kubelet[2709]: I0430 13:57:36.149743 2709 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 13:57:36.149813 kubelet[2709]: I0430 13:57:36.149752 2709 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:57:36.150025 kubelet[2709]: I0430 13:57:36.150017 2709 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 13:57:36.150046 kubelet[2709]: I0430 13:57:36.150034 2709 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 13:57:36.150046 kubelet[2709]: I0430 13:57:36.150046 2709 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 13:57:36.150083 kubelet[2709]: E0430 13:57:36.150066 2709 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 13:57:36.150357 kubelet[2709]: W0430 13:57:36.150316 2709 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.202.185:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.185:6443: connect: connection refused Apr 30 13:57:36.150387 kubelet[2709]: E0430 13:57:36.150358 2709 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.202.185:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.202.185:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:57:36.150714 kubelet[2709]: I0430 13:57:36.150683 2709 policy_none.go:49] "None policy: Start" Apr 30 13:57:36.150990 kubelet[2709]: I0430 13:57:36.150951 2709 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 13:57:36.150990 kubelet[2709]: I0430 13:57:36.150962 2709 state_mem.go:35] "Initializing new in-memory state store" Apr 30 13:57:36.157681 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 13:57:36.186427 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 13:57:36.188820 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 30 13:57:36.206132 kubelet[2709]: I0430 13:57:36.206072 2709 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 13:57:36.206286 kubelet[2709]: I0430 13:57:36.206244 2709 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 13:57:36.206286 kubelet[2709]: I0430 13:57:36.206257 2709 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 13:57:36.206454 kubelet[2709]: I0430 13:57:36.206407 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 13:57:36.207184 kubelet[2709]: E0430 13:57:36.207138 2709 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-a-07b90b6465\" not found" Apr 30 13:57:36.275568 systemd[1]: Created slice kubepods-burstable-pod39dad65fb4a36ea20f59dd5c395f9a15.slice - libcontainer container kubepods-burstable-pod39dad65fb4a36ea20f59dd5c395f9a15.slice. Apr 30 13:57:36.311404 kubelet[2709]: I0430 13:57:36.311314 2709 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.312171 kubelet[2709]: E0430 13:57:36.312110 2709 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.202.185:6443/api/v1/nodes\": dial tcp 147.75.202.185:6443: connect: connection refused" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.314267 systemd[1]: Created slice kubepods-burstable-pod50e9aee89eee93ec1e016b307399124d.slice - libcontainer container kubepods-burstable-pod50e9aee89eee93ec1e016b307399124d.slice. 
Apr 30 13:57:36.341726 kubelet[2709]: E0430 13:57:36.341477 2709 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-07b90b6465?timeout=10s\": dial tcp 147.75.202.185:6443: connect: connection refused" interval="400ms" Apr 30 13:57:36.342656 kubelet[2709]: I0430 13:57:36.342551 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50e9aee89eee93ec1e016b307399124d-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-07b90b6465\" (UID: \"50e9aee89eee93ec1e016b307399124d\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.342656 kubelet[2709]: I0430 13:57:36.342637 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/443602997cf2579bf62ef17539f45b27-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" (UID: \"443602997cf2579bf62ef17539f45b27\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.342975 kubelet[2709]: I0430 13:57:36.342689 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/443602997cf2579bf62ef17539f45b27-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" (UID: \"443602997cf2579bf62ef17539f45b27\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.342975 kubelet[2709]: I0430 13:57:36.342739 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/443602997cf2579bf62ef17539f45b27-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" (UID: \"443602997cf2579bf62ef17539f45b27\") " 
pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.342975 kubelet[2709]: I0430 13:57:36.342797 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.342975 kubelet[2709]: I0430 13:57:36.342848 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.342975 kubelet[2709]: I0430 13:57:36.342927 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.343483 kubelet[2709]: I0430 13:57:36.342977 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.343483 kubelet[2709]: I0430 13:57:36.343025 2709 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.347683 systemd[1]: Created slice kubepods-burstable-pod443602997cf2579bf62ef17539f45b27.slice - libcontainer container kubepods-burstable-pod443602997cf2579bf62ef17539f45b27.slice. Apr 30 13:57:36.517105 kubelet[2709]: I0430 13:57:36.517053 2709 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.517866 kubelet[2709]: E0430 13:57:36.517782 2709 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.202.185:6443/api/v1/nodes\": dial tcp 147.75.202.185:6443: connect: connection refused" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.606833 containerd[1823]: time="2025-04-30T13:57:36.606589617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-07b90b6465,Uid:39dad65fb4a36ea20f59dd5c395f9a15,Namespace:kube-system,Attempt:0,}" Apr 30 13:57:36.640855 containerd[1823]: time="2025-04-30T13:57:36.640723600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-07b90b6465,Uid:50e9aee89eee93ec1e016b307399124d,Namespace:kube-system,Attempt:0,}" Apr 30 13:57:36.654453 containerd[1823]: time="2025-04-30T13:57:36.654345302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-07b90b6465,Uid:443602997cf2579bf62ef17539f45b27,Namespace:kube-system,Attempt:0,}" Apr 30 13:57:36.742162 kubelet[2709]: E0430 13:57:36.742107 2709 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.185:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-07b90b6465?timeout=10s\": dial tcp 147.75.202.185:6443: connect: connection refused" interval="800ms" Apr 
30 13:57:36.919395 kubelet[2709]: I0430 13:57:36.919256 2709 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.919664 kubelet[2709]: E0430 13:57:36.919603 2709 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.202.185:6443/api/v1/nodes\": dial tcp 147.75.202.185:6443: connect: connection refused" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:36.939083 kubelet[2709]: W0430 13:57:36.939017 2709 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.202.185:6443: connect: connection refused Apr 30 13:57:36.939083 kubelet[2709]: E0430 13:57:36.939062 2709 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.202.185:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.202.185:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:57:37.017952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132457229.mount: Deactivated successfully. 
Apr 30 13:57:37.019198 containerd[1823]: time="2025-04-30T13:57:37.019180047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:57:37.019466 containerd[1823]: time="2025-04-30T13:57:37.019449745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 13:57:37.020455 containerd[1823]: time="2025-04-30T13:57:37.020414288Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:57:37.020854 containerd[1823]: time="2025-04-30T13:57:37.020813564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 13:57:37.021169 containerd[1823]: time="2025-04-30T13:57:37.021135222Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:57:37.022215 containerd[1823]: time="2025-04-30T13:57:37.022186651Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:57:37.022576 containerd[1823]: time="2025-04-30T13:57:37.022559301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 13:57:37.024374 containerd[1823]: time="2025-04-30T13:57:37.024333556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:57:37.025148 
containerd[1823]: time="2025-04-30T13:57:37.025105874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 418.305672ms" Apr 30 13:57:37.025511 containerd[1823]: time="2025-04-30T13:57:37.025471452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 384.529955ms" Apr 30 13:57:37.026981 containerd[1823]: time="2025-04-30T13:57:37.026945939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 372.428724ms" Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115643889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115668084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115677912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115636754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115665684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115672459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115649555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115680306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:57:37.115712 containerd[1823]: time="2025-04-30T13:57:37.115691859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:37.115954 containerd[1823]: time="2025-04-30T13:57:37.115724571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:37.115954 containerd[1823]: time="2025-04-30T13:57:37.115721588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:37.115954 containerd[1823]: time="2025-04-30T13:57:37.115743232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:37.135562 systemd[1]: Started cri-containerd-16e6cbfc14ea7f5e939c8c29172029b43573632c357c88d330b571ee5156d34f.scope - libcontainer container 16e6cbfc14ea7f5e939c8c29172029b43573632c357c88d330b571ee5156d34f. 
Apr 30 13:57:37.136255 systemd[1]: Started cri-containerd-3295f9683c95b21c74c858165d8b5dcb1e79eb14faac6c5f5dbc7a0cc1945ece.scope - libcontainer container 3295f9683c95b21c74c858165d8b5dcb1e79eb14faac6c5f5dbc7a0cc1945ece. Apr 30 13:57:37.136974 systemd[1]: Started cri-containerd-a6beef197b5ba376bea8aa08f9db2ad983434cb2bc07c4f05b8fa2ed5934cbaf.scope - libcontainer container a6beef197b5ba376bea8aa08f9db2ad983434cb2bc07c4f05b8fa2ed5934cbaf. Apr 30 13:57:37.159090 containerd[1823]: time="2025-04-30T13:57:37.159062830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-07b90b6465,Uid:50e9aee89eee93ec1e016b307399124d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3295f9683c95b21c74c858165d8b5dcb1e79eb14faac6c5f5dbc7a0cc1945ece\"" Apr 30 13:57:37.159090 containerd[1823]: time="2025-04-30T13:57:37.159087884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-07b90b6465,Uid:39dad65fb4a36ea20f59dd5c395f9a15,Namespace:kube-system,Attempt:0,} returns sandbox id \"16e6cbfc14ea7f5e939c8c29172029b43573632c357c88d330b571ee5156d34f\"" Apr 30 13:57:37.160675 containerd[1823]: time="2025-04-30T13:57:37.160639722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-07b90b6465,Uid:443602997cf2579bf62ef17539f45b27,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6beef197b5ba376bea8aa08f9db2ad983434cb2bc07c4f05b8fa2ed5934cbaf\"" Apr 30 13:57:37.161666 containerd[1823]: time="2025-04-30T13:57:37.161651815Z" level=info msg="CreateContainer within sandbox \"16e6cbfc14ea7f5e939c8c29172029b43573632c357c88d330b571ee5156d34f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 13:57:37.161701 containerd[1823]: time="2025-04-30T13:57:37.161668772Z" level=info msg="CreateContainer within sandbox \"3295f9683c95b21c74c858165d8b5dcb1e79eb14faac6c5f5dbc7a0cc1945ece\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 
30 13:57:37.161701 containerd[1823]: time="2025-04-30T13:57:37.161690671Z" level=info msg="CreateContainer within sandbox \"a6beef197b5ba376bea8aa08f9db2ad983434cb2bc07c4f05b8fa2ed5934cbaf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 13:57:37.174316 containerd[1823]: time="2025-04-30T13:57:37.174254602Z" level=info msg="CreateContainer within sandbox \"16e6cbfc14ea7f5e939c8c29172029b43573632c357c88d330b571ee5156d34f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"969ee786da71ae468c12a16a8289b66eef2798e86e16a68b0d4484be2a8e138d\"" Apr 30 13:57:37.174550 containerd[1823]: time="2025-04-30T13:57:37.174506619Z" level=info msg="StartContainer for \"969ee786da71ae468c12a16a8289b66eef2798e86e16a68b0d4484be2a8e138d\"" Apr 30 13:57:37.175723 containerd[1823]: time="2025-04-30T13:57:37.175682732Z" level=info msg="CreateContainer within sandbox \"3295f9683c95b21c74c858165d8b5dcb1e79eb14faac6c5f5dbc7a0cc1945ece\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ede0b62351d034aee82c434d57ca02cd0c69998621c3099db9f261980cb53475\"" Apr 30 13:57:37.175864 containerd[1823]: time="2025-04-30T13:57:37.175823172Z" level=info msg="StartContainer for \"ede0b62351d034aee82c434d57ca02cd0c69998621c3099db9f261980cb53475\"" Apr 30 13:57:37.176215 containerd[1823]: time="2025-04-30T13:57:37.176192938Z" level=info msg="CreateContainer within sandbox \"a6beef197b5ba376bea8aa08f9db2ad983434cb2bc07c4f05b8fa2ed5934cbaf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64706425d1f34f41d30cb5a45f80a9e05de986a47cbaa7181ba046a1362a6454\"" Apr 30 13:57:37.176543 containerd[1823]: time="2025-04-30T13:57:37.176524626Z" level=info msg="StartContainer for \"64706425d1f34f41d30cb5a45f80a9e05de986a47cbaa7181ba046a1362a6454\"" Apr 30 13:57:37.203556 systemd[1]: Started cri-containerd-64706425d1f34f41d30cb5a45f80a9e05de986a47cbaa7181ba046a1362a6454.scope - libcontainer container 
64706425d1f34f41d30cb5a45f80a9e05de986a47cbaa7181ba046a1362a6454. Apr 30 13:57:37.204178 systemd[1]: Started cri-containerd-969ee786da71ae468c12a16a8289b66eef2798e86e16a68b0d4484be2a8e138d.scope - libcontainer container 969ee786da71ae468c12a16a8289b66eef2798e86e16a68b0d4484be2a8e138d. Apr 30 13:57:37.204894 systemd[1]: Started cri-containerd-ede0b62351d034aee82c434d57ca02cd0c69998621c3099db9f261980cb53475.scope - libcontainer container ede0b62351d034aee82c434d57ca02cd0c69998621c3099db9f261980cb53475. Apr 30 13:57:37.230378 containerd[1823]: time="2025-04-30T13:57:37.230353995Z" level=info msg="StartContainer for \"ede0b62351d034aee82c434d57ca02cd0c69998621c3099db9f261980cb53475\" returns successfully" Apr 30 13:57:37.239038 containerd[1823]: time="2025-04-30T13:57:37.239013264Z" level=info msg="StartContainer for \"969ee786da71ae468c12a16a8289b66eef2798e86e16a68b0d4484be2a8e138d\" returns successfully" Apr 30 13:57:37.239137 containerd[1823]: time="2025-04-30T13:57:37.239013393Z" level=info msg="StartContainer for \"64706425d1f34f41d30cb5a45f80a9e05de986a47cbaa7181ba046a1362a6454\" returns successfully" Apr 30 13:57:37.245948 kubelet[2709]: W0430 13:57:37.245888 2709 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.185:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.185:6443: connect: connection refused Apr 30 13:57:37.245948 kubelet[2709]: E0430 13:57:37.245941 2709 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.202.185:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.202.185:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:57:37.721154 kubelet[2709]: I0430 13:57:37.721137 2709 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-a-07b90b6465" Apr 30 
13:57:37.731372 kubelet[2709]: E0430 13:57:37.731350 2709 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-a-07b90b6465\" not found" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:37.830314 kubelet[2709]: I0430 13:57:37.830292 2709 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:37.830314 kubelet[2709]: E0430 13:57:37.830315 2709 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230.1.1-a-07b90b6465\": node \"ci-4230.1.1-a-07b90b6465\" not found" Apr 30 13:57:38.132614 kubelet[2709]: I0430 13:57:38.132595 2709 apiserver.go:52] "Watching apiserver" Apr 30 13:57:38.140544 kubelet[2709]: I0430 13:57:38.140532 2709 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 13:57:38.155425 kubelet[2709]: E0430 13:57:38.155409 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.1-a-07b90b6465\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:38.155502 kubelet[2709]: E0430 13:57:38.155409 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:38.155502 kubelet[2709]: E0430 13:57:38.155411 2709 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:39.162744 kubelet[2709]: W0430 13:57:39.162681 2709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Apr 30 13:57:39.162744 kubelet[2709]: W0430 13:57:39.162688 2709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:57:40.378451 systemd[1]: Reload requested from client PID 3026 ('systemctl') (unit session-11.scope)... Apr 30 13:57:40.378459 systemd[1]: Reloading... Apr 30 13:57:40.429318 zram_generator::config[3072]: No configuration found. Apr 30 13:57:40.501359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:57:40.593039 systemd[1]: Reloading finished in 214 ms. Apr 30 13:57:40.613566 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:57:40.621085 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 13:57:40.621210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:57:40.621241 systemd[1]: kubelet.service: Consumed 662ms CPU time, 133.1M memory peak. Apr 30 13:57:40.636705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:57:40.828773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:57:40.831008 (kubelet)[3137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:57:40.851811 kubelet[3137]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:57:40.851811 kubelet[3137]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Apr 30 13:57:40.851811 kubelet[3137]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:57:40.852182 kubelet[3137]: I0430 13:57:40.851850 3137 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 13:57:40.856608 kubelet[3137]: I0430 13:57:40.856572 3137 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 13:57:40.856608 kubelet[3137]: I0430 13:57:40.856602 3137 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 13:57:40.856755 kubelet[3137]: I0430 13:57:40.856747 3137 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 13:57:40.857534 kubelet[3137]: I0430 13:57:40.857513 3137 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 13:57:40.858655 kubelet[3137]: I0430 13:57:40.858625 3137 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 13:57:40.860774 kubelet[3137]: E0430 13:57:40.860730 3137 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 13:57:40.860774 kubelet[3137]: I0430 13:57:40.860749 3137 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 13:57:40.869281 kubelet[3137]: I0430 13:57:40.869217 3137 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 13:57:40.869413 kubelet[3137]: I0430 13:57:40.869365 3137 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 13:57:40.869524 kubelet[3137]: I0430 13:57:40.869449 3137 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 13:57:40.869643 kubelet[3137]: I0430 13:57:40.869494 3137 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-07b90b6465","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 13:57:40.869643 kubelet[3137]: I0430 13:57:40.869617 3137 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 13:57:40.869643 kubelet[3137]: I0430 13:57:40.869623 3137 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 13:57:40.869643 kubelet[3137]: I0430 13:57:40.869639 3137 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:57:40.869766 kubelet[3137]: I0430 13:57:40.869695 3137 kubelet.go:408] "Attempting to sync node with API server" Apr 30 13:57:40.869766 kubelet[3137]: I0430 13:57:40.869702 3137 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 13:57:40.869766 kubelet[3137]: I0430 13:57:40.869716 3137 kubelet.go:314] "Adding apiserver pod source" Apr 30 13:57:40.869766 kubelet[3137]: I0430 13:57:40.869723 3137 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 13:57:40.870218 kubelet[3137]: I0430 13:57:40.870198 3137 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 13:57:40.870497 kubelet[3137]: I0430 13:57:40.870467 3137 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 13:57:40.870779 kubelet[3137]: I0430 13:57:40.870770 3137 server.go:1269] "Started kubelet" Apr 30 13:57:40.870822 kubelet[3137]: I0430 13:57:40.870797 3137 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 13:57:40.870859 kubelet[3137]: I0430 13:57:40.870811 3137 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 13:57:40.870982 kubelet[3137]: I0430 13:57:40.870976 3137 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 13:57:40.871554 kubelet[3137]: I0430 13:57:40.871546 3137 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 
30 13:57:40.871554 kubelet[3137]: I0430 13:57:40.871551 3137 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 13:57:40.871619 kubelet[3137]: I0430 13:57:40.871606 3137 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 13:57:40.871650 kubelet[3137]: I0430 13:57:40.871621 3137 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 13:57:40.871681 kubelet[3137]: E0430 13:57:40.871602 3137 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-07b90b6465\" not found" Apr 30 13:57:40.871710 kubelet[3137]: E0430 13:57:40.871680 3137 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 13:57:40.872061 kubelet[3137]: I0430 13:57:40.872045 3137 reconciler.go:26] "Reconciler: start to sync state" Apr 30 13:57:40.872360 kubelet[3137]: I0430 13:57:40.872344 3137 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 13:57:40.872830 kubelet[3137]: I0430 13:57:40.872799 3137 server.go:460] "Adding debug handlers to kubelet server" Apr 30 13:57:40.873605 kubelet[3137]: I0430 13:57:40.873595 3137 factory.go:221] Registration of the containerd container factory successfully Apr 30 13:57:40.873605 kubelet[3137]: I0430 13:57:40.873604 3137 factory.go:221] Registration of the systemd container factory successfully Apr 30 13:57:40.877228 kubelet[3137]: I0430 13:57:40.877200 3137 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 13:57:40.877985 kubelet[3137]: I0430 13:57:40.877970 3137 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 13:57:40.878019 kubelet[3137]: I0430 13:57:40.877992 3137 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 13:57:40.878019 kubelet[3137]: I0430 13:57:40.878004 3137 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 13:57:40.878073 kubelet[3137]: E0430 13:57:40.878031 3137 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 13:57:40.887044 kubelet[3137]: I0430 13:57:40.886987 3137 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 13:57:40.887044 kubelet[3137]: I0430 13:57:40.887001 3137 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 13:57:40.887044 kubelet[3137]: I0430 13:57:40.887011 3137 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:57:40.887136 kubelet[3137]: I0430 13:57:40.887088 3137 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 13:57:40.887136 kubelet[3137]: I0430 13:57:40.887095 3137 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 13:57:40.887136 kubelet[3137]: I0430 13:57:40.887106 3137 policy_none.go:49] "None policy: Start" Apr 30 13:57:40.887383 kubelet[3137]: I0430 13:57:40.887377 3137 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 13:57:40.887409 kubelet[3137]: I0430 13:57:40.887386 3137 state_mem.go:35] "Initializing new in-memory state store" Apr 30 13:57:40.887457 kubelet[3137]: I0430 13:57:40.887452 3137 state_mem.go:75] "Updated machine memory state" Apr 30 13:57:40.889697 kubelet[3137]: I0430 13:57:40.889690 3137 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 13:57:40.889805 kubelet[3137]: I0430 13:57:40.889770 3137 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 13:57:40.889805 kubelet[3137]: I0430 13:57:40.889777 3137 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 13:57:40.889871 kubelet[3137]: I0430 13:57:40.889865 3137 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 13:57:40.986538 kubelet[3137]: W0430 13:57:40.986434 3137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:57:40.986538 kubelet[3137]: W0430 13:57:40.986499 3137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:57:40.986896 kubelet[3137]: W0430 13:57:40.986579 3137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:57:40.986896 kubelet[3137]: E0430 13:57:40.986591 3137 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.1-a-07b90b6465\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:40.986896 kubelet[3137]: E0430 13:57:40.986693 3137 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:40.996896 kubelet[3137]: I0430 13:57:40.996850 3137 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.006864 kubelet[3137]: I0430 13:57:41.006785 3137 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.007059 kubelet[3137]: I0430 13:57:41.006934 3137 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.072830 kubelet[3137]: I0430 13:57:41.072705 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50e9aee89eee93ec1e016b307399124d-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-07b90b6465\" (UID: \"50e9aee89eee93ec1e016b307399124d\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.173681 kubelet[3137]: I0430 13:57:41.173442 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/443602997cf2579bf62ef17539f45b27-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" (UID: \"443602997cf2579bf62ef17539f45b27\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.173681 kubelet[3137]: I0430 13:57:41.173560 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.173681 kubelet[3137]: I0430 13:57:41.173643 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.174136 kubelet[3137]: I0430 13:57:41.173786 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " 
pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.174136 kubelet[3137]: I0430 13:57:41.173998 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/443602997cf2579bf62ef17539f45b27-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" (UID: \"443602997cf2579bf62ef17539f45b27\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.174136 kubelet[3137]: I0430 13:57:41.174092 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/443602997cf2579bf62ef17539f45b27-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" (UID: \"443602997cf2579bf62ef17539f45b27\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.174455 kubelet[3137]: I0430 13:57:41.174189 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.174455 kubelet[3137]: I0430 13:57:41.174339 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39dad65fb4a36ea20f59dd5c395f9a15-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" (UID: \"39dad65fb4a36ea20f59dd5c395f9a15\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.870311 kubelet[3137]: I0430 13:57:41.870228 3137 apiserver.go:52] "Watching apiserver" Apr 30 13:57:41.872681 kubelet[3137]: I0430 13:57:41.872644 3137 desired_state_of_world_populator.go:154] "Finished populating initial desired state of 
world" Apr 30 13:57:41.884073 kubelet[3137]: W0430 13:57:41.884033 3137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:57:41.884073 kubelet[3137]: E0430 13:57:41.884064 3137 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-a-07b90b6465\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.884495 kubelet[3137]: W0430 13:57:41.884463 3137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:57:41.884533 kubelet[3137]: W0430 13:57:41.884503 3137 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:57:41.884533 kubelet[3137]: E0430 13:57:41.884522 3137 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-a-07b90b6465\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.884571 kubelet[3137]: E0430 13:57:41.884504 3137 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.1-a-07b90b6465\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.1-a-07b90b6465" Apr 30 13:57:41.896592 kubelet[3137]: I0430 13:57:41.896563 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-a-07b90b6465" podStartSLOduration=2.8965535190000002 podStartE2EDuration="2.896553519s" podCreationTimestamp="2025-04-30 13:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:57:41.891686814 +0000 UTC m=+1.058815519" watchObservedRunningTime="2025-04-30 13:57:41.896553519 +0000 UTC 
m=+1.063682225" Apr 30 13:57:41.900882 kubelet[3137]: I0430 13:57:41.900780 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-07b90b6465" podStartSLOduration=1.900740791 podStartE2EDuration="1.900740791s" podCreationTimestamp="2025-04-30 13:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:57:41.896650479 +0000 UTC m=+1.063779185" watchObservedRunningTime="2025-04-30 13:57:41.900740791 +0000 UTC m=+1.067869573" Apr 30 13:57:41.917763 kubelet[3137]: I0430 13:57:41.917633 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-a-07b90b6465" podStartSLOduration=2.917602816 podStartE2EDuration="2.917602816s" podCreationTimestamp="2025-04-30 13:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:57:41.901187655 +0000 UTC m=+1.068316426" watchObservedRunningTime="2025-04-30 13:57:41.917602816 +0000 UTC m=+1.084731581" Apr 30 13:57:45.142667 sudo[2099]: pam_unix(sudo:session): session closed for user root Apr 30 13:57:45.143374 sshd[2098]: Connection closed by 147.75.109.163 port 39850 Apr 30 13:57:45.143555 sshd-session[2095]: pam_unix(sshd:session): session closed for user core Apr 30 13:57:45.145122 systemd[1]: sshd@8-147.75.202.185:22-147.75.109.163:39850.service: Deactivated successfully. Apr 30 13:57:45.146142 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 13:57:45.146241 systemd[1]: session-11.scope: Consumed 3.191s CPU time, 224.7M memory peak. Apr 30 13:57:45.147291 systemd-logind[1805]: Session 11 logged out. Waiting for processes to exit. Apr 30 13:57:45.147892 systemd-logind[1805]: Removed session 11. 
Apr 30 13:57:46.565333 kubelet[3137]: I0430 13:57:46.565280 3137 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 13:57:46.565763 kubelet[3137]: I0430 13:57:46.565722 3137 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 13:57:46.565801 containerd[1823]: time="2025-04-30T13:57:46.565566385Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 13:57:47.585598 systemd[1]: Created slice kubepods-besteffort-podbc8e2980_5269_4984_8d0a_62fb1420b6d4.slice - libcontainer container kubepods-besteffort-podbc8e2980_5269_4984_8d0a_62fb1420b6d4.slice. Apr 30 13:57:47.618734 kubelet[3137]: I0430 13:57:47.618651 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc8e2980-5269-4984-8d0a-62fb1420b6d4-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-mt8rs\" (UID: \"bc8e2980-5269-4984-8d0a-62fb1420b6d4\") " pod="tigera-operator/tigera-operator-6f6897fdc5-mt8rs" Apr 30 13:57:47.619628 kubelet[3137]: I0430 13:57:47.618748 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktmhk\" (UniqueName: \"kubernetes.io/projected/bc8e2980-5269-4984-8d0a-62fb1420b6d4-kube-api-access-ktmhk\") pod \"tigera-operator-6f6897fdc5-mt8rs\" (UID: \"bc8e2980-5269-4984-8d0a-62fb1420b6d4\") " pod="tigera-operator/tigera-operator-6f6897fdc5-mt8rs" Apr 30 13:57:47.683266 systemd[1]: Created slice kubepods-besteffort-pode2ac6e74_1079_42e5_b8cf_45e503edda9e.slice - libcontainer container kubepods-besteffort-pode2ac6e74_1079_42e5_b8cf_45e503edda9e.slice. 
Apr 30 13:57:47.719700 kubelet[3137]: I0430 13:57:47.719599 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2ac6e74-1079-42e5-b8cf-45e503edda9e-xtables-lock\") pod \"kube-proxy-57lhf\" (UID: \"e2ac6e74-1079-42e5-b8cf-45e503edda9e\") " pod="kube-system/kube-proxy-57lhf" Apr 30 13:57:47.720033 kubelet[3137]: I0430 13:57:47.719853 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2ac6e74-1079-42e5-b8cf-45e503edda9e-lib-modules\") pod \"kube-proxy-57lhf\" (UID: \"e2ac6e74-1079-42e5-b8cf-45e503edda9e\") " pod="kube-system/kube-proxy-57lhf" Apr 30 13:57:47.720033 kubelet[3137]: I0430 13:57:47.719953 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcwr\" (UniqueName: \"kubernetes.io/projected/e2ac6e74-1079-42e5-b8cf-45e503edda9e-kube-api-access-dgcwr\") pod \"kube-proxy-57lhf\" (UID: \"e2ac6e74-1079-42e5-b8cf-45e503edda9e\") " pod="kube-system/kube-proxy-57lhf" Apr 30 13:57:47.720442 kubelet[3137]: I0430 13:57:47.720052 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2ac6e74-1079-42e5-b8cf-45e503edda9e-kube-proxy\") pod \"kube-proxy-57lhf\" (UID: \"e2ac6e74-1079-42e5-b8cf-45e503edda9e\") " pod="kube-system/kube-proxy-57lhf" Apr 30 13:57:47.902539 containerd[1823]: time="2025-04-30T13:57:47.902274540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-mt8rs,Uid:bc8e2980-5269-4984-8d0a-62fb1420b6d4,Namespace:tigera-operator,Attempt:0,}" Apr 30 13:57:47.914983 containerd[1823]: time="2025-04-30T13:57:47.914868109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:57:47.914983 containerd[1823]: time="2025-04-30T13:57:47.914893894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:57:47.914983 containerd[1823]: time="2025-04-30T13:57:47.914900592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:47.915122 containerd[1823]: time="2025-04-30T13:57:47.914977030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:47.931765 systemd[1]: Started cri-containerd-62c971f261c0542c38b8dfebdc39c4d7a4be70c8e9c313e5d447cb1ece60a420.scope - libcontainer container 62c971f261c0542c38b8dfebdc39c4d7a4be70c8e9c313e5d447cb1ece60a420. Apr 30 13:57:47.975863 containerd[1823]: time="2025-04-30T13:57:47.975833907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-mt8rs,Uid:bc8e2980-5269-4984-8d0a-62fb1420b6d4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"62c971f261c0542c38b8dfebdc39c4d7a4be70c8e9c313e5d447cb1ece60a420\"" Apr 30 13:57:47.976944 containerd[1823]: time="2025-04-30T13:57:47.976925134Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" Apr 30 13:57:47.989473 containerd[1823]: time="2025-04-30T13:57:47.989392796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57lhf,Uid:e2ac6e74-1079-42e5-b8cf-45e503edda9e,Namespace:kube-system,Attempt:0,}" Apr 30 13:57:47.999666 containerd[1823]: time="2025-04-30T13:57:47.999562964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:57:47.999844 containerd[1823]: time="2025-04-30T13:57:47.999783925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:57:47.999844 containerd[1823]: time="2025-04-30T13:57:47.999794530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:47.999844 containerd[1823]: time="2025-04-30T13:57:47.999834957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:48.028503 systemd[1]: Started cri-containerd-29408ec3249b72cbbeeeb59beb32c029ce636afa42cfb68ccba8eb220c07a489.scope - libcontainer container 29408ec3249b72cbbeeeb59beb32c029ce636afa42cfb68ccba8eb220c07a489. Apr 30 13:57:48.045436 containerd[1823]: time="2025-04-30T13:57:48.045357463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-57lhf,Uid:e2ac6e74-1079-42e5-b8cf-45e503edda9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"29408ec3249b72cbbeeeb59beb32c029ce636afa42cfb68ccba8eb220c07a489\"" Apr 30 13:57:48.047593 containerd[1823]: time="2025-04-30T13:57:48.047559287Z" level=info msg="CreateContainer within sandbox \"29408ec3249b72cbbeeeb59beb32c029ce636afa42cfb68ccba8eb220c07a489\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 13:57:48.056647 containerd[1823]: time="2025-04-30T13:57:48.056633348Z" level=info msg="CreateContainer within sandbox \"29408ec3249b72cbbeeeb59beb32c029ce636afa42cfb68ccba8eb220c07a489\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a657953f77c4c363e7e4a71526b630ad1e194ceec8b4eac31700896060ddf88f\"" Apr 30 13:57:48.056835 containerd[1823]: time="2025-04-30T13:57:48.056823152Z" level=info msg="StartContainer for \"a657953f77c4c363e7e4a71526b630ad1e194ceec8b4eac31700896060ddf88f\"" Apr 30 13:57:48.085387 systemd[1]: Started cri-containerd-a657953f77c4c363e7e4a71526b630ad1e194ceec8b4eac31700896060ddf88f.scope - libcontainer container 
a657953f77c4c363e7e4a71526b630ad1e194ceec8b4eac31700896060ddf88f. Apr 30 13:57:48.102329 containerd[1823]: time="2025-04-30T13:57:48.102303890Z" level=info msg="StartContainer for \"a657953f77c4c363e7e4a71526b630ad1e194ceec8b4eac31700896060ddf88f\" returns successfully" Apr 30 13:57:48.921863 kubelet[3137]: I0430 13:57:48.921718 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-57lhf" podStartSLOduration=1.921682761 podStartE2EDuration="1.921682761s" podCreationTimestamp="2025-04-30 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:57:48.921417388 +0000 UTC m=+8.088546163" watchObservedRunningTime="2025-04-30 13:57:48.921682761 +0000 UTC m=+8.088811518" Apr 30 13:57:50.068543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957998892.mount: Deactivated successfully. Apr 30 13:57:50.267494 containerd[1823]: time="2025-04-30T13:57:50.267470050Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:50.267734 containerd[1823]: time="2025-04-30T13:57:50.267604339Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" Apr 30 13:57:50.268042 containerd[1823]: time="2025-04-30T13:57:50.268032061Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:50.269038 containerd[1823]: time="2025-04-30T13:57:50.269025704Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:50.269505 containerd[1823]: time="2025-04-30T13:57:50.269487999Z" level=info msg="Pulled image 
\"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.292542139s" Apr 30 13:57:50.269505 containerd[1823]: time="2025-04-30T13:57:50.269503344Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" Apr 30 13:57:50.270430 containerd[1823]: time="2025-04-30T13:57:50.270391990Z" level=info msg="CreateContainer within sandbox \"62c971f261c0542c38b8dfebdc39c4d7a4be70c8e9c313e5d447cb1ece60a420\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 30 13:57:50.275065 containerd[1823]: time="2025-04-30T13:57:50.275046624Z" level=info msg="CreateContainer within sandbox \"62c971f261c0542c38b8dfebdc39c4d7a4be70c8e9c313e5d447cb1ece60a420\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c82f6f7be5e5045e53060234ad9e66262168b248a0b1e8a3b3bc775e219d3154\"" Apr 30 13:57:50.275310 containerd[1823]: time="2025-04-30T13:57:50.275295482Z" level=info msg="StartContainer for \"c82f6f7be5e5045e53060234ad9e66262168b248a0b1e8a3b3bc775e219d3154\"" Apr 30 13:57:50.308567 systemd[1]: Started cri-containerd-c82f6f7be5e5045e53060234ad9e66262168b248a0b1e8a3b3bc775e219d3154.scope - libcontainer container c82f6f7be5e5045e53060234ad9e66262168b248a0b1e8a3b3bc775e219d3154. 
Apr 30 13:57:50.321133 containerd[1823]: time="2025-04-30T13:57:50.321043139Z" level=info msg="StartContainer for \"c82f6f7be5e5045e53060234ad9e66262168b248a0b1e8a3b3bc775e219d3154\" returns successfully" Apr 30 13:57:50.911394 kubelet[3137]: I0430 13:57:50.911360 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-mt8rs" podStartSLOduration=1.618156716 podStartE2EDuration="3.911349053s" podCreationTimestamp="2025-04-30 13:57:47 +0000 UTC" firstStartedPulling="2025-04-30 13:57:47.976668093 +0000 UTC m=+7.143796813" lastFinishedPulling="2025-04-30 13:57:50.269860445 +0000 UTC m=+9.436989150" observedRunningTime="2025-04-30 13:57:50.911310846 +0000 UTC m=+10.078439554" watchObservedRunningTime="2025-04-30 13:57:50.911349053 +0000 UTC m=+10.078477758" Apr 30 13:57:52.812321 update_engine[1810]: I20250430 13:57:52.812286 1810 update_attempter.cc:509] Updating boot flags... Apr 30 13:57:52.841247 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (3623) Apr 30 13:57:52.868245 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (3622) Apr 30 13:57:53.178867 systemd[1]: Created slice kubepods-besteffort-pod7f9a7a1c_dd22_433e_a990_16e60b8be824.slice - libcontainer container kubepods-besteffort-pod7f9a7a1c_dd22_433e_a990_16e60b8be824.slice. Apr 30 13:57:53.199801 systemd[1]: Created slice kubepods-besteffort-poda7fbd89e_2a99_45e6_8e31_167c43850b29.slice - libcontainer container kubepods-besteffort-poda7fbd89e_2a99_45e6_8e31_167c43850b29.slice. 
Apr 30 13:57:53.258333 kubelet[3137]: I0430 13:57:53.258209 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f9a7a1c-dd22-433e-a990-16e60b8be824-tigera-ca-bundle\") pod \"calico-typha-66bfcd7456-6nklp\" (UID: \"7f9a7a1c-dd22-433e-a990-16e60b8be824\") " pod="calico-system/calico-typha-66bfcd7456-6nklp" Apr 30 13:57:53.258333 kubelet[3137]: I0430 13:57:53.258334 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-cni-net-dir\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259352 kubelet[3137]: I0430 13:57:53.258399 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-lib-modules\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259352 kubelet[3137]: I0430 13:57:53.258447 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-var-run-calico\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259352 kubelet[3137]: I0430 13:57:53.258496 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-flexvol-driver-host\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259352 kubelet[3137]: 
I0430 13:57:53.258549 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwrc\" (UniqueName: \"kubernetes.io/projected/7f9a7a1c-dd22-433e-a990-16e60b8be824-kube-api-access-8xwrc\") pod \"calico-typha-66bfcd7456-6nklp\" (UID: \"7f9a7a1c-dd22-433e-a990-16e60b8be824\") " pod="calico-system/calico-typha-66bfcd7456-6nklp" Apr 30 13:57:53.259352 kubelet[3137]: I0430 13:57:53.258596 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a7fbd89e-2a99-45e6-8e31-167c43850b29-node-certs\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259887 kubelet[3137]: I0430 13:57:53.258643 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7fbd89e-2a99-45e6-8e31-167c43850b29-tigera-ca-bundle\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259887 kubelet[3137]: I0430 13:57:53.258762 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flg6\" (UniqueName: \"kubernetes.io/projected/a7fbd89e-2a99-45e6-8e31-167c43850b29-kube-api-access-8flg6\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259887 kubelet[3137]: I0430 13:57:53.258921 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-policysync\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259887 kubelet[3137]: I0430 13:57:53.259005 3137 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-cni-bin-dir\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.259887 kubelet[3137]: I0430 13:57:53.259071 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-xtables-lock\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.260394 kubelet[3137]: I0430 13:57:53.259174 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-var-lib-calico\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.260394 kubelet[3137]: I0430 13:57:53.259295 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a7fbd89e-2a99-45e6-8e31-167c43850b29-cni-log-dir\") pod \"calico-node-ftgx7\" (UID: \"a7fbd89e-2a99-45e6-8e31-167c43850b29\") " pod="calico-system/calico-node-ftgx7" Apr 30 13:57:53.260394 kubelet[3137]: I0430 13:57:53.259369 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7f9a7a1c-dd22-433e-a990-16e60b8be824-typha-certs\") pod \"calico-typha-66bfcd7456-6nklp\" (UID: \"7f9a7a1c-dd22-433e-a990-16e60b8be824\") " pod="calico-system/calico-typha-66bfcd7456-6nklp" Apr 30 13:57:53.325369 kubelet[3137]: E0430 13:57:53.325150 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:57:53.360000 kubelet[3137]: I0430 13:57:53.359970 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f155cb52-e455-42c3-b112-e9f1dc1f3da7-socket-dir\") pod \"csi-node-driver-gc5mn\" (UID: \"f155cb52-e455-42c3-b112-e9f1dc1f3da7\") " pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:57:53.360102 kubelet[3137]: I0430 13:57:53.360017 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f155cb52-e455-42c3-b112-e9f1dc1f3da7-registration-dir\") pod \"csi-node-driver-gc5mn\" (UID: \"f155cb52-e455-42c3-b112-e9f1dc1f3da7\") " pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:57:53.360146 kubelet[3137]: I0430 13:57:53.360132 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f155cb52-e455-42c3-b112-e9f1dc1f3da7-kubelet-dir\") pod \"csi-node-driver-gc5mn\" (UID: \"f155cb52-e455-42c3-b112-e9f1dc1f3da7\") " pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:57:53.360223 kubelet[3137]: I0430 13:57:53.360155 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bknxl\" (UniqueName: \"kubernetes.io/projected/f155cb52-e455-42c3-b112-e9f1dc1f3da7-kube-api-access-bknxl\") pod \"csi-node-driver-gc5mn\" (UID: \"f155cb52-e455-42c3-b112-e9f1dc1f3da7\") " pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:57:53.360463 kubelet[3137]: E0430 13:57:53.360430 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected 
end of JSON input Apr 30 13:57:53.360506 kubelet[3137]: W0430 13:57:53.360464 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.360506 kubelet[3137]: E0430 13:57:53.360478 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:57:53.360662 kubelet[3137]: E0430 13:57:53.360650 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:57:53.360694 kubelet[3137]: W0430 13:57:53.360664 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.360694 kubelet[3137]: E0430 13:57:53.360681 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:57:53.360876 kubelet[3137]: E0430 13:57:53.360867 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:57:53.360925 kubelet[3137]: W0430 13:57:53.360877 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.360925 kubelet[3137]: E0430 13:57:53.360890 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:57:53.360925 kubelet[3137]: I0430 13:57:53.360905 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f155cb52-e455-42c3-b112-e9f1dc1f3da7-varrun\") pod \"csi-node-driver-gc5mn\" (UID: \"f155cb52-e455-42c3-b112-e9f1dc1f3da7\") " pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:57:53.361071 kubelet[3137]: E0430 13:57:53.361060 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:57:53.361118 kubelet[3137]: W0430 13:57:53.361071 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.361118 kubelet[3137]: E0430 13:57:53.361088 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:57:53.361361 kubelet[3137]: E0430 13:57:53.361316 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:57:53.361361 kubelet[3137]: W0430 13:57:53.361329 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.361361 kubelet[3137]: E0430 13:57:53.361341 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:57:53.361483 kubelet[3137]: E0430 13:57:53.361476 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:57:53.361519 kubelet[3137]: W0430 13:57:53.361485 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.361519 kubelet[3137]: E0430 13:57:53.361505 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:57:53.361643 kubelet[3137]: E0430 13:57:53.361635 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:57:53.361681 kubelet[3137]: W0430 13:57:53.361642 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.361681 kubelet[3137]: E0430 13:57:53.361659 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:57:53.482309 kubelet[3137]: E0430 13:57:53.482264 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:57:53.482533 kubelet[3137]: W0430 13:57:53.482307 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.482533 kubelet[3137]: E0430 13:57:53.482353 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:57:53.483151 containerd[1823]: time="2025-04-30T13:57:53.483061759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66bfcd7456-6nklp,Uid:7f9a7a1c-dd22-433e-a990-16e60b8be824,Namespace:calico-system,Attempt:0,}" Apr 30 13:57:53.490396 kubelet[3137]: E0430 13:57:53.490382 3137 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:57:53.490396 kubelet[3137]: W0430 13:57:53.490393 3137 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:57:53.490475 kubelet[3137]: E0430 13:57:53.490404 3137 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:57:53.496895 containerd[1823]: time="2025-04-30T13:57:53.496808999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:57:53.496895 containerd[1823]: time="2025-04-30T13:57:53.496837114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:57:53.496895 containerd[1823]: time="2025-04-30T13:57:53.496843751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:53.497005 containerd[1823]: time="2025-04-30T13:57:53.496880374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:53.501764 containerd[1823]: time="2025-04-30T13:57:53.501741485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ftgx7,Uid:a7fbd89e-2a99-45e6-8e31-167c43850b29,Namespace:calico-system,Attempt:0,}" Apr 30 13:57:53.511111 containerd[1823]: time="2025-04-30T13:57:53.510805393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:57:53.511111 containerd[1823]: time="2025-04-30T13:57:53.511039766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:57:53.511111 containerd[1823]: time="2025-04-30T13:57:53.511047904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:53.511111 containerd[1823]: time="2025-04-30T13:57:53.511088415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:53.512407 systemd[1]: Started cri-containerd-28892f7f08a22238e6eb42d0f8a6db3b66cb3ea9dfbbcdf2f12df9fbbeff45a0.scope - libcontainer container 28892f7f08a22238e6eb42d0f8a6db3b66cb3ea9dfbbcdf2f12df9fbbeff45a0. 
Apr 30 13:57:53.517177 systemd[1]: Started cri-containerd-160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89.scope - libcontainer container 160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89. Apr 30 13:57:53.527897 containerd[1823]: time="2025-04-30T13:57:53.527873384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ftgx7,Uid:a7fbd89e-2a99-45e6-8e31-167c43850b29,Namespace:calico-system,Attempt:0,} returns sandbox id \"160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89\"" Apr 30 13:57:53.528624 containerd[1823]: time="2025-04-30T13:57:53.528609590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 13:57:53.536635 containerd[1823]: time="2025-04-30T13:57:53.536614200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66bfcd7456-6nklp,Uid:7f9a7a1c-dd22-433e-a990-16e60b8be824,Namespace:calico-system,Attempt:0,} returns sandbox id \"28892f7f08a22238e6eb42d0f8a6db3b66cb3ea9dfbbcdf2f12df9fbbeff45a0\"" Apr 30 13:57:54.878307 kubelet[3137]: E0430 13:57:54.878247 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:57:55.148814 containerd[1823]: time="2025-04-30T13:57:55.148728371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:55.149037 containerd[1823]: time="2025-04-30T13:57:55.148869370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" Apr 30 13:57:55.149288 containerd[1823]: time="2025-04-30T13:57:55.149276062Z" level=info msg="ImageCreate event 
name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:55.150854 containerd[1823]: time="2025-04-30T13:57:55.150654547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:55.151883 containerd[1823]: time="2025-04-30T13:57:55.151849674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.623219775s" Apr 30 13:57:55.151883 containerd[1823]: time="2025-04-30T13:57:55.151867580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 13:57:55.152295 containerd[1823]: time="2025-04-30T13:57:55.152282369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" Apr 30 13:57:55.152879 containerd[1823]: time="2025-04-30T13:57:55.152866060Z" level=info msg="CreateContainer within sandbox \"160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 13:57:55.158115 containerd[1823]: time="2025-04-30T13:57:55.158046772Z" level=info msg="CreateContainer within sandbox \"160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6\"" Apr 30 13:57:55.158307 containerd[1823]: 
time="2025-04-30T13:57:55.158295405Z" level=info msg="StartContainer for \"ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6\"" Apr 30 13:57:55.194488 systemd[1]: Started cri-containerd-ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6.scope - libcontainer container ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6. Apr 30 13:57:55.212538 containerd[1823]: time="2025-04-30T13:57:55.212508526Z" level=info msg="StartContainer for \"ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6\" returns successfully" Apr 30 13:57:55.221284 systemd[1]: cri-containerd-ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6.scope: Deactivated successfully. Apr 30 13:57:55.242466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6-rootfs.mount: Deactivated successfully. Apr 30 13:57:55.472198 containerd[1823]: time="2025-04-30T13:57:55.472134901Z" level=info msg="shim disconnected" id=ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6 namespace=k8s.io Apr 30 13:57:55.472198 containerd[1823]: time="2025-04-30T13:57:55.472163933Z" level=warning msg="cleaning up after shim disconnected" id=ce6758fef26ddcf7924430f33618d208c1ae615a74ebea80e2f5f2f6ec001cf6 namespace=k8s.io Apr 30 13:57:55.472198 containerd[1823]: time="2025-04-30T13:57:55.472168711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:56.879204 kubelet[3137]: E0430 13:57:56.879124 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:57:57.541114 containerd[1823]: time="2025-04-30T13:57:57.541061530Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:57.541341 containerd[1823]: time="2025-04-30T13:57:57.541244810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" Apr 30 13:57:57.541617 containerd[1823]: time="2025-04-30T13:57:57.541607027Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:57.542542 containerd[1823]: time="2025-04-30T13:57:57.542513513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:57:57.542951 containerd[1823]: time="2025-04-30T13:57:57.542915666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.390618537s" Apr 30 13:57:57.542951 containerd[1823]: time="2025-04-30T13:57:57.542929007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" Apr 30 13:57:57.543447 containerd[1823]: time="2025-04-30T13:57:57.543411650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 13:57:57.546343 containerd[1823]: time="2025-04-30T13:57:57.546297932Z" level=info msg="CreateContainer within sandbox \"28892f7f08a22238e6eb42d0f8a6db3b66cb3ea9dfbbcdf2f12df9fbbeff45a0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 30 13:57:57.551557 containerd[1823]: 
time="2025-04-30T13:57:57.551537605Z" level=info msg="CreateContainer within sandbox \"28892f7f08a22238e6eb42d0f8a6db3b66cb3ea9dfbbcdf2f12df9fbbeff45a0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bfd57a756f1f7241c16e72e3ca252ce97c52402af102f0367fecebed85cd1657\"" Apr 30 13:57:57.551799 containerd[1823]: time="2025-04-30T13:57:57.551787421Z" level=info msg="StartContainer for \"bfd57a756f1f7241c16e72e3ca252ce97c52402af102f0367fecebed85cd1657\"" Apr 30 13:57:57.573587 systemd[1]: Started cri-containerd-bfd57a756f1f7241c16e72e3ca252ce97c52402af102f0367fecebed85cd1657.scope - libcontainer container bfd57a756f1f7241c16e72e3ca252ce97c52402af102f0367fecebed85cd1657. Apr 30 13:57:57.598146 containerd[1823]: time="2025-04-30T13:57:57.598122518Z" level=info msg="StartContainer for \"bfd57a756f1f7241c16e72e3ca252ce97c52402af102f0367fecebed85cd1657\" returns successfully" Apr 30 13:57:57.937044 kubelet[3137]: I0430 13:57:57.936951 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66bfcd7456-6nklp" podStartSLOduration=0.93066597 podStartE2EDuration="4.936935774s" podCreationTimestamp="2025-04-30 13:57:53 +0000 UTC" firstStartedPulling="2025-04-30 13:57:53.537079759 +0000 UTC m=+12.704208461" lastFinishedPulling="2025-04-30 13:57:57.543349558 +0000 UTC m=+16.710478265" observedRunningTime="2025-04-30 13:57:57.936932266 +0000 UTC m=+17.104060976" watchObservedRunningTime="2025-04-30 13:57:57.936935774 +0000 UTC m=+17.104064490" Apr 30 13:57:58.879152 kubelet[3137]: E0430 13:57:58.879003 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:57:58.931517 kubelet[3137]: I0430 13:57:58.931439 3137 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Apr 30 13:58:00.351123 containerd[1823]: time="2025-04-30T13:58:00.351069762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:00.351334 containerd[1823]: time="2025-04-30T13:58:00.351242315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 13:58:00.351592 containerd[1823]: time="2025-04-30T13:58:00.351577816Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:00.352746 containerd[1823]: time="2025-04-30T13:58:00.352704543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:00.353224 containerd[1823]: time="2025-04-30T13:58:00.353201354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 2.809773524s" Apr 30 13:58:00.353224 containerd[1823]: time="2025-04-30T13:58:00.353219208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 13:58:00.354045 containerd[1823]: time="2025-04-30T13:58:00.354034044Z" level=info msg="CreateContainer within sandbox \"160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 13:58:00.358924 containerd[1823]: 
time="2025-04-30T13:58:00.358909212Z" level=info msg="CreateContainer within sandbox \"160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27\"" Apr 30 13:58:00.359102 containerd[1823]: time="2025-04-30T13:58:00.359068840Z" level=info msg="StartContainer for \"1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27\"" Apr 30 13:58:00.384572 systemd[1]: Started cri-containerd-1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27.scope - libcontainer container 1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27. Apr 30 13:58:00.398818 containerd[1823]: time="2025-04-30T13:58:00.398767029Z" level=info msg="StartContainer for \"1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27\" returns successfully" Apr 30 13:58:00.878859 kubelet[3137]: E0430 13:58:00.878819 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:58:00.930632 systemd[1]: cri-containerd-1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27.scope: Deactivated successfully. Apr 30 13:58:00.930811 systemd[1]: cri-containerd-1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27.scope: Consumed 332ms CPU time, 176.5M memory peak, 154M written to disk. Apr 30 13:58:00.943921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27-rootfs.mount: Deactivated successfully. 
Apr 30 13:58:00.972876 kubelet[3137]: I0430 13:58:00.972843 3137 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 13:58:01.038141 systemd[1]: Created slice kubepods-burstable-podab8656fb_99ce_48bb_acd9_1526ae955046.slice - libcontainer container kubepods-burstable-podab8656fb_99ce_48bb_acd9_1526ae955046.slice. Apr 30 13:58:01.048155 systemd[1]: Created slice kubepods-burstable-pod08e4b302_96b3_481a_a360_157647755634.slice - libcontainer container kubepods-burstable-pod08e4b302_96b3_481a_a360_157647755634.slice. Apr 30 13:58:01.056162 systemd[1]: Created slice kubepods-besteffort-pod12494da2_c4f7_4daa_8094_3e959335689c.slice - libcontainer container kubepods-besteffort-pod12494da2_c4f7_4daa_8094_3e959335689c.slice. Apr 30 13:58:01.062077 systemd[1]: Created slice kubepods-besteffort-pod4722e80e_9f8b_4616_8557_09179829c5a7.slice - libcontainer container kubepods-besteffort-pod4722e80e_9f8b_4616_8557_09179829c5a7.slice. Apr 30 13:58:01.067158 systemd[1]: Created slice kubepods-besteffort-pod1b537974_b389_45c2_aa3e_aa0f95c40835.slice - libcontainer container kubepods-besteffort-pod1b537974_b389_45c2_aa3e_aa0f95c40835.slice. 
Apr 30 13:58:01.127605 kubelet[3137]: I0430 13:58:01.127550 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r674m\" (UniqueName: \"kubernetes.io/projected/1b537974-b389-45c2-aa3e-aa0f95c40835-kube-api-access-r674m\") pod \"calico-apiserver-5869549b56-tp96n\" (UID: \"1b537974-b389-45c2-aa3e-aa0f95c40835\") " pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:01.127605 kubelet[3137]: I0430 13:58:01.127576 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfnn8\" (UniqueName: \"kubernetes.io/projected/08e4b302-96b3-481a-a360-157647755634-kube-api-access-hfnn8\") pod \"coredns-6f6b679f8f-5qrfx\" (UID: \"08e4b302-96b3-481a-a360-157647755634\") " pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:01.127605 kubelet[3137]: I0430 13:58:01.127588 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4722e80e-9f8b-4616-8557-09179829c5a7-calico-apiserver-certs\") pod \"calico-apiserver-5869549b56-fp9s8\" (UID: \"4722e80e-9f8b-4616-8557-09179829c5a7\") " pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:01.127605 kubelet[3137]: I0430 13:58:01.127599 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhbk\" (UniqueName: \"kubernetes.io/projected/12494da2-c4f7-4daa-8094-3e959335689c-kube-api-access-6vhbk\") pod \"calico-kube-controllers-79b68cbf8-dwcxs\" (UID: \"12494da2-c4f7-4daa-8094-3e959335689c\") " pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:01.127605 kubelet[3137]: I0430 13:58:01.127612 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4fdj\" (UniqueName: 
\"kubernetes.io/projected/4722e80e-9f8b-4616-8557-09179829c5a7-kube-api-access-r4fdj\") pod \"calico-apiserver-5869549b56-fp9s8\" (UID: \"4722e80e-9f8b-4616-8557-09179829c5a7\") " pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:01.127794 kubelet[3137]: I0430 13:58:01.127622 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12494da2-c4f7-4daa-8094-3e959335689c-tigera-ca-bundle\") pod \"calico-kube-controllers-79b68cbf8-dwcxs\" (UID: \"12494da2-c4f7-4daa-8094-3e959335689c\") " pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:01.127794 kubelet[3137]: I0430 13:58:01.127649 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08e4b302-96b3-481a-a360-157647755634-config-volume\") pod \"coredns-6f6b679f8f-5qrfx\" (UID: \"08e4b302-96b3-481a-a360-157647755634\") " pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:01.127794 kubelet[3137]: I0430 13:58:01.127676 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab8656fb-99ce-48bb-acd9-1526ae955046-config-volume\") pod \"coredns-6f6b679f8f-vfzr4\" (UID: \"ab8656fb-99ce-48bb-acd9-1526ae955046\") " pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:01.127794 kubelet[3137]: I0430 13:58:01.127697 3137 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24zj2\" (UniqueName: \"kubernetes.io/projected/ab8656fb-99ce-48bb-acd9-1526ae955046-kube-api-access-24zj2\") pod \"coredns-6f6b679f8f-vfzr4\" (UID: \"ab8656fb-99ce-48bb-acd9-1526ae955046\") " pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:01.127794 kubelet[3137]: I0430 13:58:01.127714 3137 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b537974-b389-45c2-aa3e-aa0f95c40835-calico-apiserver-certs\") pod \"calico-apiserver-5869549b56-tp96n\" (UID: \"1b537974-b389-45c2-aa3e-aa0f95c40835\") " pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:01.345200 containerd[1823]: time="2025-04-30T13:58:01.345014260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:0,}" Apr 30 13:58:01.353433 containerd[1823]: time="2025-04-30T13:58:01.353328136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:0,}" Apr 30 13:58:01.359715 containerd[1823]: time="2025-04-30T13:58:01.359644668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:0,}" Apr 30 13:58:01.365991 containerd[1823]: time="2025-04-30T13:58:01.365905391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:0,}" Apr 30 13:58:01.370361 containerd[1823]: time="2025-04-30T13:58:01.370233684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:0,}" Apr 30 13:58:01.638048 containerd[1823]: time="2025-04-30T13:58:01.637873262Z" level=info msg="shim disconnected" id=1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27 namespace=k8s.io Apr 30 13:58:01.638048 containerd[1823]: time="2025-04-30T13:58:01.637951368Z" level=warning msg="cleaning up after shim disconnected" id=1adb1edb7ad42cc617a152ecb1078b84c067686bd53612d258f5b5e12499ff27 
namespace=k8s.io Apr 30 13:58:01.638048 containerd[1823]: time="2025-04-30T13:58:01.637972936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:58:01.678347 containerd[1823]: time="2025-04-30T13:58:01.678308466Z" level=error msg="Failed to destroy network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.678576 containerd[1823]: time="2025-04-30T13:58:01.678558756Z" level=error msg="encountered an error cleaning up failed sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.678626 containerd[1823]: time="2025-04-30T13:58:01.678610428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.678803 kubelet[3137]: E0430 13:58:01.678777 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.678859 
kubelet[3137]: E0430 13:58:01.678832 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:01.678859 kubelet[3137]: E0430 13:58:01.678852 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:01.678910 kubelet[3137]: E0430 13:58:01.678892 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vfzr4" podUID="ab8656fb-99ce-48bb-acd9-1526ae955046" Apr 30 13:58:01.679202 containerd[1823]: time="2025-04-30T13:58:01.679181126Z" level=error msg="Failed to destroy network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679364 containerd[1823]: time="2025-04-30T13:58:01.679348175Z" level=error msg="Failed to destroy network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679403 containerd[1823]: time="2025-04-30T13:58:01.679358974Z" level=error msg="encountered an error cleaning up failed sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679403 containerd[1823]: time="2025-04-30T13:58:01.679394877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679444 containerd[1823]: time="2025-04-30T13:58:01.679372399Z" level=error msg="Failed to destroy network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679536 kubelet[3137]: E0430 13:58:01.679492 3137 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679536 kubelet[3137]: E0430 13:58:01.679522 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:01.679536 kubelet[3137]: E0430 13:58:01.679533 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:01.679610 containerd[1823]: time="2025-04-30T13:58:01.679506494Z" level=error msg="encountered an error cleaning up failed sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679610 containerd[1823]: time="2025-04-30T13:58:01.679516280Z" level=error msg="Failed to destroy network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679610 containerd[1823]: time="2025-04-30T13:58:01.679530091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679610 containerd[1823]: time="2025-04-30T13:58:01.679574236Z" level=error msg="encountered an error cleaning up failed sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679702 kubelet[3137]: E0430 13:58:01.679565 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5qrfx" podUID="08e4b302-96b3-481a-a360-157647755634" Apr 30 
13:58:01.679702 kubelet[3137]: E0430 13:58:01.679595 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679702 kubelet[3137]: E0430 13:58:01.679613 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:01.679778 containerd[1823]: time="2025-04-30T13:58:01.679603198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679778 containerd[1823]: time="2025-04-30T13:58:01.679673219Z" level=error msg="encountered an error cleaning up failed sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679778 containerd[1823]: 
time="2025-04-30T13:58:01.679693113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679852 kubelet[3137]: E0430 13:58:01.679623 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:01.679852 kubelet[3137]: E0430 13:58:01.679645 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" podUID="12494da2-c4f7-4daa-8094-3e959335689c" Apr 30 13:58:01.679852 kubelet[3137]: E0430 13:58:01.679664 3137 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.679837 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8-shm.mount: Deactivated successfully. Apr 30 13:58:01.680047 kubelet[3137]: E0430 13:58:01.679680 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:01.680047 kubelet[3137]: E0430 13:58:01.679688 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:01.680047 kubelet[3137]: E0430 13:58:01.679701 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" podUID="1b537974-b389-45c2-aa3e-aa0f95c40835" Apr 30 13:58:01.680119 kubelet[3137]: E0430 13:58:01.679742 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.680119 kubelet[3137]: E0430 13:58:01.679756 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:01.680119 kubelet[3137]: E0430 13:58:01.679766 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:01.680177 kubelet[3137]: E0430 13:58:01.679784 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" podUID="4722e80e-9f8b-4616-8557-09179829c5a7" Apr 30 13:58:01.939738 kubelet[3137]: I0430 13:58:01.939531 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d" Apr 30 13:58:01.940886 containerd[1823]: time="2025-04-30T13:58:01.940818817Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" Apr 30 13:58:01.941552 containerd[1823]: time="2025-04-30T13:58:01.941455884Z" level=info msg="Ensure that sandbox 5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d in task-service has been cleanup successfully" Apr 30 13:58:01.941802 kubelet[3137]: I0430 13:58:01.941597 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323" Apr 30 13:58:01.942044 containerd[1823]: time="2025-04-30T13:58:01.941977396Z" level=info msg="TearDown network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" successfully" Apr 30 13:58:01.942044 containerd[1823]: time="2025-04-30T13:58:01.942033608Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" returns successfully" Apr 30 13:58:01.942925 containerd[1823]: 
time="2025-04-30T13:58:01.942847987Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" Apr 30 13:58:01.943353 containerd[1823]: time="2025-04-30T13:58:01.943283905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:1,}" Apr 30 13:58:01.943546 containerd[1823]: time="2025-04-30T13:58:01.943410965Z" level=info msg="Ensure that sandbox e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323 in task-service has been cleanup successfully" Apr 30 13:58:01.943854 containerd[1823]: time="2025-04-30T13:58:01.943799894Z" level=info msg="TearDown network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" successfully" Apr 30 13:58:01.943854 containerd[1823]: time="2025-04-30T13:58:01.943843949Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" returns successfully" Apr 30 13:58:01.944198 kubelet[3137]: I0430 13:58:01.944101 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8" Apr 30 13:58:01.944469 containerd[1823]: time="2025-04-30T13:58:01.944454798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:1,}" Apr 30 13:58:01.944611 containerd[1823]: time="2025-04-30T13:58:01.944593254Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" Apr 30 13:58:01.944681 containerd[1823]: time="2025-04-30T13:58:01.944670320Z" level=info msg="Ensure that sandbox 0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8 in task-service has been cleanup successfully" Apr 30 13:58:01.944745 containerd[1823]: time="2025-04-30T13:58:01.944736678Z" 
level=info msg="TearDown network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" successfully" Apr 30 13:58:01.944745 containerd[1823]: time="2025-04-30T13:58:01.944743932Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" returns successfully" Apr 30 13:58:01.944790 kubelet[3137]: I0430 13:58:01.944754 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35" Apr 30 13:58:01.944943 containerd[1823]: time="2025-04-30T13:58:01.944930708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:1,}" Apr 30 13:58:01.945007 containerd[1823]: time="2025-04-30T13:58:01.944992965Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" Apr 30 13:58:01.945103 containerd[1823]: time="2025-04-30T13:58:01.945092948Z" level=info msg="Ensure that sandbox 3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35 in task-service has been cleanup successfully" Apr 30 13:58:01.945159 kubelet[3137]: I0430 13:58:01.945150 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade" Apr 30 13:58:01.945197 containerd[1823]: time="2025-04-30T13:58:01.945161744Z" level=info msg="TearDown network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" successfully" Apr 30 13:58:01.945197 containerd[1823]: time="2025-04-30T13:58:01.945169269Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" returns successfully" Apr 30 13:58:01.945362 containerd[1823]: time="2025-04-30T13:58:01.945350325Z" level=info msg="StopPodSandbox for 
\"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" Apr 30 13:58:01.945401 containerd[1823]: time="2025-04-30T13:58:01.945370170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:1,}" Apr 30 13:58:01.945451 containerd[1823]: time="2025-04-30T13:58:01.945441376Z" level=info msg="Ensure that sandbox 5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade in task-service has been cleanup successfully" Apr 30 13:58:01.945523 containerd[1823]: time="2025-04-30T13:58:01.945512779Z" level=info msg="TearDown network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" successfully" Apr 30 13:58:01.945550 containerd[1823]: time="2025-04-30T13:58:01.945523337Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" returns successfully" Apr 30 13:58:01.945695 containerd[1823]: time="2025-04-30T13:58:01.945683894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:1,}" Apr 30 13:58:01.946229 containerd[1823]: time="2025-04-30T13:58:01.946216723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 13:58:01.983858 containerd[1823]: time="2025-04-30T13:58:01.983781037Z" level=error msg="Failed to destroy network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984052 containerd[1823]: time="2025-04-30T13:58:01.984026131Z" level=error msg="Failed to destroy network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984269 containerd[1823]: time="2025-04-30T13:58:01.984043756Z" level=error msg="encountered an error cleaning up failed sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984269 containerd[1823]: time="2025-04-30T13:58:01.984063223Z" level=error msg="Failed to destroy network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984269 containerd[1823]: time="2025-04-30T13:58:01.984094644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984363 containerd[1823]: time="2025-04-30T13:58:01.984339220Z" level=error msg="encountered an error cleaning up failed sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984400 kubelet[3137]: E0430 
13:58:01.984258 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984400 kubelet[3137]: E0430 13:58:01.984308 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:01.984400 kubelet[3137]: E0430 13:58:01.984322 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:01.984502 containerd[1823]: time="2025-04-30T13:58:01.984372383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984502 containerd[1823]: 
time="2025-04-30T13:58:01.984397867Z" level=error msg="encountered an error cleaning up failed sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984502 containerd[1823]: time="2025-04-30T13:58:01.984431723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984602 kubelet[3137]: E0430 13:58:01.984350 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5qrfx" podUID="08e4b302-96b3-481a-a360-157647755634" Apr 30 13:58:01.984602 kubelet[3137]: E0430 13:58:01.984439 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984602 kubelet[3137]: E0430 13:58:01.984462 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:01.984679 kubelet[3137]: E0430 13:58:01.984473 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:01.984679 kubelet[3137]: E0430 13:58:01.984492 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" podUID="4722e80e-9f8b-4616-8557-09179829c5a7" Apr 30 13:58:01.984679 kubelet[3137]: E0430 13:58:01.984507 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984749 containerd[1823]: time="2025-04-30T13:58:01.984628452Z" level=error msg="Failed to destroy network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984767 kubelet[3137]: E0430 13:58:01.984528 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:01.984767 kubelet[3137]: E0430 13:58:01.984538 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:01.984767 kubelet[3137]: E0430 
13:58:01.984553 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" podUID="1b537974-b389-45c2-aa3e-aa0f95c40835" Apr 30 13:58:01.984844 containerd[1823]: time="2025-04-30T13:58:01.984809747Z" level=error msg="encountered an error cleaning up failed sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984876 containerd[1823]: time="2025-04-30T13:58:01.984841949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984923 kubelet[3137]: E0430 13:58:01.984913 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.984945 kubelet[3137]: E0430 13:58:01.984930 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:01.984945 kubelet[3137]: E0430 13:58:01.984941 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:01.984982 kubelet[3137]: E0430 13:58:01.984957 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vfzr4" 
podUID="ab8656fb-99ce-48bb-acd9-1526ae955046" Apr 30 13:58:01.985342 containerd[1823]: time="2025-04-30T13:58:01.985326776Z" level=error msg="Failed to destroy network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.985499 containerd[1823]: time="2025-04-30T13:58:01.985463047Z" level=error msg="encountered an error cleaning up failed sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.985499 containerd[1823]: time="2025-04-30T13:58:01.985486148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.985550 kubelet[3137]: E0430 13:58:01.985536 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:01.985571 kubelet[3137]: E0430 13:58:01.985550 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:01.985571 kubelet[3137]: E0430 13:58:01.985560 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:01.985615 kubelet[3137]: E0430 13:58:01.985577 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" podUID="12494da2-c4f7-4daa-8094-3e959335689c" Apr 30 13:58:02.363768 systemd[1]: run-netns-cni\x2dc42dc50c\x2dfae0\x2db1dc\x2d265b\x2d18a176ac40b4.mount: Deactivated successfully. 
Apr 30 13:58:02.363820 systemd[1]: run-netns-cni\x2d56fe3931\x2d18a0\x2dc70d\x2d107a\x2d6429fb53600c.mount: Deactivated successfully. Apr 30 13:58:02.363854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d-shm.mount: Deactivated successfully. Apr 30 13:58:02.363891 systemd[1]: run-netns-cni\x2d182e4b30\x2d2316\x2ddb71\x2d8874\x2d0c1b9a877171.mount: Deactivated successfully. Apr 30 13:58:02.363923 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323-shm.mount: Deactivated successfully. Apr 30 13:58:02.363958 systemd[1]: run-netns-cni\x2d08891de4\x2d3bad\x2da7e9\x2dc1c5\x2d47cf3f0b3414.mount: Deactivated successfully. Apr 30 13:58:02.363997 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35-shm.mount: Deactivated successfully. Apr 30 13:58:02.364035 systemd[1]: run-netns-cni\x2d9a08e60c\x2df32f\x2d0165\x2d3de8\x2d20af648c104e.mount: Deactivated successfully. Apr 30 13:58:02.364067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade-shm.mount: Deactivated successfully. Apr 30 13:58:02.894738 systemd[1]: Created slice kubepods-besteffort-podf155cb52_e455_42c3_b112_e9f1dc1f3da7.slice - libcontainer container kubepods-besteffort-podf155cb52_e455_42c3_b112_e9f1dc1f3da7.slice. 
Apr 30 13:58:02.901042 containerd[1823]: time="2025-04-30T13:58:02.900926581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:0,}" Apr 30 13:58:02.932663 containerd[1823]: time="2025-04-30T13:58:02.932611595Z" level=error msg="Failed to destroy network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.932832 containerd[1823]: time="2025-04-30T13:58:02.932795594Z" level=error msg="encountered an error cleaning up failed sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.932832 containerd[1823]: time="2025-04-30T13:58:02.932827796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.933036 kubelet[3137]: E0430 13:58:02.933015 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.933066 kubelet[3137]: E0430 13:58:02.933054 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:58:02.933091 kubelet[3137]: E0430 13:58:02.933067 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:58:02.933109 kubelet[3137]: E0430 13:58:02.933093 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gc5mn_calico-system(f155cb52-e455-42c3-b112-e9f1dc1f3da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gc5mn_calico-system(f155cb52-e455-42c3-b112-e9f1dc1f3da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:58:02.934166 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f-shm.mount: Deactivated successfully. Apr 30 13:58:02.947493 kubelet[3137]: I0430 13:58:02.947480 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861" Apr 30 13:58:02.947785 containerd[1823]: time="2025-04-30T13:58:02.947768751Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\"" Apr 30 13:58:02.947918 kubelet[3137]: I0430 13:58:02.947909 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a" Apr 30 13:58:02.947951 containerd[1823]: time="2025-04-30T13:58:02.947914515Z" level=info msg="Ensure that sandbox e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861 in task-service has been cleanup successfully" Apr 30 13:58:02.948026 containerd[1823]: time="2025-04-30T13:58:02.948013752Z" level=info msg="TearDown network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" successfully" Apr 30 13:58:02.948057 containerd[1823]: time="2025-04-30T13:58:02.948026683Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" returns successfully" Apr 30 13:58:02.948115 containerd[1823]: time="2025-04-30T13:58:02.948104344Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\"" Apr 30 13:58:02.948161 containerd[1823]: time="2025-04-30T13:58:02.948152061Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" Apr 30 13:58:02.948202 containerd[1823]: time="2025-04-30T13:58:02.948193702Z" level=info msg="TearDown network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" successfully" Apr 30 
13:58:02.948222 containerd[1823]: time="2025-04-30T13:58:02.948201658Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" returns successfully" Apr 30 13:58:02.948247 containerd[1823]: time="2025-04-30T13:58:02.948227419Z" level=info msg="Ensure that sandbox 23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a in task-service has been cleanup successfully" Apr 30 13:58:02.948345 containerd[1823]: time="2025-04-30T13:58:02.948332628Z" level=info msg="TearDown network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" successfully" Apr 30 13:58:02.948382 containerd[1823]: time="2025-04-30T13:58:02.948345270Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" returns successfully" Apr 30 13:58:02.948403 containerd[1823]: time="2025-04-30T13:58:02.948390185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:2,}" Apr 30 13:58:02.948520 containerd[1823]: time="2025-04-30T13:58:02.948509348Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" Apr 30 13:58:02.948558 containerd[1823]: time="2025-04-30T13:58:02.948550660Z" level=info msg="TearDown network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" successfully" Apr 30 13:58:02.948581 kubelet[3137]: I0430 13:58:02.948550 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139" Apr 30 13:58:02.948609 containerd[1823]: time="2025-04-30T13:58:02.948557844Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" returns successfully" Apr 30 13:58:02.948752 containerd[1823]: time="2025-04-30T13:58:02.948740893Z" level=info 
msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\"" Apr 30 13:58:02.948784 containerd[1823]: time="2025-04-30T13:58:02.948772705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:2,}" Apr 30 13:58:02.948837 containerd[1823]: time="2025-04-30T13:58:02.948826837Z" level=info msg="Ensure that sandbox 3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139 in task-service has been cleanup successfully" Apr 30 13:58:02.948909 containerd[1823]: time="2025-04-30T13:58:02.948899819Z" level=info msg="TearDown network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" successfully" Apr 30 13:58:02.948909 containerd[1823]: time="2025-04-30T13:58:02.948907801Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" returns successfully" Apr 30 13:58:02.949086 containerd[1823]: time="2025-04-30T13:58:02.949072350Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" Apr 30 13:58:02.949130 kubelet[3137]: I0430 13:58:02.949121 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134" Apr 30 13:58:02.949166 containerd[1823]: time="2025-04-30T13:58:02.949125839Z" level=info msg="TearDown network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" successfully" Apr 30 13:58:02.949166 containerd[1823]: time="2025-04-30T13:58:02.949136910Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" returns successfully" Apr 30 13:58:02.949349 containerd[1823]: time="2025-04-30T13:58:02.949336049Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\"" Apr 30 13:58:02.949385 
containerd[1823]: time="2025-04-30T13:58:02.949358236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:2,}" Apr 30 13:58:02.949433 containerd[1823]: time="2025-04-30T13:58:02.949423642Z" level=info msg="Ensure that sandbox 5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134 in task-service has been cleanup successfully" Apr 30 13:58:02.949430 systemd[1]: run-netns-cni\x2dd9db98e6\x2d3a7c\x2d94ac\x2d26b9\x2d1cece14a6858.mount: Deactivated successfully. Apr 30 13:58:02.949494 systemd[1]: run-netns-cni\x2d730dcda5\x2dc30f\x2d305b\x2d7a29\x2dd78e446819f6.mount: Deactivated successfully. Apr 30 13:58:02.949542 containerd[1823]: time="2025-04-30T13:58:02.949494854Z" level=info msg="TearDown network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" successfully" Apr 30 13:58:02.949542 containerd[1823]: time="2025-04-30T13:58:02.949502252Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" returns successfully" Apr 30 13:58:02.949619 containerd[1823]: time="2025-04-30T13:58:02.949608637Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" Apr 30 13:58:02.949651 containerd[1823]: time="2025-04-30T13:58:02.949644388Z" level=info msg="TearDown network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" successfully" Apr 30 13:58:02.949651 containerd[1823]: time="2025-04-30T13:58:02.949650427Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" returns successfully" Apr 30 13:58:02.949708 kubelet[3137]: I0430 13:58:02.949657 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268" Apr 30 13:58:02.949832 containerd[1823]: 
time="2025-04-30T13:58:02.949822324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:2,}" Apr 30 13:58:02.949888 containerd[1823]: time="2025-04-30T13:58:02.949877462Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\"" Apr 30 13:58:02.949998 containerd[1823]: time="2025-04-30T13:58:02.949988035Z" level=info msg="Ensure that sandbox 603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268 in task-service has been cleanup successfully" Apr 30 13:58:02.950079 kubelet[3137]: I0430 13:58:02.950070 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f" Apr 30 13:58:02.950113 containerd[1823]: time="2025-04-30T13:58:02.950074642Z" level=info msg="TearDown network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" successfully" Apr 30 13:58:02.950113 containerd[1823]: time="2025-04-30T13:58:02.950091470Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" returns successfully" Apr 30 13:58:02.950209 containerd[1823]: time="2025-04-30T13:58:02.950192453Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" Apr 30 13:58:02.950261 containerd[1823]: time="2025-04-30T13:58:02.950251049Z" level=info msg="TearDown network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" successfully" Apr 30 13:58:02.950282 containerd[1823]: time="2025-04-30T13:58:02.950259517Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\"" Apr 30 13:58:02.950343 containerd[1823]: time="2025-04-30T13:58:02.950261820Z" level=info msg="StopPodSandbox for 
\"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" returns successfully" Apr 30 13:58:02.950387 containerd[1823]: time="2025-04-30T13:58:02.950353914Z" level=info msg="Ensure that sandbox 8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f in task-service has been cleanup successfully" Apr 30 13:58:02.950436 containerd[1823]: time="2025-04-30T13:58:02.950428548Z" level=info msg="TearDown network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" successfully" Apr 30 13:58:02.950464 containerd[1823]: time="2025-04-30T13:58:02.950435768Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" returns successfully" Apr 30 13:58:02.950516 containerd[1823]: time="2025-04-30T13:58:02.950506750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:2,}" Apr 30 13:58:02.950582 containerd[1823]: time="2025-04-30T13:58:02.950572233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:1,}" Apr 30 13:58:02.987440 containerd[1823]: time="2025-04-30T13:58:02.987401962Z" level=error msg="Failed to destroy network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.987666 containerd[1823]: time="2025-04-30T13:58:02.987647437Z" level=error msg="encountered an error cleaning up failed sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 30 13:58:02.987717 containerd[1823]: time="2025-04-30T13:58:02.987701000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.987898 kubelet[3137]: E0430 13:58:02.987865 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.987955 kubelet[3137]: E0430 13:58:02.987929 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:02.987955 kubelet[3137]: E0430 13:58:02.987947 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:02.988011 kubelet[3137]: E0430 13:58:02.987989 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" podUID="12494da2-c4f7-4daa-8094-3e959335689c" Apr 30 13:58:02.996100 containerd[1823]: time="2025-04-30T13:58:02.996069576Z" level=error msg="Failed to destroy network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996206 containerd[1823]: time="2025-04-30T13:58:02.996073004Z" level=error msg="Failed to destroy network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996206 containerd[1823]: time="2025-04-30T13:58:02.996178412Z" level=error msg="Failed to destroy network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996324 containerd[1823]: time="2025-04-30T13:58:02.996308566Z" level=error msg="encountered an error cleaning up failed sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996364 containerd[1823]: time="2025-04-30T13:58:02.996335645Z" level=error msg="encountered an error cleaning up failed sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996364 containerd[1823]: time="2025-04-30T13:58:02.996349963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996422 containerd[1823]: time="2025-04-30T13:58:02.996361011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996422 containerd[1823]: time="2025-04-30T13:58:02.996351331Z" level=error msg="encountered an error cleaning up failed sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996422 containerd[1823]: time="2025-04-30T13:58:02.996405061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996507 kubelet[3137]: E0430 13:58:02.996488 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996537 kubelet[3137]: E0430 13:58:02.996524 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:02.996555 kubelet[3137]: E0430 13:58:02.996539 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:02.996576 kubelet[3137]: E0430 13:58:02.996488 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996576 kubelet[3137]: E0430 13:58:02.996564 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" podUID="4722e80e-9f8b-4616-8557-09179829c5a7" Apr 30 13:58:02.996624 kubelet[3137]: E0430 13:58:02.996582 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:02.996624 kubelet[3137]: E0430 13:58:02.996593 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:02.996624 kubelet[3137]: E0430 13:58:02.996613 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5qrfx" podUID="08e4b302-96b3-481a-a360-157647755634" Apr 30 13:58:02.996692 kubelet[3137]: E0430 13:58:02.996488 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996692 kubelet[3137]: E0430 13:58:02.996632 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:02.996692 kubelet[3137]: E0430 13:58:02.996641 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:02.996781 kubelet[3137]: E0430 13:58:02.996654 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vfzr4" podUID="ab8656fb-99ce-48bb-acd9-1526ae955046" Apr 30 13:58:02.996823 containerd[1823]: time="2025-04-30T13:58:02.996704230Z" level=error msg="Failed to destroy network for sandbox 
\"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996851 containerd[1823]: time="2025-04-30T13:58:02.996838912Z" level=error msg="encountered an error cleaning up failed sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996874 containerd[1823]: time="2025-04-30T13:58:02.996861718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996941 kubelet[3137]: E0430 13:58:02.996929 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.996967 kubelet[3137]: E0430 13:58:02.996948 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:02.996967 kubelet[3137]: E0430 13:58:02.996958 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:02.997007 kubelet[3137]: E0430 13:58:02.996977 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" podUID="1b537974-b389-45c2-aa3e-aa0f95c40835" Apr 30 13:58:02.999308 containerd[1823]: time="2025-04-30T13:58:02.999270924Z" level=error msg="Failed to destroy network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.999447 containerd[1823]: 
time="2025-04-30T13:58:02.999413770Z" level=error msg="encountered an error cleaning up failed sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.999447 containerd[1823]: time="2025-04-30T13:58:02.999440066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.999592 kubelet[3137]: E0430 13:58:02.999546 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:02.999592 kubelet[3137]: E0430 13:58:02.999569 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:58:02.999592 kubelet[3137]: E0430 13:58:02.999580 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:58:02.999679 kubelet[3137]: E0430 13:58:02.999598 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gc5mn_calico-system(f155cb52-e455-42c3-b112-e9f1dc1f3da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gc5mn_calico-system(f155cb52-e455-42c3-b112-e9f1dc1f3da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:58:03.360721 systemd[1]: run-netns-cni\x2d9a59b7c6\x2d1f09\x2d06f7\x2d2d12\x2d80d75b60d213.mount: Deactivated successfully. Apr 30 13:58:03.360791 systemd[1]: run-netns-cni\x2d7f2b92e3\x2da5b4\x2de6b0\x2dd006\x2d9f0174963453.mount: Deactivated successfully. Apr 30 13:58:03.360839 systemd[1]: run-netns-cni\x2dcc6d966a\x2d00ba\x2de342\x2de121\x2da08af4103d97.mount: Deactivated successfully. Apr 30 13:58:03.360885 systemd[1]: run-netns-cni\x2d67528eaa\x2dba7e\x2dbe60\x2dbd78\x2dd9ffe2f57f39.mount: Deactivated successfully. 
Apr 30 13:58:03.956974 kubelet[3137]: I0430 13:58:03.956906 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1" Apr 30 13:58:03.958409 containerd[1823]: time="2025-04-30T13:58:03.958335862Z" level=info msg="StopPodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\"" Apr 30 13:58:03.959295 containerd[1823]: time="2025-04-30T13:58:03.958838428Z" level=info msg="Ensure that sandbox b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1 in task-service has been cleanup successfully" Apr 30 13:58:03.959451 containerd[1823]: time="2025-04-30T13:58:03.959299587Z" level=info msg="TearDown network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" successfully" Apr 30 13:58:03.959451 containerd[1823]: time="2025-04-30T13:58:03.959348967Z" level=info msg="StopPodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" returns successfully" Apr 30 13:58:03.959817 kubelet[3137]: I0430 13:58:03.959758 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323" Apr 30 13:58:03.960188 containerd[1823]: time="2025-04-30T13:58:03.960112271Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\"" Apr 30 13:58:03.960526 containerd[1823]: time="2025-04-30T13:58:03.960452117Z" level=info msg="TearDown network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" successfully" Apr 30 13:58:03.960526 containerd[1823]: time="2025-04-30T13:58:03.960512189Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" returns successfully" Apr 30 13:58:03.961255 containerd[1823]: time="2025-04-30T13:58:03.961186499Z" level=info msg="StopPodSandbox for 
\"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\"" Apr 30 13:58:03.961403 containerd[1823]: time="2025-04-30T13:58:03.961225771Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" Apr 30 13:58:03.961524 containerd[1823]: time="2025-04-30T13:58:03.961493814Z" level=info msg="TearDown network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" successfully" Apr 30 13:58:03.961712 containerd[1823]: time="2025-04-30T13:58:03.961531113Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" returns successfully" Apr 30 13:58:03.961863 containerd[1823]: time="2025-04-30T13:58:03.961774243Z" level=info msg="Ensure that sandbox 64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323 in task-service has been cleanup successfully" Apr 30 13:58:03.962269 containerd[1823]: time="2025-04-30T13:58:03.962188949Z" level=info msg="TearDown network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" successfully" Apr 30 13:58:03.962500 containerd[1823]: time="2025-04-30T13:58:03.962284115Z" level=info msg="StopPodSandbox for \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" returns successfully" Apr 30 13:58:03.962635 containerd[1823]: time="2025-04-30T13:58:03.962555167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:3,}" Apr 30 13:58:03.963015 containerd[1823]: time="2025-04-30T13:58:03.962955810Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\"" Apr 30 13:58:03.963195 containerd[1823]: time="2025-04-30T13:58:03.963150624Z" level=info msg="TearDown network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" successfully" Apr 30 13:58:03.963345 containerd[1823]: 
time="2025-04-30T13:58:03.963190359Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" returns successfully" Apr 30 13:58:03.963550 kubelet[3137]: I0430 13:58:03.963539 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f" Apr 30 13:58:03.963634 containerd[1823]: time="2025-04-30T13:58:03.963624230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:2,}" Apr 30 13:58:03.963782 containerd[1823]: time="2025-04-30T13:58:03.963771113Z" level=info msg="StopPodSandbox for \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\"" Apr 30 13:58:03.963872 containerd[1823]: time="2025-04-30T13:58:03.963861262Z" level=info msg="Ensure that sandbox 5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f in task-service has been cleanup successfully" Apr 30 13:58:03.963950 containerd[1823]: time="2025-04-30T13:58:03.963938439Z" level=info msg="TearDown network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" successfully" Apr 30 13:58:03.963975 containerd[1823]: time="2025-04-30T13:58:03.963949102Z" level=info msg="StopPodSandbox for \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" returns successfully" Apr 30 13:58:03.964046 containerd[1823]: time="2025-04-30T13:58:03.964038355Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\"" Apr 30 13:58:03.964086 containerd[1823]: time="2025-04-30T13:58:03.964078807Z" level=info msg="TearDown network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" successfully" Apr 30 13:58:03.964109 containerd[1823]: time="2025-04-30T13:58:03.964085444Z" level=info msg="StopPodSandbox for 
\"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" returns successfully" Apr 30 13:58:03.964170 kubelet[3137]: I0430 13:58:03.964161 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff" Apr 30 13:58:03.964172 systemd[1]: run-netns-cni\x2d6a7ddb9a\x2d9972\x2daa19\x2d716d\x2d3349bc508c84.mount: Deactivated successfully. Apr 30 13:58:03.964339 containerd[1823]: time="2025-04-30T13:58:03.964200672Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" Apr 30 13:58:03.964339 containerd[1823]: time="2025-04-30T13:58:03.964232867Z" level=info msg="TearDown network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" successfully" Apr 30 13:58:03.964339 containerd[1823]: time="2025-04-30T13:58:03.964245538Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" returns successfully" Apr 30 13:58:03.964410 containerd[1823]: time="2025-04-30T13:58:03.964383110Z" level=info msg="StopPodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\"" Apr 30 13:58:03.964410 containerd[1823]: time="2025-04-30T13:58:03.964402247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:3,}" Apr 30 13:58:03.964483 containerd[1823]: time="2025-04-30T13:58:03.964474813Z" level=info msg="Ensure that sandbox 735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff in task-service has been cleanup successfully" Apr 30 13:58:03.964555 containerd[1823]: time="2025-04-30T13:58:03.964545211Z" level=info msg="TearDown network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" successfully" Apr 30 13:58:03.964577 containerd[1823]: time="2025-04-30T13:58:03.964554837Z" level=info 
msg="StopPodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" returns successfully" Apr 30 13:58:03.964665 containerd[1823]: time="2025-04-30T13:58:03.964656377Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\"" Apr 30 13:58:03.964719 containerd[1823]: time="2025-04-30T13:58:03.964710664Z" level=info msg="TearDown network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" successfully" Apr 30 13:58:03.964719 containerd[1823]: time="2025-04-30T13:58:03.964717438Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" returns successfully" Apr 30 13:58:03.964805 containerd[1823]: time="2025-04-30T13:58:03.964797543Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" Apr 30 13:58:03.964825 kubelet[3137]: I0430 13:58:03.964811 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687" Apr 30 13:58:03.964844 containerd[1823]: time="2025-04-30T13:58:03.964827733Z" level=info msg="TearDown network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" successfully" Apr 30 13:58:03.964844 containerd[1823]: time="2025-04-30T13:58:03.964833051Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" returns successfully" Apr 30 13:58:03.964988 containerd[1823]: time="2025-04-30T13:58:03.964978393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:3,}" Apr 30 13:58:03.965069 containerd[1823]: time="2025-04-30T13:58:03.965058550Z" level=info msg="StopPodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\"" Apr 30 13:58:03.965165 
containerd[1823]: time="2025-04-30T13:58:03.965155080Z" level=info msg="Ensure that sandbox cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687 in task-service has been cleanup successfully" Apr 30 13:58:03.965250 containerd[1823]: time="2025-04-30T13:58:03.965240490Z" level=info msg="TearDown network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" successfully" Apr 30 13:58:03.965290 containerd[1823]: time="2025-04-30T13:58:03.965249921Z" level=info msg="StopPodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" returns successfully" Apr 30 13:58:03.965410 containerd[1823]: time="2025-04-30T13:58:03.965400880Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\"" Apr 30 13:58:03.965462 containerd[1823]: time="2025-04-30T13:58:03.965452148Z" level=info msg="TearDown network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" successfully" Apr 30 13:58:03.965482 kubelet[3137]: I0430 13:58:03.965454 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248" Apr 30 13:58:03.965507 containerd[1823]: time="2025-04-30T13:58:03.965463928Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" returns successfully" Apr 30 13:58:03.965592 containerd[1823]: time="2025-04-30T13:58:03.965578255Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" Apr 30 13:58:03.965637 containerd[1823]: time="2025-04-30T13:58:03.965622472Z" level=info msg="TearDown network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" successfully" Apr 30 13:58:03.965637 containerd[1823]: time="2025-04-30T13:58:03.965621970Z" level=info msg="StopPodSandbox for 
\"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\"" Apr 30 13:58:03.965704 containerd[1823]: time="2025-04-30T13:58:03.965630044Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" returns successfully" Apr 30 13:58:03.965735 containerd[1823]: time="2025-04-30T13:58:03.965720615Z" level=info msg="Ensure that sandbox 454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248 in task-service has been cleanup successfully" Apr 30 13:58:03.965798 containerd[1823]: time="2025-04-30T13:58:03.965788736Z" level=info msg="TearDown network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" successfully" Apr 30 13:58:03.965798 containerd[1823]: time="2025-04-30T13:58:03.965796833Z" level=info msg="StopPodSandbox for \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" returns successfully" Apr 30 13:58:03.965878 containerd[1823]: time="2025-04-30T13:58:03.965864880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:3,}" Apr 30 13:58:03.965914 containerd[1823]: time="2025-04-30T13:58:03.965901343Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\"" Apr 30 13:58:03.965944 containerd[1823]: time="2025-04-30T13:58:03.965935414Z" level=info msg="TearDown network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" successfully" Apr 30 13:58:03.965969 containerd[1823]: time="2025-04-30T13:58:03.965944609Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" returns successfully" Apr 30 13:58:03.966002 systemd[1]: run-netns-cni\x2d1165af20\x2d7117\x2d53a9\x2d8e4b\x2d556d7654418c.mount: Deactivated successfully. 
Apr 30 13:58:03.966064 containerd[1823]: time="2025-04-30T13:58:03.966039061Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" Apr 30 13:58:03.966097 containerd[1823]: time="2025-04-30T13:58:03.966083103Z" level=info msg="TearDown network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" successfully" Apr 30 13:58:03.966097 containerd[1823]: time="2025-04-30T13:58:03.966089131Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" returns successfully" Apr 30 13:58:03.966074 systemd[1]: run-netns-cni\x2d08a450fc\x2deddf\x2d9ac2\x2d2f62\x2db91b177e045d.mount: Deactivated successfully. Apr 30 13:58:03.966133 systemd[1]: run-netns-cni\x2d8333cf78\x2d8692\x2dd700\x2d800b\x2dde793a21c1b0.mount: Deactivated successfully. Apr 30 13:58:03.966262 containerd[1823]: time="2025-04-30T13:58:03.966251021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:3,}" Apr 30 13:58:03.967952 systemd[1]: run-netns-cni\x2df84698fa\x2db1f0\x2d4ca4\x2dd062\x2dde3d9ca1e76d.mount: Deactivated successfully. Apr 30 13:58:03.968019 systemd[1]: run-netns-cni\x2ddb7faa8d\x2d264c\x2d1514\x2d4696\x2d55dbdf882964.mount: Deactivated successfully. 
Apr 30 13:58:04.162988 containerd[1823]: time="2025-04-30T13:58:04.162947624Z" level=error msg="Failed to destroy network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163153 containerd[1823]: time="2025-04-30T13:58:04.163130727Z" level=error msg="Failed to destroy network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163211 containerd[1823]: time="2025-04-30T13:58:04.163196081Z" level=error msg="encountered an error cleaning up failed sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163284 containerd[1823]: time="2025-04-30T13:58:04.163246029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163322 containerd[1823]: time="2025-04-30T13:58:04.163291409Z" level=error msg="Failed to destroy network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163353 containerd[1823]: time="2025-04-30T13:58:04.163321768Z" level=error msg="encountered an error cleaning up failed sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163384 containerd[1823]: time="2025-04-30T13:58:04.163354014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163421 kubelet[3137]: E0430 13:58:04.163395 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163467 kubelet[3137]: E0430 13:58:04.163422 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 30 13:58:04.163467 kubelet[3137]: E0430 13:58:04.163443 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:04.163467 kubelet[3137]: E0430 13:58:04.163447 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:04.163467 kubelet[3137]: E0430 13:58:04.163456 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:04.163584 kubelet[3137]: E0430 13:58:04.163459 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:04.163584 kubelet[3137]: E0430 13:58:04.163491 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5qrfx" podUID="08e4b302-96b3-481a-a360-157647755634" Apr 30 13:58:04.163584 kubelet[3137]: E0430 13:58:04.163489 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vfzr4" podUID="ab8656fb-99ce-48bb-acd9-1526ae955046" Apr 30 13:58:04.163688 containerd[1823]: time="2025-04-30T13:58:04.163477428Z" level=error msg="encountered an error cleaning up failed sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163688 containerd[1823]: time="2025-04-30T13:58:04.163516163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163743 kubelet[3137]: E0430 13:58:04.163589 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.163743 kubelet[3137]: E0430 13:58:04.163605 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:58:04.163743 kubelet[3137]: E0430 13:58:04.163616 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:58:04.163797 kubelet[3137]: E0430 13:58:04.163635 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gc5mn_calico-system(f155cb52-e455-42c3-b112-e9f1dc1f3da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gc5mn_calico-system(f155cb52-e455-42c3-b112-e9f1dc1f3da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:58:04.164201 containerd[1823]: time="2025-04-30T13:58:04.164189754Z" level=error msg="Failed to destroy network for sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.164336 containerd[1823]: time="2025-04-30T13:58:04.164324429Z" level=error msg="encountered an error cleaning up failed sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.164369 containerd[1823]: time="2025-04-30T13:58:04.164351760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for 
sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.164430 kubelet[3137]: E0430 13:58:04.164419 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.164459 kubelet[3137]: E0430 13:58:04.164437 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:04.164459 kubelet[3137]: E0430 13:58:04.164447 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:04.164525 kubelet[3137]: E0430 13:58:04.164468 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" podUID="12494da2-c4f7-4daa-8094-3e959335689c" Apr 30 13:58:04.165047 containerd[1823]: time="2025-04-30T13:58:04.165030016Z" level=error msg="Failed to destroy network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.165194 containerd[1823]: time="2025-04-30T13:58:04.165181855Z" level=error msg="encountered an error cleaning up failed sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.165226 containerd[1823]: time="2025-04-30T13:58:04.165206675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.165297 kubelet[3137]: 
E0430 13:58:04.165283 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.165325 kubelet[3137]: E0430 13:58:04.165305 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:04.165325 kubelet[3137]: E0430 13:58:04.165315 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:04.165370 kubelet[3137]: E0430 13:58:04.165336 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" podUID="4722e80e-9f8b-4616-8557-09179829c5a7" Apr 30 13:58:04.166301 containerd[1823]: time="2025-04-30T13:58:04.166288543Z" level=error msg="Failed to destroy network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.166432 containerd[1823]: time="2025-04-30T13:58:04.166417214Z" level=error msg="encountered an error cleaning up failed sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.166461 containerd[1823]: time="2025-04-30T13:58:04.166443706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.166512 kubelet[3137]: E0430 13:58:04.166501 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:04.166541 kubelet[3137]: E0430 13:58:04.166518 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:04.166541 kubelet[3137]: E0430 13:58:04.166528 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:04.166576 kubelet[3137]: E0430 13:58:04.166544 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" podUID="1b537974-b389-45c2-aa3e-aa0f95c40835" Apr 30 13:58:04.967349 kubelet[3137]: I0430 
13:58:04.967333 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098" Apr 30 13:58:04.967630 containerd[1823]: time="2025-04-30T13:58:04.967607653Z" level=info msg="StopPodSandbox for \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\"" Apr 30 13:58:04.967821 containerd[1823]: time="2025-04-30T13:58:04.967739148Z" level=info msg="Ensure that sandbox f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098 in task-service has been cleanup successfully" Apr 30 13:58:04.967853 containerd[1823]: time="2025-04-30T13:58:04.967838303Z" level=info msg="TearDown network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\" successfully" Apr 30 13:58:04.967853 containerd[1823]: time="2025-04-30T13:58:04.967849207Z" level=info msg="StopPodSandbox for \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\" returns successfully" Apr 30 13:58:04.967959 containerd[1823]: time="2025-04-30T13:58:04.967945705Z" level=info msg="StopPodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\"" Apr 30 13:58:04.968026 containerd[1823]: time="2025-04-30T13:58:04.967997847Z" level=info msg="TearDown network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" successfully" Apr 30 13:58:04.968062 containerd[1823]: time="2025-04-30T13:58:04.968025570Z" level=info msg="StopPodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" returns successfully" Apr 30 13:58:04.968088 kubelet[3137]: I0430 13:58:04.968047 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4" Apr 30 13:58:04.968188 containerd[1823]: time="2025-04-30T13:58:04.968173578Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\"" Apr 30 
13:58:04.968271 containerd[1823]: time="2025-04-30T13:58:04.968241801Z" level=info msg="TearDown network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" successfully" Apr 30 13:58:04.968306 containerd[1823]: time="2025-04-30T13:58:04.968271545Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" returns successfully" Apr 30 13:58:04.968306 containerd[1823]: time="2025-04-30T13:58:04.968294192Z" level=info msg="StopPodSandbox for \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\"" Apr 30 13:58:04.968414 containerd[1823]: time="2025-04-30T13:58:04.968404567Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" Apr 30 13:58:04.968449 containerd[1823]: time="2025-04-30T13:58:04.968411791Z" level=info msg="Ensure that sandbox 957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4 in task-service has been cleanup successfully" Apr 30 13:58:04.968449 containerd[1823]: time="2025-04-30T13:58:04.968442826Z" level=info msg="TearDown network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" successfully" Apr 30 13:58:04.968502 containerd[1823]: time="2025-04-30T13:58:04.968450679Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" returns successfully" Apr 30 13:58:04.968533 containerd[1823]: time="2025-04-30T13:58:04.968505283Z" level=info msg="TearDown network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\" successfully" Apr 30 13:58:04.968533 containerd[1823]: time="2025-04-30T13:58:04.968512606Z" level=info msg="StopPodSandbox for \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\" returns successfully" Apr 30 13:58:04.968614 containerd[1823]: time="2025-04-30T13:58:04.968606248Z" level=info msg="StopPodSandbox for 
\"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\"" Apr 30 13:58:04.968645 containerd[1823]: time="2025-04-30T13:58:04.968639028Z" level=info msg="TearDown network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" successfully" Apr 30 13:58:04.968706 containerd[1823]: time="2025-04-30T13:58:04.968644903Z" level=info msg="StopPodSandbox for \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" returns successfully" Apr 30 13:58:04.968736 containerd[1823]: time="2025-04-30T13:58:04.968720105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:4,}" Apr 30 13:58:04.968767 containerd[1823]: time="2025-04-30T13:58:04.968743302Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\"" Apr 30 13:58:04.968800 containerd[1823]: time="2025-04-30T13:58:04.968790117Z" level=info msg="TearDown network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" successfully" Apr 30 13:58:04.968832 containerd[1823]: time="2025-04-30T13:58:04.968799778Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" returns successfully" Apr 30 13:58:04.968938 kubelet[3137]: I0430 13:58:04.968928 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0" Apr 30 13:58:04.969046 containerd[1823]: time="2025-04-30T13:58:04.969037520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:3,}" Apr 30 13:58:04.969175 containerd[1823]: time="2025-04-30T13:58:04.969160136Z" level=info msg="StopPodSandbox for \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\"" Apr 30 13:58:04.969305 containerd[1823]: 
time="2025-04-30T13:58:04.969292085Z" level=info msg="Ensure that sandbox eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0 in task-service has been cleanup successfully" Apr 30 13:58:04.969438 containerd[1823]: time="2025-04-30T13:58:04.969396030Z" level=info msg="TearDown network for sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\" successfully" Apr 30 13:58:04.969438 containerd[1823]: time="2025-04-30T13:58:04.969407735Z" level=info msg="StopPodSandbox for \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\" returns successfully" Apr 30 13:58:04.969413 systemd[1]: run-netns-cni\x2d32725b0f\x2d6799\x2da707\x2d65d1\x2db507b87c8374.mount: Deactivated successfully. Apr 30 13:58:04.969634 containerd[1823]: time="2025-04-30T13:58:04.969536395Z" level=info msg="StopPodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\"" Apr 30 13:58:04.969634 containerd[1823]: time="2025-04-30T13:58:04.969588414Z" level=info msg="TearDown network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" successfully" Apr 30 13:58:04.969634 containerd[1823]: time="2025-04-30T13:58:04.969597960Z" level=info msg="StopPodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" returns successfully" Apr 30 13:58:04.969759 containerd[1823]: time="2025-04-30T13:58:04.969745224Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\"" Apr 30 13:58:04.969829 containerd[1823]: time="2025-04-30T13:58:04.969800333Z" level=info msg="TearDown network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" successfully" Apr 30 13:58:04.969852 containerd[1823]: time="2025-04-30T13:58:04.969829098Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" returns successfully" Apr 30 13:58:04.969878 kubelet[3137]: I0430 13:58:04.969837 3137 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e" Apr 30 13:58:04.969981 containerd[1823]: time="2025-04-30T13:58:04.969969956Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" Apr 30 13:58:04.970054 containerd[1823]: time="2025-04-30T13:58:04.970046041Z" level=info msg="TearDown network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" successfully" Apr 30 13:58:04.970077 containerd[1823]: time="2025-04-30T13:58:04.970053854Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" returns successfully" Apr 30 13:58:04.970113 containerd[1823]: time="2025-04-30T13:58:04.970106187Z" level=info msg="StopPodSandbox for \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\"" Apr 30 13:58:04.970229 containerd[1823]: time="2025-04-30T13:58:04.970217562Z" level=info msg="Ensure that sandbox 6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e in task-service has been cleanup successfully" Apr 30 13:58:04.970316 containerd[1823]: time="2025-04-30T13:58:04.970300226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:4,}" Apr 30 13:58:04.970344 containerd[1823]: time="2025-04-30T13:58:04.970324815Z" level=info msg="TearDown network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\" successfully" Apr 30 13:58:04.970344 containerd[1823]: time="2025-04-30T13:58:04.970333546Z" level=info msg="StopPodSandbox for \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\" returns successfully" Apr 30 13:58:04.970474 containerd[1823]: time="2025-04-30T13:58:04.970462640Z" level=info msg="StopPodSandbox for 
\"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\"" Apr 30 13:58:04.970523 containerd[1823]: time="2025-04-30T13:58:04.970513211Z" level=info msg="TearDown network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" successfully" Apr 30 13:58:04.970556 containerd[1823]: time="2025-04-30T13:58:04.970522608Z" level=info msg="StopPodSandbox for \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" returns successfully" Apr 30 13:58:04.970696 containerd[1823]: time="2025-04-30T13:58:04.970683430Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\"" Apr 30 13:58:04.970752 containerd[1823]: time="2025-04-30T13:58:04.970740816Z" level=info msg="TearDown network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" successfully" Apr 30 13:58:04.970784 containerd[1823]: time="2025-04-30T13:58:04.970751466Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" returns successfully" Apr 30 13:58:04.970817 kubelet[3137]: I0430 13:58:04.970769 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb" Apr 30 13:58:04.970933 containerd[1823]: time="2025-04-30T13:58:04.970920838Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" Apr 30 13:58:04.970977 containerd[1823]: time="2025-04-30T13:58:04.970967544Z" level=info msg="TearDown network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" successfully" Apr 30 13:58:04.971001 containerd[1823]: time="2025-04-30T13:58:04.970978077Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" returns successfully" Apr 30 13:58:04.971034 containerd[1823]: time="2025-04-30T13:58:04.971015999Z" level=info msg="StopPodSandbox 
for \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\"" Apr 30 13:58:04.971113 containerd[1823]: time="2025-04-30T13:58:04.971104709Z" level=info msg="Ensure that sandbox 71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb in task-service has been cleanup successfully" Apr 30 13:58:04.971170 containerd[1823]: time="2025-04-30T13:58:04.971157190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:4,}" Apr 30 13:58:04.971211 containerd[1823]: time="2025-04-30T13:58:04.971201361Z" level=info msg="TearDown network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\" successfully" Apr 30 13:58:04.971233 containerd[1823]: time="2025-04-30T13:58:04.971213345Z" level=info msg="StopPodSandbox for \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\" returns successfully" Apr 30 13:58:04.971347 containerd[1823]: time="2025-04-30T13:58:04.971333348Z" level=info msg="StopPodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\"" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971380961Z" level=info msg="TearDown network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" successfully" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971389310Z" level=info msg="StopPodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" returns successfully" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971490754Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\"" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971542401Z" level=info msg="TearDown network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" successfully" Apr 30 13:58:04.972080 containerd[1823]: 
time="2025-04-30T13:58:04.971552595Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" returns successfully" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971715212Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971758647Z" level=info msg="TearDown network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" successfully" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971765351Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" returns successfully" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971814639Z" level=info msg="StopPodSandbox for \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\"" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971903128Z" level=info msg="Ensure that sandbox ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e in task-service has been cleanup successfully" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971940503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:4,}" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971989361Z" level=info msg="TearDown network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\" successfully" Apr 30 13:58:04.972080 containerd[1823]: time="2025-04-30T13:58:04.971998464Z" level=info msg="StopPodSandbox for \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\" returns successfully" Apr 30 13:58:04.972299 kubelet[3137]: I0430 13:58:04.971614 3137 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e" Apr 30 13:58:04.971471 systemd[1]: run-netns-cni\x2d19529821\x2da184\x2d4571\x2d42f9\x2d57744dec1e60.mount: Deactivated successfully. Apr 30 13:58:04.972354 containerd[1823]: time="2025-04-30T13:58:04.972098995Z" level=info msg="StopPodSandbox for \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\"" Apr 30 13:58:04.972354 containerd[1823]: time="2025-04-30T13:58:04.972136911Z" level=info msg="TearDown network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" successfully" Apr 30 13:58:04.972354 containerd[1823]: time="2025-04-30T13:58:04.972143037Z" level=info msg="StopPodSandbox for \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" returns successfully" Apr 30 13:58:04.972354 containerd[1823]: time="2025-04-30T13:58:04.972249494Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\"" Apr 30 13:58:04.972354 containerd[1823]: time="2025-04-30T13:58:04.972284287Z" level=info msg="TearDown network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" successfully" Apr 30 13:58:04.972354 containerd[1823]: time="2025-04-30T13:58:04.972289697Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" returns successfully" Apr 30 13:58:04.971529 systemd[1]: run-netns-cni\x2de0b84e41\x2d178d\x2ddebf\x2da28f\x2d9894a091f41c.mount: Deactivated successfully. 
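The error repeated throughout these entries names its own remediation: confirm that calico/node is running and has written `/var/lib/calico/nodename` on the host, since that is exactly the file the CNI plugin stats before every add/delete. A minimal, hypothetical host-side check (the directory is parameterized so it can be exercised anywhere; on a real node it would be `/var/lib/calico`):

```python
from pathlib import Path

def calico_nodename(calico_dir="/var/lib/calico"):
    """Return the node name written by calico/node, or None if the
    nodename file is absent -- the condition behind the repeated
    'stat /var/lib/calico/nodename: no such file or directory' above."""
    nodename = Path(calico_dir) / "nodename"
    if nodename.is_file():
        return nodename.read_text().strip()
    return None
```

If this returns None, CNI add/delete calls will keep failing with the stat error above until calico-node starts and populates the file.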
Apr 30 13:58:04.972479 containerd[1823]: time="2025-04-30T13:58:04.972380931Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" Apr 30 13:58:04.972479 containerd[1823]: time="2025-04-30T13:58:04.972419802Z" level=info msg="TearDown network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" successfully" Apr 30 13:58:04.972479 containerd[1823]: time="2025-04-30T13:58:04.972426314Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" returns successfully" Apr 30 13:58:04.972579 containerd[1823]: time="2025-04-30T13:58:04.972570075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:4,}" Apr 30 13:58:04.973297 systemd[1]: run-netns-cni\x2d449a1ce1\x2d74b5\x2da63e\x2d5336\x2d2715094f72ef.mount: Deactivated successfully. Apr 30 13:58:04.973364 systemd[1]: run-netns-cni\x2d2eb74f20\x2d60cc\x2da27e\x2dfba7\x2d2de823f828b2.mount: Deactivated successfully. Apr 30 13:58:04.973420 systemd[1]: run-netns-cni\x2dd25b0383\x2dbab8\x2d75d0\x2d7fe8\x2dee95d9a27df5.mount: Deactivated successfully. 
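The burst above repeats the same CNI failure across several pods, so when triaging a journal like this it can help to count "Error syncing pod" entries per pod. A minimal sketch, assuming the kubelet log format shown above; the sample lines are abbreviated fragments modeled on the entries in this log and the `err=...` payloads are omitted for readability:

```python
import re
from collections import Counter

# Abbreviated sample lines modeled on the kubelet entries above.
LINES = [
    'kubelet[3137]: E0430 13:58:04.163491 3137 pod_workers.go:1301] "Error syncing pod, skipping" pod="kube-system/coredns-6f6b679f8f-5qrfx" podUID="08e4b302-96b3-481a-a360-157647755634"',
    'kubelet[3137]: E0430 13:58:04.163489 3137 pod_workers.go:1301] "Error syncing pod, skipping" pod="kube-system/coredns-6f6b679f8f-vfzr4" podUID="ab8656fb-99ce-48bb-acd9-1526ae955046"',
    'kubelet[3137]: E0430 13:58:05.011232 3137 pod_workers.go:1301] "Error syncing pod, skipping" pod="kube-system/coredns-6f6b679f8f-vfzr4" podUID="ab8656fb-99ce-48bb-acd9-1526ae955046"',
]

POD_RE = re.compile(r'pod="([^"]+)" podUID="([^"]+)"')

def failing_pods(lines):
    """Count 'Error syncing pod' occurrences per (pod, podUID) pair."""
    counts = Counter()
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        m = POD_RE.search(line)
        if m:
            counts[m.groups()] += 1
    return counts

for (pod, uid), n in failing_pods(LINES).most_common():
    print(f"{n:3d}  {pod}  {uid}")
```

A skewed count toward one pod suggests a pod-specific problem; a uniform spread across all pending pods, as in this log, points at a node-level cause such as the missing calico/node state.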
Apr 30 13:58:05.010630 containerd[1823]: time="2025-04-30T13:58:05.010585415Z" level=error msg="Failed to destroy network for sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.010875 containerd[1823]: time="2025-04-30T13:58:05.010847560Z" level=error msg="encountered an error cleaning up failed sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.010949 containerd[1823]: time="2025-04-30T13:58:05.010922824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.011146 kubelet[3137]: E0430 13:58:05.011114 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.011208 kubelet[3137]: E0430 13:58:05.011170 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:05.011208 kubelet[3137]: E0430 13:58:05.011192 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vfzr4" Apr 30 13:58:05.011274 kubelet[3137]: E0430 13:58:05.011232 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vfzr4_kube-system(ab8656fb-99ce-48bb-acd9-1526ae955046)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vfzr4" podUID="ab8656fb-99ce-48bb-acd9-1526ae955046" Apr 30 13:58:05.016458 containerd[1823]: time="2025-04-30T13:58:05.016420169Z" level=error msg="Failed to destroy network for sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 
13:58:05.016560 containerd[1823]: time="2025-04-30T13:58:05.016458585Z" level=error msg="Failed to destroy network for sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016560 containerd[1823]: time="2025-04-30T13:58:05.016438824Z" level=error msg="Failed to destroy network for sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016662 containerd[1823]: time="2025-04-30T13:58:05.016648989Z" level=error msg="encountered an error cleaning up failed sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016684 containerd[1823]: time="2025-04-30T13:58:05.016665130Z" level=error msg="encountered an error cleaning up failed sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016702 containerd[1823]: time="2025-04-30T13:58:05.016679253Z" level=error msg="encountered an error cleaning up failed sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016723 containerd[1823]: time="2025-04-30T13:58:05.016699696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016744 containerd[1823]: time="2025-04-30T13:58:05.016716268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016784 containerd[1823]: time="2025-04-30T13:58:05.016686660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016875 kubelet[3137]: E0430 13:58:05.016857 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016905 kubelet[3137]: E0430 13:58:05.016893 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:05.016929 kubelet[3137]: E0430 13:58:05.016908 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" Apr 30 13:58:05.016953 kubelet[3137]: E0430 13:58:05.016857 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.016953 kubelet[3137]: E0430 13:58:05.016933 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-5869549b56-tp96n_calico-apiserver(1b537974-b389-45c2-aa3e-aa0f95c40835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" podUID="1b537974-b389-45c2-aa3e-aa0f95c40835" Apr 30 13:58:05.016953 kubelet[3137]: E0430 13:58:05.016858 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.017068 kubelet[3137]: E0430 13:58:05.016961 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:58:05.017068 kubelet[3137]: E0430 13:58:05.016970 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 
13:58:05.017068 kubelet[3137]: E0430 13:58:05.016980 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gc5mn" Apr 30 13:58:05.017068 kubelet[3137]: E0430 13:58:05.016989 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" Apr 30 13:58:05.017171 kubelet[3137]: E0430 13:58:05.017011 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gc5mn_calico-system(f155cb52-e455-42c3-b112-e9f1dc1f3da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gc5mn_calico-system(f155cb52-e455-42c3-b112-e9f1dc1f3da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gc5mn" podUID="f155cb52-e455-42c3-b112-e9f1dc1f3da7" Apr 30 13:58:05.017171 kubelet[3137]: E0430 13:58:05.017020 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79b68cbf8-dwcxs_calico-system(12494da2-c4f7-4daa-8094-3e959335689c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" podUID="12494da2-c4f7-4daa-8094-3e959335689c" Apr 30 13:58:05.017603 containerd[1823]: time="2025-04-30T13:58:05.017585029Z" level=error msg="Failed to destroy network for sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.017741 containerd[1823]: time="2025-04-30T13:58:05.017727295Z" level=error msg="encountered an error cleaning up failed sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.017779 containerd[1823]: time="2025-04-30T13:58:05.017753857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.017840 kubelet[3137]: E0430 13:58:05.017825 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.017877 kubelet[3137]: E0430 13:58:05.017848 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:05.017877 kubelet[3137]: E0430 13:58:05.017860 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5qrfx" Apr 30 13:58:05.017943 kubelet[3137]: E0430 13:58:05.017878 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5qrfx_kube-system(08e4b302-96b3-481a-a360-157647755634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5qrfx" podUID="08e4b302-96b3-481a-a360-157647755634" Apr 30 13:58:05.018506 containerd[1823]: time="2025-04-30T13:58:05.018493359Z" level=error msg="Failed to destroy network for sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.018635 containerd[1823]: time="2025-04-30T13:58:05.018623384Z" level=error msg="encountered an error cleaning up failed sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.018658 containerd[1823]: time="2025-04-30T13:58:05.018645501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.018719 kubelet[3137]: E0430 13:58:05.018708 3137 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:58:05.018742 kubelet[3137]: E0430 13:58:05.018725 3137 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:05.018742 kubelet[3137]: E0430 13:58:05.018736 3137 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" Apr 30 13:58:05.018782 kubelet[3137]: E0430 13:58:05.018752 3137 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5869549b56-fp9s8_calico-apiserver(4722e80e-9f8b-4616-8557-09179829c5a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" podUID="4722e80e-9f8b-4616-8557-09179829c5a7" Apr 30 13:58:05.314333 containerd[1823]: time="2025-04-30T13:58:05.314284586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:05.314579 containerd[1823]: time="2025-04-30T13:58:05.314528424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 13:58:05.314993 containerd[1823]: time="2025-04-30T13:58:05.314947513Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:05.315875 containerd[1823]: time="2025-04-30T13:58:05.315829290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:05.316299 containerd[1823]: time="2025-04-30T13:58:05.316253555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 3.370020648s" Apr 30 13:58:05.316299 containerd[1823]: time="2025-04-30T13:58:05.316266360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 13:58:05.319551 containerd[1823]: time="2025-04-30T13:58:05.319499098Z" level=info msg="CreateContainer within sandbox \"160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 
30 13:58:05.329415 containerd[1823]: time="2025-04-30T13:58:05.329372512Z" level=info msg="CreateContainer within sandbox \"160431594b299defc2d943e6ea193b5524f05a34783160b4b9519fa7f5d15a89\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"efc24e9534afab35feef6152d7e4cae5058109b50561df3a4b3c00ac7821ec2d\"" Apr 30 13:58:05.329639 containerd[1823]: time="2025-04-30T13:58:05.329606563Z" level=info msg="StartContainer for \"efc24e9534afab35feef6152d7e4cae5058109b50561df3a4b3c00ac7821ec2d\"" Apr 30 13:58:05.352566 systemd[1]: Started cri-containerd-efc24e9534afab35feef6152d7e4cae5058109b50561df3a4b3c00ac7821ec2d.scope - libcontainer container efc24e9534afab35feef6152d7e4cae5058109b50561df3a4b3c00ac7821ec2d. Apr 30 13:58:05.361780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217-shm.mount: Deactivated successfully. Apr 30 13:58:05.361838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33527762.mount: Deactivated successfully. Apr 30 13:58:05.370303 containerd[1823]: time="2025-04-30T13:58:05.370253797Z" level=info msg="StartContainer for \"efc24e9534afab35feef6152d7e4cae5058109b50561df3a4b3c00ac7821ec2d\" returns successfully" Apr 30 13:58:05.428226 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 13:58:05.428290 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Apr 30 13:58:05.979791 kubelet[3137]: I0430 13:58:05.979674 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217" Apr 30 13:58:05.980894 containerd[1823]: time="2025-04-30T13:58:05.980770023Z" level=info msg="StopPodSandbox for \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\"" Apr 30 13:58:05.981630 containerd[1823]: time="2025-04-30T13:58:05.981507606Z" level=info msg="Ensure that sandbox 219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217 in task-service has been cleanup successfully" Apr 30 13:58:05.982125 containerd[1823]: time="2025-04-30T13:58:05.982031135Z" level=info msg="TearDown network for sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\" successfully" Apr 30 13:58:05.982125 containerd[1823]: time="2025-04-30T13:58:05.982080687Z" level=info msg="StopPodSandbox for \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\" returns successfully" Apr 30 13:58:05.982787 containerd[1823]: time="2025-04-30T13:58:05.982693532Z" level=info msg="StopPodSandbox for \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\"" Apr 30 13:58:05.982980 containerd[1823]: time="2025-04-30T13:58:05.982917621Z" level=info msg="TearDown network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\" successfully" Apr 30 13:58:05.982980 containerd[1823]: time="2025-04-30T13:58:05.982954780Z" level=info msg="StopPodSandbox for \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\" returns successfully" Apr 30 13:58:05.983725 containerd[1823]: time="2025-04-30T13:58:05.983643122Z" level=info msg="StopPodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\"" Apr 30 13:58:05.984093 containerd[1823]: time="2025-04-30T13:58:05.983951196Z" level=info msg="TearDown network for sandbox 
\"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" successfully" Apr 30 13:58:05.984354 containerd[1823]: time="2025-04-30T13:58:05.984085664Z" level=info msg="StopPodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" returns successfully" Apr 30 13:58:05.984866 containerd[1823]: time="2025-04-30T13:58:05.984753637Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\"" Apr 30 13:58:05.985071 kubelet[3137]: I0430 13:58:05.984801 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200" Apr 30 13:58:05.985196 containerd[1823]: time="2025-04-30T13:58:05.985004886Z" level=info msg="TearDown network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" successfully" Apr 30 13:58:05.985196 containerd[1823]: time="2025-04-30T13:58:05.985052294Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" returns successfully" Apr 30 13:58:05.985744 containerd[1823]: time="2025-04-30T13:58:05.985680506Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" Apr 30 13:58:05.985959 containerd[1823]: time="2025-04-30T13:58:05.985908883Z" level=info msg="TearDown network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" successfully" Apr 30 13:58:05.985959 containerd[1823]: time="2025-04-30T13:58:05.985948835Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" returns successfully" Apr 30 13:58:05.986224 containerd[1823]: time="2025-04-30T13:58:05.986058077Z" level=info msg="StopPodSandbox for \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\"" Apr 30 13:58:05.986692 containerd[1823]: time="2025-04-30T13:58:05.986635214Z" level=info msg="Ensure that sandbox 
48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200 in task-service has been cleanup successfully" Apr 30 13:58:05.987105 containerd[1823]: time="2025-04-30T13:58:05.987054761Z" level=info msg="TearDown network for sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\" successfully" Apr 30 13:58:05.987105 containerd[1823]: time="2025-04-30T13:58:05.987099173Z" level=info msg="StopPodSandbox for \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\" returns successfully" Apr 30 13:58:05.987185 containerd[1823]: time="2025-04-30T13:58:05.987075526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:5,}" Apr 30 13:58:05.987369 containerd[1823]: time="2025-04-30T13:58:05.987337820Z" level=info msg="StopPodSandbox for \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\"" Apr 30 13:58:05.987420 containerd[1823]: time="2025-04-30T13:58:05.987397464Z" level=info msg="TearDown network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\" successfully" Apr 30 13:58:05.987420 containerd[1823]: time="2025-04-30T13:58:05.987417625Z" level=info msg="StopPodSandbox for \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\" returns successfully" Apr 30 13:58:05.987552 containerd[1823]: time="2025-04-30T13:58:05.987542202Z" level=info msg="StopPodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\"" Apr 30 13:58:05.987631 containerd[1823]: time="2025-04-30T13:58:05.987620958Z" level=info msg="TearDown network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" successfully" Apr 30 13:58:05.987654 containerd[1823]: time="2025-04-30T13:58:05.987631048Z" level=info msg="StopPodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" returns successfully" Apr 30 13:58:05.987667 systemd[1]: 
run-netns-cni\x2d3aab0828\x2d38f1\x2db100\x2da55d\x2d19854f466b13.mount: Deactivated successfully. Apr 30 13:58:05.987787 containerd[1823]: time="2025-04-30T13:58:05.987774390Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\"" Apr 30 13:58:05.987835 containerd[1823]: time="2025-04-30T13:58:05.987825265Z" level=info msg="TearDown network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" successfully" Apr 30 13:58:05.987854 kubelet[3137]: I0430 13:58:05.987827 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7" Apr 30 13:58:05.987877 containerd[1823]: time="2025-04-30T13:58:05.987836466Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" returns successfully" Apr 30 13:58:05.987978 containerd[1823]: time="2025-04-30T13:58:05.987967639Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" Apr 30 13:58:05.988016 containerd[1823]: time="2025-04-30T13:58:05.988009134Z" level=info msg="TearDown network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" successfully" Apr 30 13:58:05.988038 containerd[1823]: time="2025-04-30T13:58:05.988016463Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" returns successfully" Apr 30 13:58:05.988069 containerd[1823]: time="2025-04-30T13:58:05.988061526Z" level=info msg="StopPodSandbox for \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\"" Apr 30 13:58:05.988174 containerd[1823]: time="2025-04-30T13:58:05.988162846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:5,}" Apr 30 13:58:05.988247 containerd[1823]: 
time="2025-04-30T13:58:05.988164243Z" level=info msg="Ensure that sandbox 2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7 in task-service has been cleanup successfully" Apr 30 13:58:05.988333 containerd[1823]: time="2025-04-30T13:58:05.988323993Z" level=info msg="TearDown network for sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\" successfully" Apr 30 13:58:05.988369 containerd[1823]: time="2025-04-30T13:58:05.988333927Z" level=info msg="StopPodSandbox for \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\" returns successfully" Apr 30 13:58:05.988446 containerd[1823]: time="2025-04-30T13:58:05.988433582Z" level=info msg="StopPodSandbox for \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\"" Apr 30 13:58:05.988494 containerd[1823]: time="2025-04-30T13:58:05.988483959Z" level=info msg="TearDown network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\" successfully" Apr 30 13:58:05.988519 containerd[1823]: time="2025-04-30T13:58:05.988494758Z" level=info msg="StopPodSandbox for \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\" returns successfully" Apr 30 13:58:05.988653 containerd[1823]: time="2025-04-30T13:58:05.988637791Z" level=info msg="StopPodSandbox for \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\"" Apr 30 13:58:05.988732 containerd[1823]: time="2025-04-30T13:58:05.988700337Z" level=info msg="TearDown network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" successfully" Apr 30 13:58:05.988782 containerd[1823]: time="2025-04-30T13:58:05.988733606Z" level=info msg="StopPodSandbox for \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" returns successfully" Apr 30 13:58:05.988808 kubelet[3137]: I0430 13:58:05.988739 3137 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b" Apr 30 13:58:05.988878 containerd[1823]: time="2025-04-30T13:58:05.988868717Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\"" Apr 30 13:58:05.988921 containerd[1823]: time="2025-04-30T13:58:05.988911178Z" level=info msg="TearDown network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" successfully" Apr 30 13:58:05.988921 containerd[1823]: time="2025-04-30T13:58:05.988919328Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" returns successfully" Apr 30 13:58:05.989016 containerd[1823]: time="2025-04-30T13:58:05.989006094Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" Apr 30 13:58:05.989050 containerd[1823]: time="2025-04-30T13:58:05.989040142Z" level=info msg="StopPodSandbox for \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\"" Apr 30 13:58:05.989085 containerd[1823]: time="2025-04-30T13:58:05.989065080Z" level=info msg="TearDown network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" successfully" Apr 30 13:58:05.989085 containerd[1823]: time="2025-04-30T13:58:05.989076760Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" returns successfully" Apr 30 13:58:05.989137 containerd[1823]: time="2025-04-30T13:58:05.989127993Z" level=info msg="Ensure that sandbox 200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b in task-service has been cleanup successfully" Apr 30 13:58:05.989243 containerd[1823]: time="2025-04-30T13:58:05.989228316Z" level=info msg="TearDown network for sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\" successfully" Apr 30 13:58:05.989275 containerd[1823]: time="2025-04-30T13:58:05.989242717Z" level=info msg="StopPodSandbox 
for \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\" returns successfully" Apr 30 13:58:05.989304 containerd[1823]: time="2025-04-30T13:58:05.989270184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:5,}" Apr 30 13:58:05.989533 containerd[1823]: time="2025-04-30T13:58:05.989346019Z" level=info msg="StopPodSandbox for \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\"" Apr 30 13:58:05.989533 containerd[1823]: time="2025-04-30T13:58:05.989384458Z" level=info msg="TearDown network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\" successfully" Apr 30 13:58:05.989533 containerd[1823]: time="2025-04-30T13:58:05.989390490Z" level=info msg="StopPodSandbox for \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\" returns successfully" Apr 30 13:58:05.989533 containerd[1823]: time="2025-04-30T13:58:05.989473448Z" level=info msg="StopPodSandbox for \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\"" Apr 30 13:58:05.989533 containerd[1823]: time="2025-04-30T13:58:05.989515406Z" level=info msg="TearDown network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" successfully" Apr 30 13:58:05.989533 containerd[1823]: time="2025-04-30T13:58:05.989522740Z" level=info msg="StopPodSandbox for \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" returns successfully" Apr 30 13:58:05.989683 containerd[1823]: time="2025-04-30T13:58:05.989625248Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\"" Apr 30 13:58:05.989683 containerd[1823]: time="2025-04-30T13:58:05.989667424Z" level=info msg="TearDown network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" successfully" Apr 30 13:58:05.989683 containerd[1823]: 
time="2025-04-30T13:58:05.989674884Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" returns successfully" Apr 30 13:58:05.989636 systemd[1]: run-netns-cni\x2ddc3a5df2\x2d61fd\x2d1a79\x2d88e9\x2d407ab93eea77.mount: Deactivated successfully. Apr 30 13:58:05.989695 systemd[1]: run-netns-cni\x2d024114ab\x2dcec1\x2df77f\x2d88e7\x2d80de4b8022b9.mount: Deactivated successfully. Apr 30 13:58:05.989924 containerd[1823]: time="2025-04-30T13:58:05.989913233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:4,}" Apr 30 13:58:05.990598 kubelet[3137]: I0430 13:58:05.990582 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112" Apr 30 13:58:05.990869 containerd[1823]: time="2025-04-30T13:58:05.990852110Z" level=info msg="StopPodSandbox for \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\"" Apr 30 13:58:05.990990 containerd[1823]: time="2025-04-30T13:58:05.990973519Z" level=info msg="Ensure that sandbox fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112 in task-service has been cleanup successfully" Apr 30 13:58:05.991086 containerd[1823]: time="2025-04-30T13:58:05.991076811Z" level=info msg="TearDown network for sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\" successfully" Apr 30 13:58:05.991086 containerd[1823]: time="2025-04-30T13:58:05.991085742Z" level=info msg="StopPodSandbox for \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\" returns successfully" Apr 30 13:58:05.991226 containerd[1823]: time="2025-04-30T13:58:05.991215440Z" level=info msg="StopPodSandbox for \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\"" Apr 30 13:58:05.991282 containerd[1823]: time="2025-04-30T13:58:05.991272639Z" level=info msg="TearDown 
network for sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\" successfully" Apr 30 13:58:05.991324 containerd[1823]: time="2025-04-30T13:58:05.991282997Z" level=info msg="StopPodSandbox for \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\" returns successfully" Apr 30 13:58:05.991863 containerd[1823]: time="2025-04-30T13:58:05.991592507Z" level=info msg="StopPodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\"" Apr 30 13:58:05.991863 containerd[1823]: time="2025-04-30T13:58:05.991655835Z" level=info msg="TearDown network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" successfully" Apr 30 13:58:05.991863 containerd[1823]: time="2025-04-30T13:58:05.991666617Z" level=info msg="StopPodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" returns successfully" Apr 30 13:58:05.992165 systemd[1]: run-netns-cni\x2d559706f2\x2d6ef1\x2d8191\x2de657\x2ddaf5c0b619da.mount: Deactivated successfully. 
Apr 30 13:58:05.992628 containerd[1823]: time="2025-04-30T13:58:05.992612580Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\"" Apr 30 13:58:05.992685 containerd[1823]: time="2025-04-30T13:58:05.992674875Z" level=info msg="TearDown network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" successfully" Apr 30 13:58:05.992718 containerd[1823]: time="2025-04-30T13:58:05.992685407Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" returns successfully" Apr 30 13:58:05.992922 kubelet[3137]: I0430 13:58:05.992906 3137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304" Apr 30 13:58:05.992971 containerd[1823]: time="2025-04-30T13:58:05.992958521Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" Apr 30 13:58:05.993030 containerd[1823]: time="2025-04-30T13:58:05.993020331Z" level=info msg="TearDown network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" successfully" Apr 30 13:58:05.993054 containerd[1823]: time="2025-04-30T13:58:05.993031329Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" returns successfully" Apr 30 13:58:05.993221 containerd[1823]: time="2025-04-30T13:58:05.993211456Z" level=info msg="StopPodSandbox for \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\"" Apr 30 13:58:05.993309 containerd[1823]: time="2025-04-30T13:58:05.993293233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:5,}" Apr 30 13:58:05.993343 containerd[1823]: time="2025-04-30T13:58:05.993335106Z" level=info msg="Ensure that sandbox 
9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304 in task-service has been cleanup successfully" Apr 30 13:58:05.993433 containerd[1823]: time="2025-04-30T13:58:05.993423129Z" level=info msg="TearDown network for sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\" successfully" Apr 30 13:58:05.993452 containerd[1823]: time="2025-04-30T13:58:05.993434030Z" level=info msg="StopPodSandbox for \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\" returns successfully" Apr 30 13:58:05.993568 containerd[1823]: time="2025-04-30T13:58:05.993559088Z" level=info msg="StopPodSandbox for \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\"" Apr 30 13:58:05.993606 containerd[1823]: time="2025-04-30T13:58:05.993599568Z" level=info msg="TearDown network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\" successfully" Apr 30 13:58:05.993625 containerd[1823]: time="2025-04-30T13:58:05.993606395Z" level=info msg="StopPodSandbox for \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\" returns successfully" Apr 30 13:58:05.993747 containerd[1823]: time="2025-04-30T13:58:05.993738665Z" level=info msg="StopPodSandbox for \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\"" Apr 30 13:58:05.993784 containerd[1823]: time="2025-04-30T13:58:05.993777297Z" level=info msg="TearDown network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" successfully" Apr 30 13:58:05.993802 containerd[1823]: time="2025-04-30T13:58:05.993784763Z" level=info msg="StopPodSandbox for \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" returns successfully" Apr 30 13:58:05.993896 containerd[1823]: time="2025-04-30T13:58:05.993886666Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\"" Apr 30 13:58:05.993929 containerd[1823]: time="2025-04-30T13:58:05.993922798Z" level=info 
msg="TearDown network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" successfully" Apr 30 13:58:05.993951 containerd[1823]: time="2025-04-30T13:58:05.993929154Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" returns successfully" Apr 30 13:58:05.994023 containerd[1823]: time="2025-04-30T13:58:05.994013528Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" Apr 30 13:58:05.994058 containerd[1823]: time="2025-04-30T13:58:05.994052173Z" level=info msg="TearDown network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" successfully" Apr 30 13:58:05.994075 containerd[1823]: time="2025-04-30T13:58:05.994058823Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" returns successfully" Apr 30 13:58:05.994243 containerd[1823]: time="2025-04-30T13:58:05.994225510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:5,}" Apr 30 13:58:05.995511 systemd[1]: run-netns-cni\x2db86120cb\x2de5ed\x2df619\x2dd486\x2d7ebab3e6df0b.mount: Deactivated successfully. 
Apr 30 13:58:05.999596 kubelet[3137]: I0430 13:58:05.999538 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ftgx7" podStartSLOduration=1.21138724 podStartE2EDuration="12.999517896s" podCreationTimestamp="2025-04-30 13:57:53 +0000 UTC" firstStartedPulling="2025-04-30 13:57:53.528475789 +0000 UTC m=+12.695604494" lastFinishedPulling="2025-04-30 13:58:05.31660644 +0000 UTC m=+24.483735150" observedRunningTime="2025-04-30 13:58:05.999022496 +0000 UTC m=+25.166151202" watchObservedRunningTime="2025-04-30 13:58:05.999517896 +0000 UTC m=+25.166646600" Apr 30 13:58:06.063889 systemd-networkd[1731]: calibcffb765081: Link UP Apr 30 13:58:06.063988 systemd-networkd[1731]: calibcffb765081: Gained carrier Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.007 [INFO][5508] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.017 [INFO][5508] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0 coredns-6f6b679f8f- kube-system ab8656fb-99ce-48bb-acd9-1526ae955046 650 0 2025-04-30 13:57:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-a-07b90b6465 coredns-6f6b679f8f-vfzr4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibcffb765081 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Namespace="kube-system" Pod="coredns-6f6b679f8f-vfzr4" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.017 [INFO][5508] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Namespace="kube-system" Pod="coredns-6f6b679f8f-vfzr4" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.037 [INFO][5635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" HandleID="k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Workload="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.042 [INFO][5635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" HandleID="k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Workload="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051960), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-a-07b90b6465", "pod":"coredns-6f6b679f8f-vfzr4", "timestamp":"2025-04-30 13:58:06.037540183 +0000 UTC"}, Hostname:"ci-4230.1.1-a-07b90b6465", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.042 [INFO][5635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.042 [INFO][5635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.042 [INFO][5635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-a-07b90b6465' Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.043 [INFO][5635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.047 [INFO][5635] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.050 [INFO][5635] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.051 [INFO][5635] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.052 [INFO][5635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.052 [INFO][5635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.053 [INFO][5635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9 Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.055 [INFO][5635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.057 [INFO][5635] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.80.65/26] block=192.168.80.64/26 handle="k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.057 [INFO][5635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.65/26] handle="k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.057 [INFO][5635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 13:58:06.069398 containerd[1823]: 2025-04-30 13:58:06.057 [INFO][5635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.65/26] IPv6=[] ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" HandleID="k8s-pod-network.fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Workload="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" Apr 30 13:58:06.069947 containerd[1823]: 2025-04-30 13:58:06.059 [INFO][5508] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Namespace="kube-system" Pod="coredns-6f6b679f8f-vfzr4" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ab8656fb-99ce-48bb-acd9-1526ae955046", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"", Pod:"coredns-6f6b679f8f-vfzr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcffb765081", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.069947 containerd[1823]: 2025-04-30 13:58:06.060 [INFO][5508] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.65/32] ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Namespace="kube-system" Pod="coredns-6f6b679f8f-vfzr4" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" Apr 30 13:58:06.069947 containerd[1823]: 2025-04-30 13:58:06.060 [INFO][5508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibcffb765081 ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Namespace="kube-system" Pod="coredns-6f6b679f8f-vfzr4" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" Apr 30 13:58:06.069947 containerd[1823]: 2025-04-30 13:58:06.063 [INFO][5508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Namespace="kube-system" Pod="coredns-6f6b679f8f-vfzr4" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" Apr 30 13:58:06.069947 containerd[1823]: 2025-04-30 13:58:06.064 [INFO][5508] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Namespace="kube-system" Pod="coredns-6f6b679f8f-vfzr4" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ab8656fb-99ce-48bb-acd9-1526ae955046", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9", Pod:"coredns-6f6b679f8f-vfzr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibcffb765081", MAC:"ae:e0:df:b6:e7:ce", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.069947 containerd[1823]: 2025-04-30 13:58:06.068 [INFO][5508] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9" Namespace="kube-system" Pod="coredns-6f6b679f8f-vfzr4" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--vfzr4-eth0" Apr 30 13:58:06.079658 containerd[1823]: time="2025-04-30T13:58:06.079411635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:58:06.079658 containerd[1823]: time="2025-04-30T13:58:06.079646682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:58:06.079658 containerd[1823]: time="2025-04-30T13:58:06.079654729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.079776 containerd[1823]: time="2025-04-30T13:58:06.079697250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.102531 systemd[1]: Started cri-containerd-fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9.scope - libcontainer container fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9. 
Apr 30 13:58:06.126776 containerd[1823]: time="2025-04-30T13:58:06.126723525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vfzr4,Uid:ab8656fb-99ce-48bb-acd9-1526ae955046,Namespace:kube-system,Attempt:5,} returns sandbox id \"fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9\"" Apr 30 13:58:06.127984 containerd[1823]: time="2025-04-30T13:58:06.127968137Z" level=info msg="CreateContainer within sandbox \"fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 13:58:06.132026 containerd[1823]: time="2025-04-30T13:58:06.132012215Z" level=info msg="CreateContainer within sandbox \"fb466b3d5ac70e530ceb6ab665eda1c1dc850565c4a493faba3f2bcdfa09c9a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb87b3101eb3f8d9743bb02c93ec862add022dadeb4938b0731d84a6b383d4a7\"" Apr 30 13:58:06.132177 containerd[1823]: time="2025-04-30T13:58:06.132165591Z" level=info msg="StartContainer for \"bb87b3101eb3f8d9743bb02c93ec862add022dadeb4938b0731d84a6b383d4a7\"" Apr 30 13:58:06.150379 systemd[1]: Started cri-containerd-bb87b3101eb3f8d9743bb02c93ec862add022dadeb4938b0731d84a6b383d4a7.scope - libcontainer container bb87b3101eb3f8d9743bb02c93ec862add022dadeb4938b0731d84a6b383d4a7. 
Apr 30 13:58:06.160045 systemd-networkd[1731]: calib9696b13e8d: Link UP Apr 30 13:58:06.160159 systemd-networkd[1731]: calib9696b13e8d: Gained carrier Apr 30 13:58:06.163861 containerd[1823]: time="2025-04-30T13:58:06.163813048Z" level=info msg="StartContainer for \"bb87b3101eb3f8d9743bb02c93ec862add022dadeb4938b0731d84a6b383d4a7\" returns successfully" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.012 [INFO][5527] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.018 [INFO][5527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0 calico-apiserver-5869549b56- calico-apiserver 1b537974-b389-45c2-aa3e-aa0f95c40835 655 0 2025-04-30 13:57:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5869549b56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-a-07b90b6465 calico-apiserver-5869549b56-tp96n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9696b13e8d [] []}} ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-tp96n" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.019 [INFO][5527] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-tp96n" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.036 
[INFO][5646] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" HandleID="k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.043 [INFO][5646] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" HandleID="k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bcb20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-a-07b90b6465", "pod":"calico-apiserver-5869549b56-tp96n", "timestamp":"2025-04-30 13:58:06.036028062 +0000 UTC"}, Hostname:"ci-4230.1.1-a-07b90b6465", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.043 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.057 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.057 [INFO][5646] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-a-07b90b6465' Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.144 [INFO][5646] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.147 [INFO][5646] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.150 [INFO][5646] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.151 [INFO][5646] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.152 [INFO][5646] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.152 [INFO][5646] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.153 [INFO][5646] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642 Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.155 [INFO][5646] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.158 [INFO][5646] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.80.66/26] block=192.168.80.64/26 handle="k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.158 [INFO][5646] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.66/26] handle="k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.158 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 13:58:06.165458 containerd[1823]: 2025-04-30 13:58:06.158 [INFO][5646] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.66/26] IPv6=[] ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" HandleID="k8s-pod-network.7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" Apr 30 13:58:06.165923 containerd[1823]: 2025-04-30 13:58:06.159 [INFO][5527] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-tp96n" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0", GenerateName:"calico-apiserver-5869549b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b537974-b389-45c2-aa3e-aa0f95c40835", ResourceVersion:"655", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5869549b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"", Pod:"calico-apiserver-5869549b56-tp96n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9696b13e8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.165923 containerd[1823]: 2025-04-30 13:58:06.159 [INFO][5527] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.66/32] ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-tp96n" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" Apr 30 13:58:06.165923 containerd[1823]: 2025-04-30 13:58:06.159 [INFO][5527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9696b13e8d ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-tp96n" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" Apr 30 13:58:06.165923 containerd[1823]: 2025-04-30 13:58:06.160 [INFO][5527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-tp96n" 
WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" Apr 30 13:58:06.165923 containerd[1823]: 2025-04-30 13:58:06.160 [INFO][5527] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-tp96n" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0", GenerateName:"calico-apiserver-5869549b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b537974-b389-45c2-aa3e-aa0f95c40835", ResourceVersion:"655", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5869549b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642", Pod:"calico-apiserver-5869549b56-tp96n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9696b13e8d", MAC:"06:ad:96:8b:f7:ea", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.165923 containerd[1823]: 2025-04-30 13:58:06.164 [INFO][5527] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-tp96n" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--tp96n-eth0" Apr 30 13:58:06.174876 containerd[1823]: time="2025-04-30T13:58:06.174826674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:58:06.175081 containerd[1823]: time="2025-04-30T13:58:06.174871509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:58:06.175101 containerd[1823]: time="2025-04-30T13:58:06.175079160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.175131 containerd[1823]: time="2025-04-30T13:58:06.175121764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.194530 systemd[1]: Started cri-containerd-7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642.scope - libcontainer container 7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642. 
Apr 30 13:58:06.219137 containerd[1823]: time="2025-04-30T13:58:06.219114352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-tp96n,Uid:1b537974-b389-45c2-aa3e-aa0f95c40835,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642\"" Apr 30 13:58:06.219848 containerd[1823]: time="2025-04-30T13:58:06.219836911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 13:58:06.287754 systemd-networkd[1731]: cali993114eb6a0: Link UP Apr 30 13:58:06.287933 systemd-networkd[1731]: cali993114eb6a0: Gained carrier Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.019 [INFO][5549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.026 [INFO][5549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0 calico-apiserver-5869549b56- calico-apiserver 4722e80e-9f8b-4616-8557-09179829c5a7 654 0 2025-04-30 13:57:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5869549b56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230.1.1-a-07b90b6465 calico-apiserver-5869549b56-fp9s8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali993114eb6a0 [] []}} ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-fp9s8" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.027 [INFO][5549] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-fp9s8" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.043 [INFO][5671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" HandleID="k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.048 [INFO][5671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" HandleID="k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000133f50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230.1.1-a-07b90b6465", "pod":"calico-apiserver-5869549b56-fp9s8", "timestamp":"2025-04-30 13:58:06.043987772 +0000 UTC"}, Hostname:"ci-4230.1.1-a-07b90b6465", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.048 [INFO][5671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.158 [INFO][5671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.158 [INFO][5671] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-a-07b90b6465' Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.246 [INFO][5671] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.255 [INFO][5671] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.263 [INFO][5671] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.267 [INFO][5671] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.271 [INFO][5671] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.271 [INFO][5671] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.275 [INFO][5671] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985 Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.282 [INFO][5671] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.285 [INFO][5671] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.80.67/26] block=192.168.80.64/26 handle="k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.285 [INFO][5671] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.67/26] handle="k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.285 [INFO][5671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 13:58:06.293475 containerd[1823]: 2025-04-30 13:58:06.285 [INFO][5671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.67/26] IPv6=[] ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" HandleID="k8s-pod-network.8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" Apr 30 13:58:06.294019 containerd[1823]: 2025-04-30 13:58:06.286 [INFO][5549] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-fp9s8" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0", GenerateName:"calico-apiserver-5869549b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"4722e80e-9f8b-4616-8557-09179829c5a7", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5869549b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"", Pod:"calico-apiserver-5869549b56-fp9s8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali993114eb6a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.294019 containerd[1823]: 2025-04-30 13:58:06.286 [INFO][5549] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.67/32] ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-fp9s8" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" Apr 30 13:58:06.294019 containerd[1823]: 2025-04-30 13:58:06.286 [INFO][5549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali993114eb6a0 ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-fp9s8" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" Apr 30 13:58:06.294019 containerd[1823]: 2025-04-30 13:58:06.287 [INFO][5549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-fp9s8" 
WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" Apr 30 13:58:06.294019 containerd[1823]: 2025-04-30 13:58:06.288 [INFO][5549] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-fp9s8" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0", GenerateName:"calico-apiserver-5869549b56-", Namespace:"calico-apiserver", SelfLink:"", UID:"4722e80e-9f8b-4616-8557-09179829c5a7", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5869549b56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985", Pod:"calico-apiserver-5869549b56-fp9s8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali993114eb6a0", MAC:"be:2e:ca:d1:71:af", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.294019 containerd[1823]: 2025-04-30 13:58:06.292 [INFO][5549] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985" Namespace="calico-apiserver" Pod="calico-apiserver-5869549b56-fp9s8" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--apiserver--5869549b56--fp9s8-eth0" Apr 30 13:58:06.304079 containerd[1823]: time="2025-04-30T13:58:06.304006398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:58:06.304079 containerd[1823]: time="2025-04-30T13:58:06.304036344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:58:06.304079 containerd[1823]: time="2025-04-30T13:58:06.304042942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.304178 containerd[1823]: time="2025-04-30T13:58:06.304084060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.324700 systemd[1]: Started cri-containerd-8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985.scope - libcontainer container 8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985. Apr 30 13:58:06.388872 systemd-networkd[1731]: calid3e6c7c0f07: Link UP Apr 30 13:58:06.389106 systemd-networkd[1731]: calid3e6c7c0f07: Gained carrier Apr 30 13:58:06.392746 systemd[1]: run-netns-cni\x2d23043112\x2dad76\x2d9855\x2d6944\x2d14e34e7bac71.mount: Deactivated successfully. 
Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.020 [INFO][5577] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.027 [INFO][5577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0 calico-kube-controllers-79b68cbf8- calico-system 12494da2-c4f7-4daa-8094-3e959335689c 656 0 2025-04-30 13:57:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79b68cbf8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230.1.1-a-07b90b6465 calico-kube-controllers-79b68cbf8-dwcxs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3e6c7c0f07 [] []}} ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Namespace="calico-system" Pod="calico-kube-controllers-79b68cbf8-dwcxs" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.027 [INFO][5577] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Namespace="calico-system" Pod="calico-kube-controllers-79b68cbf8-dwcxs" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.044 [INFO][5669] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" HandleID="k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" 
Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.049 [INFO][5669] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" HandleID="k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00022bf40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-a-07b90b6465", "pod":"calico-kube-controllers-79b68cbf8-dwcxs", "timestamp":"2025-04-30 13:58:06.044035753 +0000 UTC"}, Hostname:"ci-4230.1.1-a-07b90b6465", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.049 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.285 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.285 [INFO][5669] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-a-07b90b6465' Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.347 [INFO][5669] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.358 [INFO][5669] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.368 [INFO][5669] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.371 [INFO][5669] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.374 [INFO][5669] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.374 [INFO][5669] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.376 [INFO][5669] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.381 [INFO][5669] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.386 [INFO][5669] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.80.68/26] block=192.168.80.64/26 handle="k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.386 [INFO][5669] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.68/26] handle="k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.386 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 13:58:06.395834 containerd[1823]: 2025-04-30 13:58:06.386 [INFO][5669] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.68/26] IPv6=[] ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" HandleID="k8s-pod-network.b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Workload="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" Apr 30 13:58:06.396410 containerd[1823]: 2025-04-30 13:58:06.387 [INFO][5577] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Namespace="calico-system" Pod="calico-kube-controllers-79b68cbf8-dwcxs" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0", GenerateName:"calico-kube-controllers-79b68cbf8-", Namespace:"calico-system", SelfLink:"", UID:"12494da2-c4f7-4daa-8094-3e959335689c", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79b68cbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"", Pod:"calico-kube-controllers-79b68cbf8-dwcxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3e6c7c0f07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.396410 containerd[1823]: 2025-04-30 13:58:06.387 [INFO][5577] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.68/32] ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Namespace="calico-system" Pod="calico-kube-controllers-79b68cbf8-dwcxs" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" Apr 30 13:58:06.396410 containerd[1823]: 2025-04-30 13:58:06.387 [INFO][5577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3e6c7c0f07 ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Namespace="calico-system" Pod="calico-kube-controllers-79b68cbf8-dwcxs" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" Apr 30 13:58:06.396410 containerd[1823]: 2025-04-30 13:58:06.389 [INFO][5577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" 
Namespace="calico-system" Pod="calico-kube-controllers-79b68cbf8-dwcxs" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" Apr 30 13:58:06.396410 containerd[1823]: 2025-04-30 13:58:06.389 [INFO][5577] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Namespace="calico-system" Pod="calico-kube-controllers-79b68cbf8-dwcxs" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0", GenerateName:"calico-kube-controllers-79b68cbf8-", Namespace:"calico-system", SelfLink:"", UID:"12494da2-c4f7-4daa-8094-3e959335689c", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79b68cbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a", Pod:"calico-kube-controllers-79b68cbf8-dwcxs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3e6c7c0f07", MAC:"e6:cd:bc:fd:62:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.396410 containerd[1823]: 2025-04-30 13:58:06.394 [INFO][5577] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a" Namespace="calico-system" Pod="calico-kube-controllers-79b68cbf8-dwcxs" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-calico--kube--controllers--79b68cbf8--dwcxs-eth0" Apr 30 13:58:06.399225 containerd[1823]: time="2025-04-30T13:58:06.399204439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5869549b56-fp9s8,Uid:4722e80e-9f8b-4616-8557-09179829c5a7,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985\"" Apr 30 13:58:06.405852 containerd[1823]: time="2025-04-30T13:58:06.405776432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:58:06.405852 containerd[1823]: time="2025-04-30T13:58:06.405803873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:58:06.405852 containerd[1823]: time="2025-04-30T13:58:06.405810538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.405852 containerd[1823]: time="2025-04-30T13:58:06.405847438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.434570 systemd[1]: Started cri-containerd-b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a.scope - libcontainer container b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a. 
Apr 30 13:58:06.458157 containerd[1823]: time="2025-04-30T13:58:06.458134952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79b68cbf8-dwcxs,Uid:12494da2-c4f7-4daa-8094-3e959335689c,Namespace:calico-system,Attempt:5,} returns sandbox id \"b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a\"" Apr 30 13:58:06.475472 systemd-networkd[1731]: cali32d925ac595: Link UP Apr 30 13:58:06.475629 systemd-networkd[1731]: cali32d925ac595: Gained carrier Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.020 [INFO][5579] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.028 [INFO][5579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0 coredns-6f6b679f8f- kube-system 08e4b302-96b3-481a-a360-157647755634 653 0 2025-04-30 13:57:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230.1.1-a-07b90b6465 coredns-6f6b679f8f-5qrfx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali32d925ac595 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Namespace="kube-system" Pod="coredns-6f6b679f8f-5qrfx" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.028 [INFO][5579] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Namespace="kube-system" Pod="coredns-6f6b679f8f-5qrfx" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 
13:58:06.044 [INFO][5678] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" HandleID="k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Workload="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.050 [INFO][5678] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" HandleID="k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Workload="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000219180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230.1.1-a-07b90b6465", "pod":"coredns-6f6b679f8f-5qrfx", "timestamp":"2025-04-30 13:58:06.04415231 +0000 UTC"}, Hostname:"ci-4230.1.1-a-07b90b6465", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.050 [INFO][5678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.386 [INFO][5678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.386 [INFO][5678] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-a-07b90b6465' Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.446 [INFO][5678] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.458 [INFO][5678] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.464 [INFO][5678] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.465 [INFO][5678] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.467 [INFO][5678] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.467 [INFO][5678] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.468 [INFO][5678] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691 Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.470 [INFO][5678] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.473 [INFO][5678] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.80.69/26] block=192.168.80.64/26 handle="k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.473 [INFO][5678] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.69/26] handle="k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.473 [INFO][5678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 13:58:06.480982 containerd[1823]: 2025-04-30 13:58:06.473 [INFO][5678] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.69/26] IPv6=[] ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" HandleID="k8s-pod-network.91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Workload="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" Apr 30 13:58:06.481459 containerd[1823]: 2025-04-30 13:58:06.474 [INFO][5579] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Namespace="kube-system" Pod="coredns-6f6b679f8f-5qrfx" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"08e4b302-96b3-481a-a360-157647755634", ResourceVersion:"653", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"", Pod:"coredns-6f6b679f8f-5qrfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32d925ac595", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.481459 containerd[1823]: 2025-04-30 13:58:06.474 [INFO][5579] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.69/32] ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Namespace="kube-system" Pod="coredns-6f6b679f8f-5qrfx" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" Apr 30 13:58:06.481459 containerd[1823]: 2025-04-30 13:58:06.474 [INFO][5579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32d925ac595 ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Namespace="kube-system" Pod="coredns-6f6b679f8f-5qrfx" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" Apr 30 13:58:06.481459 containerd[1823]: 2025-04-30 13:58:06.475 [INFO][5579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Namespace="kube-system" Pod="coredns-6f6b679f8f-5qrfx" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" Apr 30 13:58:06.481459 containerd[1823]: 2025-04-30 13:58:06.475 [INFO][5579] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Namespace="kube-system" Pod="coredns-6f6b679f8f-5qrfx" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"08e4b302-96b3-481a-a360-157647755634", ResourceVersion:"653", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691", Pod:"coredns-6f6b679f8f-5qrfx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32d925ac595", MAC:"6a:23:2e:f4:01:c9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.481459 containerd[1823]: 2025-04-30 13:58:06.480 [INFO][5579] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691" Namespace="kube-system" Pod="coredns-6f6b679f8f-5qrfx" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-coredns--6f6b679f8f--5qrfx-eth0" Apr 30 13:58:06.490462 containerd[1823]: time="2025-04-30T13:58:06.490364564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:58:06.490462 containerd[1823]: time="2025-04-30T13:58:06.490392554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:58:06.490667 containerd[1823]: time="2025-04-30T13:58:06.490582359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.490667 containerd[1823]: time="2025-04-30T13:58:06.490628763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.509416 systemd[1]: Started cri-containerd-91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691.scope - libcontainer container 91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691. 
Apr 30 13:58:06.535079 containerd[1823]: time="2025-04-30T13:58:06.535054860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qrfx,Uid:08e4b302-96b3-481a-a360-157647755634,Namespace:kube-system,Attempt:5,} returns sandbox id \"91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691\"" Apr 30 13:58:06.536331 containerd[1823]: time="2025-04-30T13:58:06.536315868Z" level=info msg="CreateContainer within sandbox \"91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 13:58:06.541274 containerd[1823]: time="2025-04-30T13:58:06.541200519Z" level=info msg="CreateContainer within sandbox \"91c66b178b355e94af8f9e09e9923beb91f67357e05abc18b819f8c328923691\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"671d16a508df06b5f8493feadfa71f7a5597051e92d6dcbd7a5d1b515d8b5138\"" Apr 30 13:58:06.541429 containerd[1823]: time="2025-04-30T13:58:06.541415131Z" level=info msg="StartContainer for \"671d16a508df06b5f8493feadfa71f7a5597051e92d6dcbd7a5d1b515d8b5138\"" Apr 30 13:58:06.566349 systemd[1]: Started cri-containerd-671d16a508df06b5f8493feadfa71f7a5597051e92d6dcbd7a5d1b515d8b5138.scope - libcontainer container 671d16a508df06b5f8493feadfa71f7a5597051e92d6dcbd7a5d1b515d8b5138. 
Apr 30 13:58:06.576729 systemd-networkd[1731]: cali66952bf45eb: Link UP Apr 30 13:58:06.576901 systemd-networkd[1731]: cali66952bf45eb: Gained carrier Apr 30 13:58:06.581394 containerd[1823]: time="2025-04-30T13:58:06.581367943Z" level=info msg="StartContainer for \"671d16a508df06b5f8493feadfa71f7a5597051e92d6dcbd7a5d1b515d8b5138\" returns successfully" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.021 [INFO][5561] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.028 [INFO][5561] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0 csi-node-driver- calico-system f155cb52-e455-42c3-b112-e9f1dc1f3da7 581 0 2025-04-30 13:57:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230.1.1-a-07b90b6465 csi-node-driver-gc5mn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66952bf45eb [] []}} ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Namespace="calico-system" Pod="csi-node-driver-gc5mn" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.028 [INFO][5561] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Namespace="calico-system" Pod="csi-node-driver-gc5mn" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.046 [INFO][5686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" HandleID="k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Workload="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.050 [INFO][5686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" HandleID="k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Workload="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000219e70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230.1.1-a-07b90b6465", "pod":"csi-node-driver-gc5mn", "timestamp":"2025-04-30 13:58:06.046629212 +0000 UTC"}, Hostname:"ci-4230.1.1-a-07b90b6465", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.051 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.473 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.473 [INFO][5686] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230.1.1-a-07b90b6465' Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.546 [INFO][5686] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.557 [INFO][5686] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.565 [INFO][5686] ipam/ipam.go 489: Trying affinity for 192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.566 [INFO][5686] ipam/ipam.go 155: Attempting to load block cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.567 [INFO][5686] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.80.64/26 host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.567 [INFO][5686] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.80.64/26 handle="k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.568 [INFO][5686] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.570 [INFO][5686] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.80.64/26 handle="k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.574 [INFO][5686] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.80.70/26] block=192.168.80.64/26 handle="k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.574 [INFO][5686] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.80.70/26] handle="k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" host="ci-4230.1.1-a-07b90b6465" Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.574 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 30 13:58:06.584252 containerd[1823]: 2025-04-30 13:58:06.574 [INFO][5686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.80.70/26] IPv6=[] ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" HandleID="k8s-pod-network.e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Workload="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" Apr 30 13:58:06.584802 containerd[1823]: 2025-04-30 13:58:06.575 [INFO][5561] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Namespace="calico-system" Pod="csi-node-driver-gc5mn" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f155cb52-e455-42c3-b112-e9f1dc1f3da7", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"", Pod:"csi-node-driver-gc5mn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66952bf45eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.584802 containerd[1823]: 2025-04-30 13:58:06.575 [INFO][5561] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.80.70/32] ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Namespace="calico-system" Pod="csi-node-driver-gc5mn" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" Apr 30 13:58:06.584802 containerd[1823]: 2025-04-30 13:58:06.575 [INFO][5561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66952bf45eb ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Namespace="calico-system" Pod="csi-node-driver-gc5mn" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" Apr 30 13:58:06.584802 containerd[1823]: 2025-04-30 13:58:06.576 [INFO][5561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Namespace="calico-system" Pod="csi-node-driver-gc5mn" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" Apr 30 13:58:06.584802 containerd[1823]: 2025-04-30 13:58:06.576 
[INFO][5561] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Namespace="calico-system" Pod="csi-node-driver-gc5mn" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f155cb52-e455-42c3-b112-e9f1dc1f3da7", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 57, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230.1.1-a-07b90b6465", ContainerID:"e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade", Pod:"csi-node-driver-gc5mn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66952bf45eb", MAC:"0a:5a:ac:05:37:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:58:06.584802 containerd[1823]: 2025-04-30 13:58:06.583 [INFO][5561] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade" Namespace="calico-system" Pod="csi-node-driver-gc5mn" WorkloadEndpoint="ci--4230.1.1--a--07b90b6465-k8s-csi--node--driver--gc5mn-eth0" Apr 30 13:58:06.594530 containerd[1823]: time="2025-04-30T13:58:06.594484521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:58:06.594530 containerd[1823]: time="2025-04-30T13:58:06.594517050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:58:06.594530 containerd[1823]: time="2025-04-30T13:58:06.594523661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.594659 containerd[1823]: time="2025-04-30T13:58:06.594566351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:58:06.617687 systemd[1]: Started cri-containerd-e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade.scope - libcontainer container e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade. 
Apr 30 13:58:06.634304 containerd[1823]: time="2025-04-30T13:58:06.634268838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gc5mn,Uid:f155cb52-e455-42c3-b112-e9f1dc1f3da7,Namespace:calico-system,Attempt:4,} returns sandbox id \"e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade\"" Apr 30 13:58:07.040446 kubelet[3137]: I0430 13:58:07.040312 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5qrfx" podStartSLOduration=20.040230482 podStartE2EDuration="20.040230482s" podCreationTimestamp="2025-04-30 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:58:07.039376598 +0000 UTC m=+26.206505401" watchObservedRunningTime="2025-04-30 13:58:07.040230482 +0000 UTC m=+26.207359240" Apr 30 13:58:07.070033 kubelet[3137]: I0430 13:58:07.069979 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vfzr4" podStartSLOduration=20.069960851 podStartE2EDuration="20.069960851s" podCreationTimestamp="2025-04-30 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:58:07.069680716 +0000 UTC m=+26.236809453" watchObservedRunningTime="2025-04-30 13:58:07.069960851 +0000 UTC m=+26.237089568" Apr 30 13:58:07.655340 systemd-networkd[1731]: calid3e6c7c0f07: Gained IPv6LL Apr 30 13:58:07.719459 systemd-networkd[1731]: cali993114eb6a0: Gained IPv6LL Apr 30 13:58:08.039326 systemd-networkd[1731]: calibcffb765081: Gained IPv6LL Apr 30 13:58:08.039604 systemd-networkd[1731]: calib9696b13e8d: Gained IPv6LL Apr 30 13:58:08.039785 systemd-networkd[1731]: cali66952bf45eb: Gained IPv6LL Apr 30 13:58:08.167469 systemd-networkd[1731]: cali32d925ac595: Gained IPv6LL Apr 30 13:58:08.896762 containerd[1823]: time="2025-04-30T13:58:08.896710319Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:08.896983 containerd[1823]: time="2025-04-30T13:58:08.896823759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" Apr 30 13:58:08.897267 containerd[1823]: time="2025-04-30T13:58:08.897254639Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:08.898327 containerd[1823]: time="2025-04-30T13:58:08.898306148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:08.898697 containerd[1823]: time="2025-04-30T13:58:08.898681960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 2.67882973s" Apr 30 13:58:08.898746 containerd[1823]: time="2025-04-30T13:58:08.898697878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 13:58:08.899182 containerd[1823]: time="2025-04-30T13:58:08.899171604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" Apr 30 13:58:08.899669 containerd[1823]: time="2025-04-30T13:58:08.899656946Z" level=info msg="CreateContainer within sandbox \"7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 13:58:08.903511 containerd[1823]: time="2025-04-30T13:58:08.903494159Z" level=info msg="CreateContainer within sandbox \"7e148f9a4d3b95faabd962688a616872f596381681c94531d10c69fdae4f6642\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83bdf2e95b8cd4cabbabaa809b23931a643a2a0a6376e690b4a4a27d00447ae0\"" Apr 30 13:58:08.903731 containerd[1823]: time="2025-04-30T13:58:08.903693694Z" level=info msg="StartContainer for \"83bdf2e95b8cd4cabbabaa809b23931a643a2a0a6376e690b4a4a27d00447ae0\"" Apr 30 13:58:08.930780 systemd[1]: Started cri-containerd-83bdf2e95b8cd4cabbabaa809b23931a643a2a0a6376e690b4a4a27d00447ae0.scope - libcontainer container 83bdf2e95b8cd4cabbabaa809b23931a643a2a0a6376e690b4a4a27d00447ae0. Apr 30 13:58:09.017102 containerd[1823]: time="2025-04-30T13:58:09.017069484Z" level=info msg="StartContainer for \"83bdf2e95b8cd4cabbabaa809b23931a643a2a0a6376e690b4a4a27d00447ae0\" returns successfully" Apr 30 13:58:09.040820 kubelet[3137]: I0430 13:58:09.040779 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5869549b56-tp96n" podStartSLOduration=13.361353912 podStartE2EDuration="16.04076624s" podCreationTimestamp="2025-04-30 13:57:53 +0000 UTC" firstStartedPulling="2025-04-30 13:58:06.21971199 +0000 UTC m=+25.386840698" lastFinishedPulling="2025-04-30 13:58:08.899124321 +0000 UTC m=+28.066253026" observedRunningTime="2025-04-30 13:58:09.040570786 +0000 UTC m=+28.207699494" watchObservedRunningTime="2025-04-30 13:58:09.04076624 +0000 UTC m=+28.207894945" Apr 30 13:58:09.859284 containerd[1823]: time="2025-04-30T13:58:09.859252824Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:09.859456 containerd[1823]: time="2025-04-30T13:58:09.859399437Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" Apr 30 13:58:09.861336 containerd[1823]: time="2025-04-30T13:58:09.861300626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 962.114794ms" Apr 30 13:58:09.861336 containerd[1823]: time="2025-04-30T13:58:09.861319099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" Apr 30 13:58:09.861865 containerd[1823]: time="2025-04-30T13:58:09.861807695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" Apr 30 13:58:09.862378 containerd[1823]: time="2025-04-30T13:58:09.862330380Z" level=info msg="CreateContainer within sandbox \"8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 30 13:58:09.867309 containerd[1823]: time="2025-04-30T13:58:09.867267952Z" level=info msg="CreateContainer within sandbox \"8b9464fb1b2468954cf1ca1497fe9d044649c1b25fc3bb0877997b7cd2ff7985\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1eacab85a62cacd1b2aff11a393d35d92ccc297f9fdd46e876e64f243727e99\"" Apr 30 13:58:09.867573 containerd[1823]: time="2025-04-30T13:58:09.867529277Z" level=info msg="StartContainer for \"d1eacab85a62cacd1b2aff11a393d35d92ccc297f9fdd46e876e64f243727e99\"" Apr 30 13:58:09.893450 systemd[1]: Started cri-containerd-d1eacab85a62cacd1b2aff11a393d35d92ccc297f9fdd46e876e64f243727e99.scope - libcontainer container d1eacab85a62cacd1b2aff11a393d35d92ccc297f9fdd46e876e64f243727e99. 
Apr 30 13:58:09.930165 containerd[1823]: time="2025-04-30T13:58:09.930108725Z" level=info msg="StartContainer for \"d1eacab85a62cacd1b2aff11a393d35d92ccc297f9fdd46e876e64f243727e99\" returns successfully" Apr 30 13:58:10.035789 kubelet[3137]: I0430 13:58:10.035773 3137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:58:10.041818 kubelet[3137]: I0430 13:58:10.041769 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5869549b56-fp9s8" podStartSLOduration=13.579814519 podStartE2EDuration="17.041756571s" podCreationTimestamp="2025-04-30 13:57:53 +0000 UTC" firstStartedPulling="2025-04-30 13:58:06.399793554 +0000 UTC m=+25.566922260" lastFinishedPulling="2025-04-30 13:58:09.861735606 +0000 UTC m=+29.028864312" observedRunningTime="2025-04-30 13:58:10.041574601 +0000 UTC m=+29.208703307" watchObservedRunningTime="2025-04-30 13:58:10.041756571 +0000 UTC m=+29.208885275" Apr 30 13:58:11.037480 kubelet[3137]: I0430 13:58:11.037464 3137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:58:12.716823 containerd[1823]: time="2025-04-30T13:58:12.716771689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:12.717058 containerd[1823]: time="2025-04-30T13:58:12.716914520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" Apr 30 13:58:12.717357 containerd[1823]: time="2025-04-30T13:58:12.717316547Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:12.718334 containerd[1823]: time="2025-04-30T13:58:12.718293583Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:12.718755 containerd[1823]: time="2025-04-30T13:58:12.718713955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 2.856888229s" Apr 30 13:58:12.718755 containerd[1823]: time="2025-04-30T13:58:12.718730001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" Apr 30 13:58:12.719308 containerd[1823]: time="2025-04-30T13:58:12.719272156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 13:58:12.722260 containerd[1823]: time="2025-04-30T13:58:12.722244509Z" level=info msg="CreateContainer within sandbox \"b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 30 13:58:12.728506 containerd[1823]: time="2025-04-30T13:58:12.728463294Z" level=info msg="CreateContainer within sandbox \"b7d08d420f95c4b3a74ef79fa99903389384e74b85fb99c6aea2c6853558f95a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"974fb9aeb05878195187d72e201e8d787bbb13b144c14a6bbc71afad08d0e34d\"" Apr 30 13:58:12.728724 containerd[1823]: time="2025-04-30T13:58:12.728712081Z" level=info msg="StartContainer for \"974fb9aeb05878195187d72e201e8d787bbb13b144c14a6bbc71afad08d0e34d\"" Apr 30 13:58:12.753426 systemd[1]: Started cri-containerd-974fb9aeb05878195187d72e201e8d787bbb13b144c14a6bbc71afad08d0e34d.scope - 
libcontainer container 974fb9aeb05878195187d72e201e8d787bbb13b144c14a6bbc71afad08d0e34d. Apr 30 13:58:12.777319 containerd[1823]: time="2025-04-30T13:58:12.777298772Z" level=info msg="StartContainer for \"974fb9aeb05878195187d72e201e8d787bbb13b144c14a6bbc71afad08d0e34d\" returns successfully" Apr 30 13:58:13.079293 kubelet[3137]: I0430 13:58:13.079118 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79b68cbf8-dwcxs" podStartSLOduration=13.81855088 podStartE2EDuration="20.079079314s" podCreationTimestamp="2025-04-30 13:57:53 +0000 UTC" firstStartedPulling="2025-04-30 13:58:06.458678308 +0000 UTC m=+25.625807014" lastFinishedPulling="2025-04-30 13:58:12.719206743 +0000 UTC m=+31.886335448" observedRunningTime="2025-04-30 13:58:13.077799319 +0000 UTC m=+32.244928094" watchObservedRunningTime="2025-04-30 13:58:13.079079314 +0000 UTC m=+32.246208071" Apr 30 13:58:14.056990 kubelet[3137]: I0430 13:58:14.056937 3137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:58:14.403223 containerd[1823]: time="2025-04-30T13:58:14.403128820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:14.403422 containerd[1823]: time="2025-04-30T13:58:14.403242845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 13:58:14.403714 containerd[1823]: time="2025-04-30T13:58:14.403673710Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:14.404853 containerd[1823]: time="2025-04-30T13:58:14.404811827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 30 13:58:14.405351 containerd[1823]: time="2025-04-30T13:58:14.405288492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.686002154s" Apr 30 13:58:14.405351 containerd[1823]: time="2025-04-30T13:58:14.405303831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 13:58:14.406147 containerd[1823]: time="2025-04-30T13:58:14.406136285Z" level=info msg="CreateContainer within sandbox \"e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 13:58:14.411569 containerd[1823]: time="2025-04-30T13:58:14.411521735Z" level=info msg="CreateContainer within sandbox \"e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d07a70007f02f76ddeadf4c37c444ded83d46021f262fee91ec2a552b7edfbb3\"" Apr 30 13:58:14.411784 containerd[1823]: time="2025-04-30T13:58:14.411728028Z" level=info msg="StartContainer for \"d07a70007f02f76ddeadf4c37c444ded83d46021f262fee91ec2a552b7edfbb3\"" Apr 30 13:58:14.434383 systemd[1]: Started cri-containerd-d07a70007f02f76ddeadf4c37c444ded83d46021f262fee91ec2a552b7edfbb3.scope - libcontainer container d07a70007f02f76ddeadf4c37c444ded83d46021f262fee91ec2a552b7edfbb3. 
Apr 30 13:58:14.448343 containerd[1823]: time="2025-04-30T13:58:14.448321238Z" level=info msg="StartContainer for \"d07a70007f02f76ddeadf4c37c444ded83d46021f262fee91ec2a552b7edfbb3\" returns successfully" Apr 30 13:58:14.448878 containerd[1823]: time="2025-04-30T13:58:14.448863521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 13:58:14.878092 kubelet[3137]: I0430 13:58:14.878071 3137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:58:15.891248 kernel: bpftool[6855]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 13:58:15.970920 containerd[1823]: time="2025-04-30T13:58:15.970871217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:15.971126 containerd[1823]: time="2025-04-30T13:58:15.971101856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 13:58:15.971457 containerd[1823]: time="2025-04-30T13:58:15.971416445Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:15.972399 containerd[1823]: time="2025-04-30T13:58:15.972364010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:58:15.973143 containerd[1823]: time="2025-04-30T13:58:15.973100169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.524216512s" Apr 30 13:58:15.973143 containerd[1823]: time="2025-04-30T13:58:15.973116102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 13:58:15.974076 containerd[1823]: time="2025-04-30T13:58:15.974035997Z" level=info msg="CreateContainer within sandbox \"e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 13:58:15.981037 containerd[1823]: time="2025-04-30T13:58:15.980994055Z" level=info msg="CreateContainer within sandbox \"e3ff030db52fb52484c55fcceeb374931c3e72651124fa986c5055dd29edcade\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"65dd77286520763aae4a261f8f3dd9bd9f05ed8556e629b36adad4b88a9f149f\"" Apr 30 13:58:15.981275 containerd[1823]: time="2025-04-30T13:58:15.981231030Z" level=info msg="StartContainer for \"65dd77286520763aae4a261f8f3dd9bd9f05ed8556e629b36adad4b88a9f149f\"" Apr 30 13:58:16.013429 systemd[1]: Started cri-containerd-65dd77286520763aae4a261f8f3dd9bd9f05ed8556e629b36adad4b88a9f149f.scope - libcontainer container 65dd77286520763aae4a261f8f3dd9bd9f05ed8556e629b36adad4b88a9f149f. 
Apr 30 13:58:16.028368 containerd[1823]: time="2025-04-30T13:58:16.028346930Z" level=info msg="StartContainer for \"65dd77286520763aae4a261f8f3dd9bd9f05ed8556e629b36adad4b88a9f149f\" returns successfully" Apr 30 13:58:16.041113 systemd-networkd[1731]: vxlan.calico: Link UP Apr 30 13:58:16.041116 systemd-networkd[1731]: vxlan.calico: Gained carrier Apr 30 13:58:16.082569 kubelet[3137]: I0430 13:58:16.082476 3137 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gc5mn" podStartSLOduration=13.743976526 podStartE2EDuration="23.08246316s" podCreationTimestamp="2025-04-30 13:57:53 +0000 UTC" firstStartedPulling="2025-04-30 13:58:06.634981472 +0000 UTC m=+25.802110178" lastFinishedPulling="2025-04-30 13:58:15.973468106 +0000 UTC m=+35.140596812" observedRunningTime="2025-04-30 13:58:16.082251899 +0000 UTC m=+35.249380606" watchObservedRunningTime="2025-04-30 13:58:16.08246316 +0000 UTC m=+35.249591865" Apr 30 13:58:16.913412 kubelet[3137]: I0430 13:58:16.913312 3137 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 13:58:16.913412 kubelet[3137]: I0430 13:58:16.913375 3137 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 13:58:17.511581 systemd-networkd[1731]: vxlan.calico: Gained IPv6LL Apr 30 13:58:31.278924 kubelet[3137]: I0430 13:58:31.278779 3137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:58:35.754893 kubelet[3137]: I0430 13:58:35.754662 3137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:58:40.882127 containerd[1823]: time="2025-04-30T13:58:40.882046147Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" Apr 30 13:58:40.883209 containerd[1823]: 
time="2025-04-30T13:58:40.882332211Z" level=info msg="TearDown network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" successfully" Apr 30 13:58:40.883209 containerd[1823]: time="2025-04-30T13:58:40.882371868Z" level=info msg="StopPodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" returns successfully" Apr 30 13:58:40.883493 containerd[1823]: time="2025-04-30T13:58:40.883283865Z" level=info msg="RemovePodSandbox for \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" Apr 30 13:58:40.883493 containerd[1823]: time="2025-04-30T13:58:40.883384186Z" level=info msg="Forcibly stopping sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\"" Apr 30 13:58:40.883700 containerd[1823]: time="2025-04-30T13:58:40.883576906Z" level=info msg="TearDown network for sandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" successfully" Apr 30 13:58:40.891945 containerd[1823]: time="2025-04-30T13:58:40.891910822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.891945 containerd[1823]: time="2025-04-30T13:58:40.891939926Z" level=info msg="RemovePodSandbox \"5d0112d925ed219d75423c3bc3a4731dd3358676a35e0c4d85fe6939dcda8ade\" returns successfully" Apr 30 13:58:40.892226 containerd[1823]: time="2025-04-30T13:58:40.892182255Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\"" Apr 30 13:58:40.892319 containerd[1823]: time="2025-04-30T13:58:40.892247761Z" level=info msg="TearDown network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" successfully" Apr 30 13:58:40.892319 containerd[1823]: time="2025-04-30T13:58:40.892283225Z" level=info msg="StopPodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" returns successfully" Apr 30 13:58:40.892446 containerd[1823]: time="2025-04-30T13:58:40.892404491Z" level=info msg="RemovePodSandbox for \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\"" Apr 30 13:58:40.892446 containerd[1823]: time="2025-04-30T13:58:40.892417927Z" level=info msg="Forcibly stopping sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\"" Apr 30 13:58:40.892499 containerd[1823]: time="2025-04-30T13:58:40.892453345Z" level=info msg="TearDown network for sandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" successfully" Apr 30 13:58:40.893679 containerd[1823]: time="2025-04-30T13:58:40.893626685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.893679 containerd[1823]: time="2025-04-30T13:58:40.893645929Z" level=info msg="RemovePodSandbox \"23965347d0a6323cc922c97e63cf8c4313014c8f3931b78a006e92ecbf52589a\" returns successfully" Apr 30 13:58:40.893808 containerd[1823]: time="2025-04-30T13:58:40.893767735Z" level=info msg="StopPodSandbox for \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\"" Apr 30 13:58:40.893843 containerd[1823]: time="2025-04-30T13:58:40.893817949Z" level=info msg="TearDown network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" successfully" Apr 30 13:58:40.893843 containerd[1823]: time="2025-04-30T13:58:40.893824363Z" level=info msg="StopPodSandbox for \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" returns successfully" Apr 30 13:58:40.893958 containerd[1823]: time="2025-04-30T13:58:40.893919611Z" level=info msg="RemovePodSandbox for \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\"" Apr 30 13:58:40.893958 containerd[1823]: time="2025-04-30T13:58:40.893930647Z" level=info msg="Forcibly stopping sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\"" Apr 30 13:58:40.894010 containerd[1823]: time="2025-04-30T13:58:40.893958644Z" level=info msg="TearDown network for sandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" successfully" Apr 30 13:58:40.895100 containerd[1823]: time="2025-04-30T13:58:40.895060280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.895100 containerd[1823]: time="2025-04-30T13:58:40.895078467Z" level=info msg="RemovePodSandbox \"5df52cf6ab26a6c0e33ce57e73a244646f9d010b8c926a88e85ebd114ae3af7f\" returns successfully" Apr 30 13:58:40.895220 containerd[1823]: time="2025-04-30T13:58:40.895208425Z" level=info msg="StopPodSandbox for \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\"" Apr 30 13:58:40.895269 containerd[1823]: time="2025-04-30T13:58:40.895256303Z" level=info msg="TearDown network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\" successfully" Apr 30 13:58:40.895269 containerd[1823]: time="2025-04-30T13:58:40.895263742Z" level=info msg="StopPodSandbox for \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\" returns successfully" Apr 30 13:58:40.895401 containerd[1823]: time="2025-04-30T13:58:40.895367699Z" level=info msg="RemovePodSandbox for \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\"" Apr 30 13:58:40.895401 containerd[1823]: time="2025-04-30T13:58:40.895379012Z" level=info msg="Forcibly stopping sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\"" Apr 30 13:58:40.895445 containerd[1823]: time="2025-04-30T13:58:40.895410739Z" level=info msg="TearDown network for sandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\" successfully" Apr 30 13:58:40.896501 containerd[1823]: time="2025-04-30T13:58:40.896491464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.896527 containerd[1823]: time="2025-04-30T13:58:40.896509822Z" level=info msg="RemovePodSandbox \"6de8f0f027e67f9ebb93e001d67614273d07608cad1fdf2ed457f1d05d2fba9e\" returns successfully" Apr 30 13:58:40.896666 containerd[1823]: time="2025-04-30T13:58:40.896627348Z" level=info msg="StopPodSandbox for \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\"" Apr 30 13:58:40.896696 containerd[1823]: time="2025-04-30T13:58:40.896670781Z" level=info msg="TearDown network for sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\" successfully" Apr 30 13:58:40.896696 containerd[1823]: time="2025-04-30T13:58:40.896677763Z" level=info msg="StopPodSandbox for \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\" returns successfully" Apr 30 13:58:40.896832 containerd[1823]: time="2025-04-30T13:58:40.896794750Z" level=info msg="RemovePodSandbox for \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\"" Apr 30 13:58:40.896832 containerd[1823]: time="2025-04-30T13:58:40.896804965Z" level=info msg="Forcibly stopping sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\"" Apr 30 13:58:40.896884 containerd[1823]: time="2025-04-30T13:58:40.896837503Z" level=info msg="TearDown network for sandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\" successfully" Apr 30 13:58:40.898036 containerd[1823]: time="2025-04-30T13:58:40.898019023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.898073 containerd[1823]: time="2025-04-30T13:58:40.898046267Z" level=info msg="RemovePodSandbox \"9a93dc7e80307ef7c1bf874c204452b63bc96e6e9373eefeb3ad7ba716bcb304\" returns successfully" Apr 30 13:58:40.898178 containerd[1823]: time="2025-04-30T13:58:40.898167889Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\"" Apr 30 13:58:40.898216 containerd[1823]: time="2025-04-30T13:58:40.898205359Z" level=info msg="TearDown network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" successfully" Apr 30 13:58:40.898216 containerd[1823]: time="2025-04-30T13:58:40.898211376Z" level=info msg="StopPodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" returns successfully" Apr 30 13:58:40.898300 containerd[1823]: time="2025-04-30T13:58:40.898288986Z" level=info msg="RemovePodSandbox for \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\"" Apr 30 13:58:40.898335 containerd[1823]: time="2025-04-30T13:58:40.898300338Z" level=info msg="Forcibly stopping sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\"" Apr 30 13:58:40.898366 containerd[1823]: time="2025-04-30T13:58:40.898330940Z" level=info msg="TearDown network for sandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" successfully" Apr 30 13:58:40.899421 containerd[1823]: time="2025-04-30T13:58:40.899409301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.899456 containerd[1823]: time="2025-04-30T13:58:40.899426694Z" level=info msg="RemovePodSandbox \"8d4dee837827ddfb6654687504c38af1e1d4d38fb37387edb3744e3d134c0d2f\" returns successfully" Apr 30 13:58:40.899541 containerd[1823]: time="2025-04-30T13:58:40.899530239Z" level=info msg="StopPodSandbox for \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\"" Apr 30 13:58:40.899575 containerd[1823]: time="2025-04-30T13:58:40.899568328Z" level=info msg="TearDown network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" successfully" Apr 30 13:58:40.899597 containerd[1823]: time="2025-04-30T13:58:40.899574801Z" level=info msg="StopPodSandbox for \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" returns successfully" Apr 30 13:58:40.899691 containerd[1823]: time="2025-04-30T13:58:40.899682983Z" level=info msg="RemovePodSandbox for \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\"" Apr 30 13:58:40.899715 containerd[1823]: time="2025-04-30T13:58:40.899693354Z" level=info msg="Forcibly stopping sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\"" Apr 30 13:58:40.899738 containerd[1823]: time="2025-04-30T13:58:40.899722884Z" level=info msg="TearDown network for sandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" successfully" Apr 30 13:58:40.900837 containerd[1823]: time="2025-04-30T13:58:40.900816188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.900837 containerd[1823]: time="2025-04-30T13:58:40.900832948Z" level=info msg="RemovePodSandbox \"64c421fdbbd5408570b566923b8758e3fec3b8d04a2e245aa977b29c47248323\" returns successfully" Apr 30 13:58:40.900944 containerd[1823]: time="2025-04-30T13:58:40.900915627Z" level=info msg="StopPodSandbox for \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\"" Apr 30 13:58:40.900983 containerd[1823]: time="2025-04-30T13:58:40.900951937Z" level=info msg="TearDown network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\" successfully" Apr 30 13:58:40.900983 containerd[1823]: time="2025-04-30T13:58:40.900971366Z" level=info msg="StopPodSandbox for \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\" returns successfully" Apr 30 13:58:40.901093 containerd[1823]: time="2025-04-30T13:58:40.901054192Z" level=info msg="RemovePodSandbox for \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\"" Apr 30 13:58:40.901093 containerd[1823]: time="2025-04-30T13:58:40.901065484Z" level=info msg="Forcibly stopping sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\"" Apr 30 13:58:40.901137 containerd[1823]: time="2025-04-30T13:58:40.901098744Z" level=info msg="TearDown network for sandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\" successfully" Apr 30 13:58:40.902222 containerd[1823]: time="2025-04-30T13:58:40.902179876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.902222 containerd[1823]: time="2025-04-30T13:58:40.902207019Z" level=info msg="RemovePodSandbox \"957e5033cda6d409016cce65ceb773c8434c632c0e8cf8d174b798f0aa4ef7c4\" returns successfully" Apr 30 13:58:40.902299 containerd[1823]: time="2025-04-30T13:58:40.902290547Z" level=info msg="StopPodSandbox for \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\"" Apr 30 13:58:40.902327 containerd[1823]: time="2025-04-30T13:58:40.902322805Z" level=info msg="TearDown network for sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\" successfully" Apr 30 13:58:40.902357 containerd[1823]: time="2025-04-30T13:58:40.902328201Z" level=info msg="StopPodSandbox for \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\" returns successfully" Apr 30 13:58:40.902446 containerd[1823]: time="2025-04-30T13:58:40.902399197Z" level=info msg="RemovePodSandbox for \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\"" Apr 30 13:58:40.902446 containerd[1823]: time="2025-04-30T13:58:40.902407699Z" level=info msg="Forcibly stopping sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\"" Apr 30 13:58:40.902446 containerd[1823]: time="2025-04-30T13:58:40.902435301Z" level=info msg="TearDown network for sandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\" successfully" Apr 30 13:58:40.903560 containerd[1823]: time="2025-04-30T13:58:40.903517766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.903560 containerd[1823]: time="2025-04-30T13:58:40.903535694Z" level=info msg="RemovePodSandbox \"200af8c23f5b98d30f6a4a3a624412c30826ddc73014a5bfd3e94e2c200bd34b\" returns successfully" Apr 30 13:58:40.903671 containerd[1823]: time="2025-04-30T13:58:40.903631355Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" Apr 30 13:58:40.903705 containerd[1823]: time="2025-04-30T13:58:40.903673833Z" level=info msg="TearDown network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" successfully" Apr 30 13:58:40.903705 containerd[1823]: time="2025-04-30T13:58:40.903680176Z" level=info msg="StopPodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" returns successfully" Apr 30 13:58:40.903789 containerd[1823]: time="2025-04-30T13:58:40.903769774Z" level=info msg="RemovePodSandbox for \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" Apr 30 13:58:40.903789 containerd[1823]: time="2025-04-30T13:58:40.903780860Z" level=info msg="Forcibly stopping sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\"" Apr 30 13:58:40.903843 containerd[1823]: time="2025-04-30T13:58:40.903812619Z" level=info msg="TearDown network for sandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" successfully" Apr 30 13:58:40.904939 containerd[1823]: time="2025-04-30T13:58:40.904919530Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.904939 containerd[1823]: time="2025-04-30T13:58:40.904937174Z" level=info msg="RemovePodSandbox \"3acdb5dc81f806941b1ead7952cf3ddd71800177cf4d8c31d6c79b27eed26f35\" returns successfully" Apr 30 13:58:40.905070 containerd[1823]: time="2025-04-30T13:58:40.905046089Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\"" Apr 30 13:58:40.905114 containerd[1823]: time="2025-04-30T13:58:40.905093846Z" level=info msg="TearDown network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" successfully" Apr 30 13:58:40.905139 containerd[1823]: time="2025-04-30T13:58:40.905113138Z" level=info msg="StopPodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" returns successfully" Apr 30 13:58:40.905223 containerd[1823]: time="2025-04-30T13:58:40.905212370Z" level=info msg="RemovePodSandbox for \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\"" Apr 30 13:58:40.905273 containerd[1823]: time="2025-04-30T13:58:40.905225994Z" level=info msg="Forcibly stopping sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\"" Apr 30 13:58:40.905305 containerd[1823]: time="2025-04-30T13:58:40.905276871Z" level=info msg="TearDown network for sandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" successfully" Apr 30 13:58:40.906418 containerd[1823]: time="2025-04-30T13:58:40.906403012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.906463 containerd[1823]: time="2025-04-30T13:58:40.906427098Z" level=info msg="RemovePodSandbox \"e72b07751eac2472de9bd61cfd0d2f4c6b6b104043f3e61f32ed916bba852861\" returns successfully" Apr 30 13:58:40.906555 containerd[1823]: time="2025-04-30T13:58:40.906544930Z" level=info msg="StopPodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\"" Apr 30 13:58:40.906596 containerd[1823]: time="2025-04-30T13:58:40.906588036Z" level=info msg="TearDown network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" successfully" Apr 30 13:58:40.906621 containerd[1823]: time="2025-04-30T13:58:40.906594907Z" level=info msg="StopPodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" returns successfully" Apr 30 13:58:40.906697 containerd[1823]: time="2025-04-30T13:58:40.906687339Z" level=info msg="RemovePodSandbox for \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\"" Apr 30 13:58:40.906717 containerd[1823]: time="2025-04-30T13:58:40.906699852Z" level=info msg="Forcibly stopping sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\"" Apr 30 13:58:40.906764 containerd[1823]: time="2025-04-30T13:58:40.906745269Z" level=info msg="TearDown network for sandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" successfully" Apr 30 13:58:40.907879 containerd[1823]: time="2025-04-30T13:58:40.907862596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.907918 containerd[1823]: time="2025-04-30T13:58:40.907880471Z" level=info msg="RemovePodSandbox \"735911ad360740bd3fa55229a1df566e7f4bb8999f5d5783b1d49a24ef80beff\" returns successfully" Apr 30 13:58:40.907994 containerd[1823]: time="2025-04-30T13:58:40.907983344Z" level=info msg="StopPodSandbox for \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\"" Apr 30 13:58:40.908040 containerd[1823]: time="2025-04-30T13:58:40.908032183Z" level=info msg="TearDown network for sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\" successfully" Apr 30 13:58:40.908040 containerd[1823]: time="2025-04-30T13:58:40.908039177Z" level=info msg="StopPodSandbox for \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\" returns successfully" Apr 30 13:58:40.908133 containerd[1823]: time="2025-04-30T13:58:40.908123947Z" level=info msg="RemovePodSandbox for \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\"" Apr 30 13:58:40.908153 containerd[1823]: time="2025-04-30T13:58:40.908135624Z" level=info msg="Forcibly stopping sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\"" Apr 30 13:58:40.908182 containerd[1823]: time="2025-04-30T13:58:40.908166859Z" level=info msg="TearDown network for sandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\" successfully" Apr 30 13:58:40.909257 containerd[1823]: time="2025-04-30T13:58:40.909246378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.909287 containerd[1823]: time="2025-04-30T13:58:40.909264111Z" level=info msg="RemovePodSandbox \"eb996e638b5d5ee4d37d7e3bca49521840cb38db9f8fcc2486a8ec2a6f4b0ea0\" returns successfully" Apr 30 13:58:40.909378 containerd[1823]: time="2025-04-30T13:58:40.909366623Z" level=info msg="StopPodSandbox for \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\"" Apr 30 13:58:40.909428 containerd[1823]: time="2025-04-30T13:58:40.909417566Z" level=info msg="TearDown network for sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\" successfully" Apr 30 13:58:40.909462 containerd[1823]: time="2025-04-30T13:58:40.909427819Z" level=info msg="StopPodSandbox for \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\" returns successfully" Apr 30 13:58:40.909581 containerd[1823]: time="2025-04-30T13:58:40.909571097Z" level=info msg="RemovePodSandbox for \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\"" Apr 30 13:58:40.909610 containerd[1823]: time="2025-04-30T13:58:40.909583472Z" level=info msg="Forcibly stopping sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\"" Apr 30 13:58:40.909636 containerd[1823]: time="2025-04-30T13:58:40.909618985Z" level=info msg="TearDown network for sandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\" successfully" Apr 30 13:58:40.910743 containerd[1823]: time="2025-04-30T13:58:40.910703058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.910743 containerd[1823]: time="2025-04-30T13:58:40.910721381Z" level=info msg="RemovePodSandbox \"fb00280c7b14b3650ae618c58000c82f0bce89f1c854bdb0e624919b472b5112\" returns successfully" Apr 30 13:58:40.910843 containerd[1823]: time="2025-04-30T13:58:40.910830137Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" Apr 30 13:58:40.910895 containerd[1823]: time="2025-04-30T13:58:40.910885414Z" level=info msg="TearDown network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" successfully" Apr 30 13:58:40.910935 containerd[1823]: time="2025-04-30T13:58:40.910893887Z" level=info msg="StopPodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" returns successfully" Apr 30 13:58:40.911056 containerd[1823]: time="2025-04-30T13:58:40.911044952Z" level=info msg="RemovePodSandbox for \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" Apr 30 13:58:40.911083 containerd[1823]: time="2025-04-30T13:58:40.911057764Z" level=info msg="Forcibly stopping sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\"" Apr 30 13:58:40.911105 containerd[1823]: time="2025-04-30T13:58:40.911091392Z" level=info msg="TearDown network for sandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" successfully" Apr 30 13:58:40.912201 containerd[1823]: time="2025-04-30T13:58:40.912165011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.912201 containerd[1823]: time="2025-04-30T13:58:40.912181962Z" level=info msg="RemovePodSandbox \"e43f12cb8fce8a758f351dbe81a7c9152e9e3030c2db1d4c6963a0d800043323\" returns successfully" Apr 30 13:58:40.912350 containerd[1823]: time="2025-04-30T13:58:40.912290354Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\"" Apr 30 13:58:40.912350 containerd[1823]: time="2025-04-30T13:58:40.912331776Z" level=info msg="TearDown network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" successfully" Apr 30 13:58:40.912350 containerd[1823]: time="2025-04-30T13:58:40.912338314Z" level=info msg="StopPodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" returns successfully" Apr 30 13:58:40.912477 containerd[1823]: time="2025-04-30T13:58:40.912440824Z" level=info msg="RemovePodSandbox for \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\"" Apr 30 13:58:40.912477 containerd[1823]: time="2025-04-30T13:58:40.912452700Z" level=info msg="Forcibly stopping sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\"" Apr 30 13:58:40.912529 containerd[1823]: time="2025-04-30T13:58:40.912485191Z" level=info msg="TearDown network for sandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" successfully" Apr 30 13:58:40.913583 containerd[1823]: time="2025-04-30T13:58:40.913562124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.913583 containerd[1823]: time="2025-04-30T13:58:40.913580563Z" level=info msg="RemovePodSandbox \"5bb68cb49a533167258ea8c494b3bf959d7b00389535955ed43a272d28ae2134\" returns successfully" Apr 30 13:58:40.913714 containerd[1823]: time="2025-04-30T13:58:40.913675628Z" level=info msg="StopPodSandbox for \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\"" Apr 30 13:58:40.913747 containerd[1823]: time="2025-04-30T13:58:40.913714544Z" level=info msg="TearDown network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" successfully" Apr 30 13:58:40.913747 containerd[1823]: time="2025-04-30T13:58:40.913732992Z" level=info msg="StopPodSandbox for \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" returns successfully" Apr 30 13:58:40.913862 containerd[1823]: time="2025-04-30T13:58:40.913816433Z" level=info msg="RemovePodSandbox for \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\"" Apr 30 13:58:40.913862 containerd[1823]: time="2025-04-30T13:58:40.913828593Z" level=info msg="Forcibly stopping sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\"" Apr 30 13:58:40.913911 containerd[1823]: time="2025-04-30T13:58:40.913861072Z" level=info msg="TearDown network for sandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" successfully" Apr 30 13:58:40.914959 containerd[1823]: time="2025-04-30T13:58:40.914920754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.914959 containerd[1823]: time="2025-04-30T13:58:40.914938475Z" level=info msg="RemovePodSandbox \"454d172eb172faefe42bac07348146eb70b393712144f28f41216312d8cb5248\" returns successfully" Apr 30 13:58:40.915043 containerd[1823]: time="2025-04-30T13:58:40.915034636Z" level=info msg="StopPodSandbox for \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\"" Apr 30 13:58:40.915087 containerd[1823]: time="2025-04-30T13:58:40.915078509Z" level=info msg="TearDown network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\" successfully" Apr 30 13:58:40.915111 containerd[1823]: time="2025-04-30T13:58:40.915087198Z" level=info msg="StopPodSandbox for \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\" returns successfully" Apr 30 13:58:40.915180 containerd[1823]: time="2025-04-30T13:58:40.915171937Z" level=info msg="RemovePodSandbox for \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\"" Apr 30 13:58:40.915203 containerd[1823]: time="2025-04-30T13:58:40.915181759Z" level=info msg="Forcibly stopping sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\"" Apr 30 13:58:40.915227 containerd[1823]: time="2025-04-30T13:58:40.915213270Z" level=info msg="TearDown network for sandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\" successfully" Apr 30 13:58:40.916300 containerd[1823]: time="2025-04-30T13:58:40.916279887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.916300 containerd[1823]: time="2025-04-30T13:58:40.916297186Z" level=info msg="RemovePodSandbox \"ae4473bde2ede20571a5a7b300d2e2370eb4d37517e7c4f6b39e6e206e56828e\" returns successfully" Apr 30 13:58:40.916438 containerd[1823]: time="2025-04-30T13:58:40.916425877Z" level=info msg="StopPodSandbox for \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\"" Apr 30 13:58:40.916479 containerd[1823]: time="2025-04-30T13:58:40.916465769Z" level=info msg="TearDown network for sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\" successfully" Apr 30 13:58:40.916479 containerd[1823]: time="2025-04-30T13:58:40.916472324Z" level=info msg="StopPodSandbox for \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\" returns successfully" Apr 30 13:58:40.916570 containerd[1823]: time="2025-04-30T13:58:40.916561372Z" level=info msg="RemovePodSandbox for \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\"" Apr 30 13:58:40.916593 containerd[1823]: time="2025-04-30T13:58:40.916571341Z" level=info msg="Forcibly stopping sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\"" Apr 30 13:58:40.916616 containerd[1823]: time="2025-04-30T13:58:40.916601496Z" level=info msg="TearDown network for sandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\" successfully" Apr 30 13:58:40.917660 containerd[1823]: time="2025-04-30T13:58:40.917638651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.917660 containerd[1823]: time="2025-04-30T13:58:40.917655807Z" level=info msg="RemovePodSandbox \"2c85cc20a221b964ed330c0b8827ab0b55fbf4da17be9697fcebafd16a965fc7\" returns successfully" Apr 30 13:58:40.917829 containerd[1823]: time="2025-04-30T13:58:40.917790020Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" Apr 30 13:58:40.917852 containerd[1823]: time="2025-04-30T13:58:40.917830286Z" level=info msg="TearDown network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" successfully" Apr 30 13:58:40.917852 containerd[1823]: time="2025-04-30T13:58:40.917836593Z" level=info msg="StopPodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" returns successfully" Apr 30 13:58:40.917945 containerd[1823]: time="2025-04-30T13:58:40.917929352Z" level=info msg="RemovePodSandbox for \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" Apr 30 13:58:40.917945 containerd[1823]: time="2025-04-30T13:58:40.917940512Z" level=info msg="Forcibly stopping sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\"" Apr 30 13:58:40.917988 containerd[1823]: time="2025-04-30T13:58:40.917971988Z" level=info msg="TearDown network for sandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" successfully" Apr 30 13:58:40.919092 containerd[1823]: time="2025-04-30T13:58:40.919056261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.919092 containerd[1823]: time="2025-04-30T13:58:40.919074234Z" level=info msg="RemovePodSandbox \"0d089abebf48b6dcd467dce5eb91d6ca38a0b448897e6e8e0a34d3c6630d3bb8\" returns successfully" Apr 30 13:58:40.919212 containerd[1823]: time="2025-04-30T13:58:40.919204627Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\"" Apr 30 13:58:40.919253 containerd[1823]: time="2025-04-30T13:58:40.919246614Z" level=info msg="TearDown network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" successfully" Apr 30 13:58:40.919273 containerd[1823]: time="2025-04-30T13:58:40.919253385Z" level=info msg="StopPodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" returns successfully" Apr 30 13:58:40.919486 containerd[1823]: time="2025-04-30T13:58:40.919447336Z" level=info msg="RemovePodSandbox for \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\"" Apr 30 13:58:40.919486 containerd[1823]: time="2025-04-30T13:58:40.919458975Z" level=info msg="Forcibly stopping sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\"" Apr 30 13:58:40.919542 containerd[1823]: time="2025-04-30T13:58:40.919491414Z" level=info msg="TearDown network for sandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" successfully" Apr 30 13:58:40.920570 containerd[1823]: time="2025-04-30T13:58:40.920550144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.920570 containerd[1823]: time="2025-04-30T13:58:40.920568351Z" level=info msg="RemovePodSandbox \"603f0fcd83b1713ce3edb785dbc05b6b44dea0ec540a313aceb8b4d537c34268\" returns successfully" Apr 30 13:58:40.920683 containerd[1823]: time="2025-04-30T13:58:40.920672744Z" level=info msg="StopPodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\"" Apr 30 13:58:40.920723 containerd[1823]: time="2025-04-30T13:58:40.920709250Z" level=info msg="TearDown network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" successfully" Apr 30 13:58:40.920723 containerd[1823]: time="2025-04-30T13:58:40.920714787Z" level=info msg="StopPodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" returns successfully" Apr 30 13:58:40.920827 containerd[1823]: time="2025-04-30T13:58:40.920815180Z" level=info msg="RemovePodSandbox for \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\"" Apr 30 13:58:40.920865 containerd[1823]: time="2025-04-30T13:58:40.920828035Z" level=info msg="Forcibly stopping sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\"" Apr 30 13:58:40.920901 containerd[1823]: time="2025-04-30T13:58:40.920879821Z" level=info msg="TearDown network for sandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" successfully" Apr 30 13:58:40.922037 containerd[1823]: time="2025-04-30T13:58:40.922022771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.922079 containerd[1823]: time="2025-04-30T13:58:40.922047471Z" level=info msg="RemovePodSandbox \"b2e344a56563234f45435df7e4e814b193147b641c62b30cbab44e7be5eecee1\" returns successfully" Apr 30 13:58:40.922199 containerd[1823]: time="2025-04-30T13:58:40.922187631Z" level=info msg="StopPodSandbox for \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\"" Apr 30 13:58:40.922243 containerd[1823]: time="2025-04-30T13:58:40.922230884Z" level=info msg="TearDown network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\" successfully" Apr 30 13:58:40.922264 containerd[1823]: time="2025-04-30T13:58:40.922243708Z" level=info msg="StopPodSandbox for \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\" returns successfully" Apr 30 13:58:40.922357 containerd[1823]: time="2025-04-30T13:58:40.922347266Z" level=info msg="RemovePodSandbox for \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\"" Apr 30 13:58:40.922380 containerd[1823]: time="2025-04-30T13:58:40.922360782Z" level=info msg="Forcibly stopping sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\"" Apr 30 13:58:40.922411 containerd[1823]: time="2025-04-30T13:58:40.922395018Z" level=info msg="TearDown network for sandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\" successfully" Apr 30 13:58:40.923515 containerd[1823]: time="2025-04-30T13:58:40.923494297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.923515 containerd[1823]: time="2025-04-30T13:58:40.923511609Z" level=info msg="RemovePodSandbox \"f37777f9bc813da38b354ad32b29370e31afbde957795cc83f8b6050f9bb3098\" returns successfully" Apr 30 13:58:40.923673 containerd[1823]: time="2025-04-30T13:58:40.923661295Z" level=info msg="StopPodSandbox for \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\"" Apr 30 13:58:40.923726 containerd[1823]: time="2025-04-30T13:58:40.923716157Z" level=info msg="TearDown network for sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\" successfully" Apr 30 13:58:40.923760 containerd[1823]: time="2025-04-30T13:58:40.923724682Z" level=info msg="StopPodSandbox for \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\" returns successfully" Apr 30 13:58:40.923857 containerd[1823]: time="2025-04-30T13:58:40.923845468Z" level=info msg="RemovePodSandbox for \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\"" Apr 30 13:58:40.923890 containerd[1823]: time="2025-04-30T13:58:40.923860511Z" level=info msg="Forcibly stopping sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\"" Apr 30 13:58:40.923918 containerd[1823]: time="2025-04-30T13:58:40.923900649Z" level=info msg="TearDown network for sandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\" successfully" Apr 30 13:58:40.925025 containerd[1823]: time="2025-04-30T13:58:40.924987411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.925025 containerd[1823]: time="2025-04-30T13:58:40.925006745Z" level=info msg="RemovePodSandbox \"219213ff8749ce41bff9b8192fb7d2dde7b13549c6345b45d0040e72cbd39217\" returns successfully" Apr 30 13:58:40.925135 containerd[1823]: time="2025-04-30T13:58:40.925126301Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" Apr 30 13:58:40.925173 containerd[1823]: time="2025-04-30T13:58:40.925166815Z" level=info msg="TearDown network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" successfully" Apr 30 13:58:40.925197 containerd[1823]: time="2025-04-30T13:58:40.925173473Z" level=info msg="StopPodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" returns successfully" Apr 30 13:58:40.925290 containerd[1823]: time="2025-04-30T13:58:40.925281249Z" level=info msg="RemovePodSandbox for \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" Apr 30 13:58:40.925319 containerd[1823]: time="2025-04-30T13:58:40.925293559Z" level=info msg="Forcibly stopping sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\"" Apr 30 13:58:40.925342 containerd[1823]: time="2025-04-30T13:58:40.925326123Z" level=info msg="TearDown network for sandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" successfully" Apr 30 13:58:40.926447 containerd[1823]: time="2025-04-30T13:58:40.926422670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.926447 containerd[1823]: time="2025-04-30T13:58:40.926444590Z" level=info msg="RemovePodSandbox \"5a33fe05b947640ec026aa7254f2f9da5dc8186188e5ed3335e4b669779e2a8d\" returns successfully" Apr 30 13:58:40.926567 containerd[1823]: time="2025-04-30T13:58:40.926548912Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\"" Apr 30 13:58:40.926602 containerd[1823]: time="2025-04-30T13:58:40.926596590Z" level=info msg="TearDown network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" successfully" Apr 30 13:58:40.926626 containerd[1823]: time="2025-04-30T13:58:40.926603144Z" level=info msg="StopPodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" returns successfully" Apr 30 13:58:40.926758 containerd[1823]: time="2025-04-30T13:58:40.926746448Z" level=info msg="RemovePodSandbox for \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\"" Apr 30 13:58:40.926796 containerd[1823]: time="2025-04-30T13:58:40.926760867Z" level=info msg="Forcibly stopping sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\"" Apr 30 13:58:40.926832 containerd[1823]: time="2025-04-30T13:58:40.926807372Z" level=info msg="TearDown network for sandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" successfully" Apr 30 13:58:40.927941 containerd[1823]: time="2025-04-30T13:58:40.927928224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.927991 containerd[1823]: time="2025-04-30T13:58:40.927950438Z" level=info msg="RemovePodSandbox \"3187b7b813613f687712b90bf6099e6cc9ff2d54d60839d92e13cc4191340139\" returns successfully" Apr 30 13:58:40.928123 containerd[1823]: time="2025-04-30T13:58:40.928111888Z" level=info msg="StopPodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\"" Apr 30 13:58:40.928163 containerd[1823]: time="2025-04-30T13:58:40.928156195Z" level=info msg="TearDown network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" successfully" Apr 30 13:58:40.928184 containerd[1823]: time="2025-04-30T13:58:40.928163255Z" level=info msg="StopPodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" returns successfully" Apr 30 13:58:40.928295 containerd[1823]: time="2025-04-30T13:58:40.928262950Z" level=info msg="RemovePodSandbox for \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\"" Apr 30 13:58:40.928295 containerd[1823]: time="2025-04-30T13:58:40.928272852Z" level=info msg="Forcibly stopping sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\"" Apr 30 13:58:40.928353 containerd[1823]: time="2025-04-30T13:58:40.928301679Z" level=info msg="TearDown network for sandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" successfully" Apr 30 13:58:40.929420 containerd[1823]: time="2025-04-30T13:58:40.929378604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.929420 containerd[1823]: time="2025-04-30T13:58:40.929401944Z" level=info msg="RemovePodSandbox \"cca569c85c631223f340159a77d14211e6f017ed3ea1c1c07d3849909d1c9687\" returns successfully" Apr 30 13:58:40.929516 containerd[1823]: time="2025-04-30T13:58:40.929505307Z" level=info msg="StopPodSandbox for \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\"" Apr 30 13:58:40.929555 containerd[1823]: time="2025-04-30T13:58:40.929546376Z" level=info msg="TearDown network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\" successfully" Apr 30 13:58:40.929555 containerd[1823]: time="2025-04-30T13:58:40.929553343Z" level=info msg="StopPodSandbox for \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\" returns successfully" Apr 30 13:58:40.929689 containerd[1823]: time="2025-04-30T13:58:40.929677493Z" level=info msg="RemovePodSandbox for \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\"" Apr 30 13:58:40.929726 containerd[1823]: time="2025-04-30T13:58:40.929690262Z" level=info msg="Forcibly stopping sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\"" Apr 30 13:58:40.929769 containerd[1823]: time="2025-04-30T13:58:40.929758209Z" level=info msg="TearDown network for sandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\" successfully" Apr 30 13:58:40.931505 containerd[1823]: time="2025-04-30T13:58:40.931489902Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.931541 containerd[1823]: time="2025-04-30T13:58:40.931513161Z" level=info msg="RemovePodSandbox \"71628cb2d6420c55ad391903b78fd5f342e0550daeef0346fa0cdad040639abb\" returns successfully" Apr 30 13:58:40.931679 containerd[1823]: time="2025-04-30T13:58:40.931668120Z" level=info msg="StopPodSandbox for \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\"" Apr 30 13:58:40.931740 containerd[1823]: time="2025-04-30T13:58:40.931718626Z" level=info msg="TearDown network for sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\" successfully" Apr 30 13:58:40.931765 containerd[1823]: time="2025-04-30T13:58:40.931740930Z" level=info msg="StopPodSandbox for \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\" returns successfully" Apr 30 13:58:40.931858 containerd[1823]: time="2025-04-30T13:58:40.931848152Z" level=info msg="RemovePodSandbox for \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\"" Apr 30 13:58:40.931882 containerd[1823]: time="2025-04-30T13:58:40.931859761Z" level=info msg="Forcibly stopping sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\"" Apr 30 13:58:40.931903 containerd[1823]: time="2025-04-30T13:58:40.931888780Z" level=info msg="TearDown network for sandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\" successfully" Apr 30 13:58:40.932995 containerd[1823]: time="2025-04-30T13:58:40.932983892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:58:40.933027 containerd[1823]: time="2025-04-30T13:58:40.933002173Z" level=info msg="RemovePodSandbox \"48a60b7412eca62e7921840136e374737ceeecf02c6a0a50521e0c6973fb4200\" returns successfully" Apr 30 13:58:43.398886 kubelet[3137]: I0430 13:58:43.398759 3137 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 14:02:24.901418 update_engine[1810]: I20250430 14:02:24.901350 1810 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 14:02:24.901418 update_engine[1810]: I20250430 14:02:24.901386 1810 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901479 1810 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901692 1810 omaha_request_params.cc:62] Current group set to beta Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901752 1810 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901758 1810 update_attempter.cc:643] Scheduling an action processor start. 
Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901767 1810 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901782 1810 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901854 1810 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901859 1810 omaha_request_action.cc:272] Request: Apr 30 14:02:24.902085 update_engine[1810]: Apr 30 14:02:24.902085 update_engine[1810]: Apr 30 14:02:24.902085 update_engine[1810]: Apr 30 14:02:24.902085 update_engine[1810]: Apr 30 14:02:24.902085 update_engine[1810]: Apr 30 14:02:24.902085 update_engine[1810]: Apr 30 14:02:24.902085 update_engine[1810]: Apr 30 14:02:24.902085 update_engine[1810]: Apr 30 14:02:24.902085 update_engine[1810]: I20250430 14:02:24.901863 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 14:02:24.902354 locksmithd[1860]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 14:02:24.902684 update_engine[1810]: I20250430 14:02:24.902644 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 14:02:24.902854 update_engine[1810]: I20250430 14:02:24.902814 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 14:02:24.915438 update_engine[1810]: E20250430 14:02:24.915326 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 14:02:24.915632 update_engine[1810]: I20250430 14:02:24.915509 1810 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 14:02:34.818693 update_engine[1810]: I20250430 14:02:34.818522 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 14:02:34.819718 update_engine[1810]: I20250430 14:02:34.819080 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 14:02:34.819840 update_engine[1810]: I20250430 14:02:34.819711 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 14:02:34.820297 update_engine[1810]: E20250430 14:02:34.820168 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 14:02:34.820496 update_engine[1810]: I20250430 14:02:34.820356 1810 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 14:02:44.820949 update_engine[1810]: I20250430 14:02:44.820780 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 14:02:44.821977 update_engine[1810]: I20250430 14:02:44.821385 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 14:02:44.822099 update_engine[1810]: I20250430 14:02:44.821977 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 14:02:44.822446 update_engine[1810]: E20250430 14:02:44.822352 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 14:02:44.822640 update_engine[1810]: I20250430 14:02:44.822467 1810 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 14:02:54.821020 update_engine[1810]: I20250430 14:02:54.820847 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 14:02:54.822057 update_engine[1810]: I20250430 14:02:54.821460 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 14:02:54.822229 update_engine[1810]: I20250430 14:02:54.822046 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 14:02:54.822425 update_engine[1810]: E20250430 14:02:54.822361 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 14:02:54.822575 update_engine[1810]: I20250430 14:02:54.822460 1810 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 14:02:54.822575 update_engine[1810]: I20250430 14:02:54.822486 1810 omaha_request_action.cc:617] Omaha request response: Apr 30 14:02:54.822777 update_engine[1810]: E20250430 14:02:54.822643 1810 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 14:02:54.822777 update_engine[1810]: I20250430 14:02:54.822688 1810 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 14:02:54.822777 update_engine[1810]: I20250430 14:02:54.822707 1810 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 14:02:54.822777 update_engine[1810]: I20250430 14:02:54.822722 1810 update_attempter.cc:306] Processing Done. Apr 30 14:02:54.822777 update_engine[1810]: E20250430 14:02:54.822752 1810 update_attempter.cc:619] Update failed. 
Apr 30 14:02:54.822777 update_engine[1810]: I20250430 14:02:54.822769 1810 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 14:02:54.823344 update_engine[1810]: I20250430 14:02:54.822786 1810 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 14:02:54.823344 update_engine[1810]: I20250430 14:02:54.822804 1810 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 30 14:02:54.823344 update_engine[1810]: I20250430 14:02:54.822963 1810 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 14:02:54.823344 update_engine[1810]: I20250430 14:02:54.823048 1810 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 14:02:54.823344 update_engine[1810]: I20250430 14:02:54.823082 1810 omaha_request_action.cc:272] Request: Apr 30 14:02:54.823344 update_engine[1810]: Apr 30 14:02:54.823344 update_engine[1810]: Apr 30 14:02:54.823344 update_engine[1810]: Apr 30 14:02:54.823344 update_engine[1810]: Apr 30 14:02:54.823344 update_engine[1810]: Apr 30 14:02:54.823344 update_engine[1810]: Apr 30 14:02:54.823344 update_engine[1810]: I20250430 14:02:54.823110 1810 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 14:02:54.824375 update_engine[1810]: I20250430 14:02:54.823632 1810 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 14:02:54.824375 update_engine[1810]: I20250430 14:02:54.824076 1810 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 14:02:54.824576 locksmithd[1860]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 14:02:54.825200 update_engine[1810]: E20250430 14:02:54.824404 1810 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 14:02:54.825200 update_engine[1810]: I20250430 14:02:54.824502 1810 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 14:02:54.825200 update_engine[1810]: I20250430 14:02:54.824524 1810 omaha_request_action.cc:617] Omaha request response: Apr 30 14:02:54.825200 update_engine[1810]: I20250430 14:02:54.824543 1810 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 14:02:54.825200 update_engine[1810]: I20250430 14:02:54.824558 1810 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 14:02:54.825200 update_engine[1810]: I20250430 14:02:54.824573 1810 update_attempter.cc:306] Processing Done. Apr 30 14:02:54.825200 update_engine[1810]: I20250430 14:02:54.824589 1810 update_attempter.cc:310] Error event sent. Apr 30 14:02:54.825200 update_engine[1810]: I20250430 14:02:54.824611 1810 update_check_scheduler.cc:74] Next update check in 46m17s Apr 30 14:02:54.825983 locksmithd[1860]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 14:03:35.453615 systemd[1]: Started sshd@9-147.75.202.185:22-147.75.109.163:36330.service - OpenSSH per-connection server daemon (147.75.109.163:36330). Apr 30 14:03:35.479526 sshd[7830]: Accepted publickey for core from 147.75.109.163 port 36330 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:03:35.480200 sshd-session[7830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:03:35.483168 systemd-logind[1805]: New session 12 of user core. 
Apr 30 14:03:35.496537 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 14:03:35.591201 sshd[7832]: Connection closed by 147.75.109.163 port 36330 Apr 30 14:03:35.591395 sshd-session[7830]: pam_unix(sshd:session): session closed for user core Apr 30 14:03:35.593191 systemd[1]: sshd@9-147.75.202.185:22-147.75.109.163:36330.service: Deactivated successfully. Apr 30 14:03:35.594336 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 14:03:35.595128 systemd-logind[1805]: Session 12 logged out. Waiting for processes to exit. Apr 30 14:03:35.595810 systemd-logind[1805]: Removed session 12. Apr 30 14:03:37.717899 systemd[1]: Started sshd@10-147.75.202.185:22-159.223.22.227:37226.service - OpenSSH per-connection server daemon (159.223.22.227:37226). Apr 30 14:03:38.515688 sshd[7871]: Connection closed by authenticating user root 159.223.22.227 port 37226 [preauth] Apr 30 14:03:38.520824 systemd[1]: sshd@10-147.75.202.185:22-159.223.22.227:37226.service: Deactivated successfully. Apr 30 14:03:40.630526 systemd[1]: Started sshd@11-147.75.202.185:22-147.75.109.163:44380.service - OpenSSH per-connection server daemon (147.75.109.163:44380). Apr 30 14:03:40.656718 sshd[7877]: Accepted publickey for core from 147.75.109.163 port 44380 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:03:40.657410 sshd-session[7877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:03:40.660177 systemd-logind[1805]: New session 13 of user core. Apr 30 14:03:40.673485 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 14:03:40.760643 sshd[7879]: Connection closed by 147.75.109.163 port 44380 Apr 30 14:03:40.760818 sshd-session[7877]: pam_unix(sshd:session): session closed for user core Apr 30 14:03:40.762528 systemd[1]: sshd@11-147.75.202.185:22-147.75.109.163:44380.service: Deactivated successfully. Apr 30 14:03:40.763597 systemd[1]: session-13.scope: Deactivated successfully. 
Apr 30 14:03:40.764305 systemd-logind[1805]: Session 13 logged out. Waiting for processes to exit. Apr 30 14:03:40.765005 systemd-logind[1805]: Removed session 13. Apr 30 14:03:45.795658 systemd[1]: Started sshd@12-147.75.202.185:22-147.75.109.163:44388.service - OpenSSH per-connection server daemon (147.75.109.163:44388). Apr 30 14:03:45.822950 sshd[7907]: Accepted publickey for core from 147.75.109.163 port 44388 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:03:45.823712 sshd-session[7907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:03:45.826925 systemd-logind[1805]: New session 14 of user core. Apr 30 14:03:45.835515 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 14:03:45.921895 sshd[7909]: Connection closed by 147.75.109.163 port 44388 Apr 30 14:03:45.922069 sshd-session[7907]: pam_unix(sshd:session): session closed for user core Apr 30 14:03:45.923516 systemd[1]: sshd@12-147.75.202.185:22-147.75.109.163:44388.service: Deactivated successfully. Apr 30 14:03:45.924547 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 14:03:45.925216 systemd-logind[1805]: Session 14 logged out. Waiting for processes to exit. Apr 30 14:03:45.925844 systemd-logind[1805]: Removed session 14. Apr 30 14:03:50.934942 systemd[1]: Started sshd@13-147.75.202.185:22-147.75.109.163:54942.service - OpenSSH per-connection server daemon (147.75.109.163:54942). Apr 30 14:03:50.966379 sshd[7937]: Accepted publickey for core from 147.75.109.163 port 54942 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:03:50.969772 sshd-session[7937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:03:50.981732 systemd-logind[1805]: New session 15 of user core. Apr 30 14:03:50.993777 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 30 14:03:51.090929 sshd[7939]: Connection closed by 147.75.109.163 port 54942 Apr 30 14:03:51.091132 sshd-session[7937]: pam_unix(sshd:session): session closed for user core Apr 30 14:03:51.110654 systemd[1]: sshd@13-147.75.202.185:22-147.75.109.163:54942.service: Deactivated successfully. Apr 30 14:03:51.111576 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 14:03:51.112333 systemd-logind[1805]: Session 15 logged out. Waiting for processes to exit. Apr 30 14:03:51.113083 systemd[1]: Started sshd@14-147.75.202.185:22-147.75.109.163:54958.service - OpenSSH per-connection server daemon (147.75.109.163:54958). Apr 30 14:03:51.113684 systemd-logind[1805]: Removed session 15. Apr 30 14:03:51.144721 sshd[7964]: Accepted publickey for core from 147.75.109.163 port 54958 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:03:51.148209 sshd-session[7964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:03:51.160556 systemd-logind[1805]: New session 16 of user core. Apr 30 14:03:51.188798 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 14:03:51.338348 sshd[7968]: Connection closed by 147.75.109.163 port 54958 Apr 30 14:03:51.338566 sshd-session[7964]: pam_unix(sshd:session): session closed for user core Apr 30 14:03:51.352515 systemd[1]: sshd@14-147.75.202.185:22-147.75.109.163:54958.service: Deactivated successfully. Apr 30 14:03:51.353404 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 14:03:51.354126 systemd-logind[1805]: Session 16 logged out. Waiting for processes to exit. Apr 30 14:03:51.354812 systemd[1]: Started sshd@15-147.75.202.185:22-147.75.109.163:54960.service - OpenSSH per-connection server daemon (147.75.109.163:54960). Apr 30 14:03:51.355185 systemd-logind[1805]: Removed session 16. 
Apr 30 14:03:51.382367 sshd[7990]: Accepted publickey for core from 147.75.109.163 port 54960 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:03:51.383025 sshd-session[7990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:03:51.385897 systemd-logind[1805]: New session 17 of user core. Apr 30 14:03:51.398486 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 14:03:51.542979 sshd[7995]: Connection closed by 147.75.109.163 port 54960 Apr 30 14:03:51.543194 sshd-session[7990]: pam_unix(sshd:session): session closed for user core Apr 30 14:03:51.545418 systemd[1]: sshd@15-147.75.202.185:22-147.75.109.163:54960.service: Deactivated successfully. Apr 30 14:03:51.546485 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 14:03:51.547016 systemd-logind[1805]: Session 17 logged out. Waiting for processes to exit. Apr 30 14:03:51.547874 systemd-logind[1805]: Removed session 17. Apr 30 14:03:56.557292 systemd[1]: Started sshd@16-147.75.202.185:22-147.75.109.163:54964.service - OpenSSH per-connection server daemon (147.75.109.163:54964). Apr 30 14:03:56.587328 sshd[8025]: Accepted publickey for core from 147.75.109.163 port 54964 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:03:56.588056 sshd-session[8025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:03:56.590961 systemd-logind[1805]: New session 18 of user core. Apr 30 14:03:56.602523 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 14:03:56.692532 sshd[8027]: Connection closed by 147.75.109.163 port 54964 Apr 30 14:03:56.692704 sshd-session[8025]: pam_unix(sshd:session): session closed for user core Apr 30 14:03:56.694216 systemd[1]: sshd@16-147.75.202.185:22-147.75.109.163:54964.service: Deactivated successfully. Apr 30 14:03:56.695264 systemd[1]: session-18.scope: Deactivated successfully. 
Apr 30 14:03:56.696048 systemd-logind[1805]: Session 18 logged out. Waiting for processes to exit. Apr 30 14:03:56.696664 systemd-logind[1805]: Removed session 18. Apr 30 14:04:01.717857 systemd[1]: Started sshd@17-147.75.202.185:22-147.75.109.163:39642.service - OpenSSH per-connection server daemon (147.75.109.163:39642). Apr 30 14:04:01.745797 sshd[8073]: Accepted publickey for core from 147.75.109.163 port 39642 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:01.746470 sshd-session[8073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:01.749186 systemd-logind[1805]: New session 19 of user core. Apr 30 14:04:01.770461 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 14:04:01.859744 sshd[8075]: Connection closed by 147.75.109.163 port 39642 Apr 30 14:04:01.859946 sshd-session[8073]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:01.861927 systemd[1]: sshd@17-147.75.202.185:22-147.75.109.163:39642.service: Deactivated successfully. Apr 30 14:04:01.862950 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 14:04:01.863483 systemd-logind[1805]: Session 19 logged out. Waiting for processes to exit. Apr 30 14:04:01.863952 systemd-logind[1805]: Removed session 19. Apr 30 14:04:06.872715 systemd[1]: Started sshd@18-147.75.202.185:22-147.75.109.163:55996.service - OpenSSH per-connection server daemon (147.75.109.163:55996). Apr 30 14:04:06.903945 sshd[8144]: Accepted publickey for core from 147.75.109.163 port 55996 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:06.907354 sshd-session[8144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:06.919789 systemd-logind[1805]: New session 20 of user core. Apr 30 14:04:06.930692 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 30 14:04:07.033455 sshd[8146]: Connection closed by 147.75.109.163 port 55996 Apr 30 14:04:07.034326 sshd-session[8144]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:07.071396 systemd[1]: sshd@18-147.75.202.185:22-147.75.109.163:55996.service: Deactivated successfully. Apr 30 14:04:07.075580 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 14:04:07.079044 systemd-logind[1805]: Session 20 logged out. Waiting for processes to exit. Apr 30 14:04:07.096023 systemd[1]: Started sshd@19-147.75.202.185:22-147.75.109.163:56006.service - OpenSSH per-connection server daemon (147.75.109.163:56006). Apr 30 14:04:07.098794 systemd-logind[1805]: Removed session 20. Apr 30 14:04:07.151805 sshd[8171]: Accepted publickey for core from 147.75.109.163 port 56006 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:07.152865 sshd-session[8171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:07.156523 systemd-logind[1805]: New session 21 of user core. Apr 30 14:04:07.178556 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 14:04:07.321940 sshd[8176]: Connection closed by 147.75.109.163 port 56006 Apr 30 14:04:07.322118 sshd-session[8171]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:07.344755 systemd[1]: sshd@19-147.75.202.185:22-147.75.109.163:56006.service: Deactivated successfully. Apr 30 14:04:07.345780 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 14:04:07.346608 systemd-logind[1805]: Session 21 logged out. Waiting for processes to exit. Apr 30 14:04:07.347499 systemd[1]: Started sshd@20-147.75.202.185:22-147.75.109.163:56020.service - OpenSSH per-connection server daemon (147.75.109.163:56020). Apr 30 14:04:07.348037 systemd-logind[1805]: Removed session 21. 
Apr 30 14:04:07.380576 sshd[8198]: Accepted publickey for core from 147.75.109.163 port 56020 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:07.381443 sshd-session[8198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:07.385132 systemd-logind[1805]: New session 22 of user core. Apr 30 14:04:07.404515 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 14:04:08.709861 sshd[8202]: Connection closed by 147.75.109.163 port 56020 Apr 30 14:04:08.710019 sshd-session[8198]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:08.727529 systemd[1]: sshd@20-147.75.202.185:22-147.75.109.163:56020.service: Deactivated successfully. Apr 30 14:04:08.728425 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 14:04:08.728533 systemd[1]: session-22.scope: Consumed 509ms CPU time, 70.9M memory peak. Apr 30 14:04:08.729075 systemd-logind[1805]: Session 22 logged out. Waiting for processes to exit. Apr 30 14:04:08.729811 systemd[1]: Started sshd@21-147.75.202.185:22-147.75.109.163:56034.service - OpenSSH per-connection server daemon (147.75.109.163:56034). Apr 30 14:04:08.730355 systemd-logind[1805]: Removed session 22. Apr 30 14:04:08.759530 sshd[8229]: Accepted publickey for core from 147.75.109.163 port 56034 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:08.762861 sshd-session[8229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:08.775947 systemd-logind[1805]: New session 23 of user core. Apr 30 14:04:08.800706 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 14:04:08.983479 sshd[8235]: Connection closed by 147.75.109.163 port 56034 Apr 30 14:04:08.983647 sshd-session[8229]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:08.993345 systemd[1]: sshd@21-147.75.202.185:22-147.75.109.163:56034.service: Deactivated successfully. 
Apr 30 14:04:08.994228 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 14:04:08.994991 systemd-logind[1805]: Session 23 logged out. Waiting for processes to exit. Apr 30 14:04:08.995666 systemd[1]: Started sshd@22-147.75.202.185:22-147.75.109.163:56040.service - OpenSSH per-connection server daemon (147.75.109.163:56040). Apr 30 14:04:08.996127 systemd-logind[1805]: Removed session 23. Apr 30 14:04:09.024198 sshd[8258]: Accepted publickey for core from 147.75.109.163 port 56040 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:09.024908 sshd-session[8258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:09.027893 systemd-logind[1805]: New session 24 of user core. Apr 30 14:04:09.041492 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 14:04:09.172611 sshd[8262]: Connection closed by 147.75.109.163 port 56040 Apr 30 14:04:09.172791 sshd-session[8258]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:09.174507 systemd[1]: sshd@22-147.75.202.185:22-147.75.109.163:56040.service: Deactivated successfully. Apr 30 14:04:09.175483 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 14:04:09.176148 systemd-logind[1805]: Session 24 logged out. Waiting for processes to exit. Apr 30 14:04:09.176843 systemd-logind[1805]: Removed session 24. Apr 30 14:04:14.196900 systemd[1]: Started sshd@23-147.75.202.185:22-147.75.109.163:56056.service - OpenSSH per-connection server daemon (147.75.109.163:56056). Apr 30 14:04:14.225566 sshd[8291]: Accepted publickey for core from 147.75.109.163 port 56056 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:14.226403 sshd-session[8291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:14.229710 systemd-logind[1805]: New session 25 of user core. Apr 30 14:04:14.251596 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 30 14:04:14.340494 sshd[8293]: Connection closed by 147.75.109.163 port 56056 Apr 30 14:04:14.340690 sshd-session[8291]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:14.342401 systemd[1]: sshd@23-147.75.202.185:22-147.75.109.163:56056.service: Deactivated successfully. Apr 30 14:04:14.343455 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 14:04:14.344220 systemd-logind[1805]: Session 25 logged out. Waiting for processes to exit. Apr 30 14:04:14.344885 systemd-logind[1805]: Removed session 25. Apr 30 14:04:19.370447 systemd[1]: Started sshd@24-147.75.202.185:22-147.75.109.163:33974.service - OpenSSH per-connection server daemon (147.75.109.163:33974). Apr 30 14:04:19.396383 sshd[8320]: Accepted publickey for core from 147.75.109.163 port 33974 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:19.397135 sshd-session[8320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:19.400201 systemd-logind[1805]: New session 26 of user core. Apr 30 14:04:19.410382 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 14:04:19.494936 sshd[8322]: Connection closed by 147.75.109.163 port 33974 Apr 30 14:04:19.495132 sshd-session[8320]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:19.496756 systemd[1]: sshd@24-147.75.202.185:22-147.75.109.163:33974.service: Deactivated successfully. Apr 30 14:04:19.497732 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 14:04:19.498531 systemd-logind[1805]: Session 26 logged out. Waiting for processes to exit. Apr 30 14:04:19.499139 systemd-logind[1805]: Removed session 26. Apr 30 14:04:24.538661 systemd[1]: Started sshd@25-147.75.202.185:22-147.75.109.163:33986.service - OpenSSH per-connection server daemon (147.75.109.163:33986). 
Apr 30 14:04:24.571179 sshd[8348]: Accepted publickey for core from 147.75.109.163 port 33986 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:24.572284 sshd-session[8348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:24.576922 systemd-logind[1805]: New session 27 of user core. Apr 30 14:04:24.590624 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 14:04:24.686092 sshd[8350]: Connection closed by 147.75.109.163 port 33986 Apr 30 14:04:24.686324 sshd-session[8348]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:24.688311 systemd[1]: sshd@25-147.75.202.185:22-147.75.109.163:33986.service: Deactivated successfully. Apr 30 14:04:24.689497 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 14:04:24.690487 systemd-logind[1805]: Session 27 logged out. Waiting for processes to exit. Apr 30 14:04:24.691251 systemd-logind[1805]: Removed session 27. Apr 30 14:04:29.719571 systemd[1]: Started sshd@26-147.75.202.185:22-147.75.109.163:53492.service - OpenSSH per-connection server daemon (147.75.109.163:53492). Apr 30 14:04:29.744752 sshd[8388]: Accepted publickey for core from 147.75.109.163 port 53492 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 14:04:29.745477 sshd-session[8388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 14:04:29.748174 systemd-logind[1805]: New session 28 of user core. Apr 30 14:04:29.768571 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 14:04:29.860702 sshd[8390]: Connection closed by 147.75.109.163 port 53492 Apr 30 14:04:29.860880 sshd-session[8388]: pam_unix(sshd:session): session closed for user core Apr 30 14:04:29.862816 systemd[1]: sshd@26-147.75.202.185:22-147.75.109.163:53492.service: Deactivated successfully. Apr 30 14:04:29.863814 systemd[1]: session-28.scope: Deactivated successfully. 
Apr 30 14:04:29.864232 systemd-logind[1805]: Session 28 logged out. Waiting for processes to exit. Apr 30 14:04:29.864941 systemd-logind[1805]: Removed session 28.