Jan 30 13:50:54.463381 kernel: microcode: updated early: 0xf4 -> 0x100, date = 2024-02-05 Jan 30 13:50:54.463400 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 13:50:54.463409 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:50:54.463416 kernel: BIOS-provided physical RAM map: Jan 30 13:50:54.463421 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jan 30 13:50:54.463426 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jan 30 13:50:54.463433 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jan 30 13:50:54.463438 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jan 30 13:50:54.463444 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jan 30 13:50:54.463449 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819ccfff] usable Jan 30 13:50:54.463454 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] ACPI NVS Jan 30 13:50:54.463460 kernel: BIOS-e820: [mem 0x00000000819ce000-0x00000000819cefff] reserved Jan 30 13:50:54.463466 kernel: BIOS-e820: [mem 0x00000000819cf000-0x000000008afccfff] usable Jan 30 13:50:54.463472 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jan 30 13:50:54.463479 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jan 30 13:50:54.463485 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jan 30 13:50:54.463492 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jan 30 13:50:54.463498 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jan 30 13:50:54.463504 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jan 30 13:50:54.463510 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 30 13:50:54.463516 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jan 30 13:50:54.463522 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jan 30 13:50:54.463528 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 30 13:50:54.463534 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jan 30 13:50:54.463540 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jan 30 13:50:54.463546 kernel: NX (Execute Disable) protection: active Jan 30 13:50:54.463552 kernel: APIC: Static calls initialized Jan 30 13:50:54.463558 kernel: SMBIOS 3.2.1 present. 
Jan 30 13:50:54.463565 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022 Jan 30 13:50:54.463572 kernel: tsc: Detected 3400.000 MHz processor Jan 30 13:50:54.463578 kernel: tsc: Detected 3399.906 MHz TSC Jan 30 13:50:54.463584 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:50:54.463591 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:50:54.463597 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jan 30 13:50:54.463603 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jan 30 13:50:54.463610 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:50:54.463616 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jan 30 13:50:54.463622 kernel: Using GB pages for direct mapping Jan 30 13:50:54.463629 kernel: ACPI: Early table checksum verification disabled Jan 30 13:50:54.463636 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jan 30 13:50:54.463645 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jan 30 13:50:54.463652 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jan 30 13:50:54.463658 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jan 30 13:50:54.463665 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jan 30 13:50:54.463673 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jan 30 13:50:54.463679 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jan 30 13:50:54.463686 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jan 30 13:50:54.463693 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jan 30 13:50:54.463699 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jan 30 13:50:54.463706 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jan 30 13:50:54.463712 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jan 30 13:50:54.463719 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jan 30 13:50:54.463727 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:50:54.463733 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jan 30 13:50:54.463740 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jan 30 13:50:54.463747 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:50:54.463753 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:50:54.463760 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jan 30 13:50:54.463766 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jan 30 13:50:54.463773 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:50:54.463781 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jan 30 13:50:54.463787 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jan 30 13:50:54.463794 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jan 30 13:50:54.463801 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jan 30 13:50:54.463807 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jan 30 13:50:54.463814 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jan 30 13:50:54.463820 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jan 30 13:50:54.463827 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jan 30 13:50:54.463834 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jan 30 13:50:54.463842 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jan 30 13:50:54.463848 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jan 30 13:50:54.463855 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jan 30 13:50:54.463861 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jan 30 13:50:54.463868 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jan 30 13:50:54.463874 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jan 30 13:50:54.463881 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jan 30 13:50:54.463888 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jan 30 13:50:54.463895 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jan 30 13:50:54.463902 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jan 30 13:50:54.463909 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jan 30 13:50:54.463915 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jan 30 13:50:54.463922 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jan 30 13:50:54.463928 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jan 30 13:50:54.463935 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jan 30 13:50:54.463941 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jan 30 13:50:54.463948 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jan 30 13:50:54.463954 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jan 30 13:50:54.463962 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jan 30 13:50:54.463969 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jan 30 13:50:54.463975 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jan 30 13:50:54.463982 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jan 30 13:50:54.463988 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jan 30 13:50:54.463995 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jan 30 13:50:54.464001 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jan 30 13:50:54.464008 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jan 30 13:50:54.464014 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jan 30 13:50:54.464022 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jan 30 13:50:54.464029 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jan 30 13:50:54.464035 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jan 30 13:50:54.464041 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jan 30 13:50:54.464048 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jan 30 13:50:54.464055 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Jan 30 13:50:54.464061 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jan 30 13:50:54.464068 kernel: No NUMA configuration found Jan 30 13:50:54.464074 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jan 30 13:50:54.464082 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Jan 30 13:50:54.464089 kernel: Zone ranges: Jan 30 13:50:54.464096 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:50:54.464102 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 
13:50:54.464109 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Jan 30 13:50:54.464115 kernel: Movable zone start for each node Jan 30 13:50:54.464122 kernel: Early memory node ranges Jan 30 13:50:54.464128 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 30 13:50:54.464135 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 30 13:50:54.464141 kernel: node 0: [mem 0x0000000040400000-0x00000000819ccfff] Jan 30 13:50:54.464149 kernel: node 0: [mem 0x00000000819cf000-0x000000008afccfff] Jan 30 13:50:54.464155 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jan 30 13:50:54.464162 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jan 30 13:50:54.464173 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jan 30 13:50:54.464182 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jan 30 13:50:54.464188 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:50:54.464196 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 30 13:50:54.464204 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 30 13:50:54.464211 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 30 13:50:54.464218 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jan 30 13:50:54.464225 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jan 30 13:50:54.464232 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jan 30 13:50:54.464239 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jan 30 13:50:54.464246 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 30 13:50:54.464253 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 30 13:50:54.464260 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 30 13:50:54.464268 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 30 13:50:54.464275 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 30 13:50:54.464282 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 30 13:50:54.464289 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 30 13:50:54.464296 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 30 13:50:54.464303 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 30 13:50:54.464310 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 30 13:50:54.464320 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 30 13:50:54.464327 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 30 13:50:54.464334 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 30 13:50:54.464342 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 30 13:50:54.464349 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 30 13:50:54.464356 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 30 13:50:54.464363 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 30 13:50:54.464370 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 30 13:50:54.464377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:50:54.464384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:50:54.464391 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:50:54.464398 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:50:54.464406 kernel: TSC deadline timer available Jan 30 13:50:54.464413 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 30 13:50:54.464420 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Jan 30 13:50:54.464427 kernel: Booting paravirtualized kernel on bare hardware Jan 30 13:50:54.464434 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:50:54.464442 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 30 13:50:54.464449 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 30 13:50:54.464456 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 30 13:50:54.464463 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 30 13:50:54.464472 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:50:54.464479 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:50:54.464486 kernel: random: crng init done Jan 30 13:50:54.464493 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 30 13:50:54.464500 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 30 13:50:54.464507 kernel: Fallback order for Node 0: 0 Jan 30 13:50:54.464514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jan 30 13:50:54.464521 kernel: Policy zone: Normal Jan 30 13:50:54.464529 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:50:54.464536 kernel: software IO TLB: area num 16. Jan 30 13:50:54.464544 kernel: Memory: 32718256K/33452980K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 734464K reserved, 0K cma-reserved) Jan 30 13:50:54.464551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 30 13:50:54.464558 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 13:50:54.464565 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:50:54.464572 kernel: Dynamic Preempt: voluntary Jan 30 13:50:54.464579 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:50:54.464587 kernel: rcu: RCU event tracing is enabled. Jan 30 13:50:54.464595 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 30 13:50:54.464602 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:50:54.464610 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:50:54.464616 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:50:54.464623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:50:54.464630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 30 13:50:54.464637 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 30 13:50:54.464644 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:50:54.464651 kernel: Console: colour VGA+ 80x25 Jan 30 13:50:54.464659 kernel: printk: console [tty0] enabled Jan 30 13:50:54.464666 kernel: printk: console [ttyS1] enabled Jan 30 13:50:54.464674 kernel: ACPI: Core revision 20230628 Jan 30 13:50:54.464681 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Jan 30 13:50:54.464688 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:50:54.464695 kernel: DMAR: Host address width 39 Jan 30 13:50:54.464702 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 30 13:50:54.464709 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 30 13:50:54.464716 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jan 30 13:50:54.464725 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jan 30 13:50:54.464732 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 30 13:50:54.464739 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jan 30 13:50:54.464746 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 30 13:50:54.464753 kernel: x2apic enabled Jan 30 13:50:54.464760 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 30 13:50:54.464767 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 30 13:50:54.464774 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 30 13:50:54.464781 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 30 13:50:54.464790 kernel: process: using mwait in idle threads Jan 30 13:50:54.464796 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:50:54.464803 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:50:54.464810 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:50:54.464817 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:50:54.464824 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:50:54.464831 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 30 13:50:54.464838 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:50:54.464845 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 30 13:50:54.464852 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 30 13:50:54.464859 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:50:54.464867 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:50:54.464874 kernel: TAA: Mitigation: TSX disabled Jan 30 13:50:54.464881 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 30 13:50:54.464888 kernel: SRBDS: Mitigation: Microcode Jan 30 13:50:54.464895 kernel: GDS: Mitigation: Microcode Jan 30 13:50:54.464902 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:50:54.464909 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:50:54.464916 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:50:54.464922 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 30 13:50:54.464929 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 30 13:50:54.464936 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:50:54.464944 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 30 13:50:54.464951 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 30 13:50:54.464958 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Jan 30 13:50:54.464965 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:50:54.464972 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:50:54.464979 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:50:54.464986 kernel: landlock: Up and running. Jan 30 13:50:54.464993 kernel: SELinux: Initializing. Jan 30 13:50:54.465000 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:50:54.465007 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:50:54.465014 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 30 13:50:54.465021 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:50:54.465029 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:50:54.465036 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:50:54.465043 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 30 13:50:54.465050 kernel: ... version: 4 Jan 30 13:50:54.465057 kernel: ... bit width: 48 Jan 30 13:50:54.465065 kernel: ... generic registers: 4 Jan 30 13:50:54.465072 kernel: ... value mask: 0000ffffffffffff Jan 30 13:50:54.465079 kernel: ... max period: 00007fffffffffff Jan 30 13:50:54.465086 kernel: ... fixed-purpose events: 3 Jan 30 13:50:54.465094 kernel: ... event mask: 000000070000000f Jan 30 13:50:54.465101 kernel: signal: max sigframe size: 2032 Jan 30 13:50:54.465108 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 30 13:50:54.465115 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:50:54.465122 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:50:54.465129 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 30 13:50:54.465136 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:50:54.465143 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:50:54.465150 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 30 13:50:54.465159 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:50:54.465166 kernel: smp: Brought up 1 node, 16 CPUs Jan 30 13:50:54.465173 kernel: smpboot: Max logical packages: 1 Jan 30 13:50:54.465180 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 30 13:50:54.465187 kernel: devtmpfs: initialized Jan 30 13:50:54.465194 kernel: x86/mm: Memory block size: 128MB Jan 30 13:50:54.465201 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cd000-0x819cdfff] (4096 bytes) Jan 30 13:50:54.465208 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 30 13:50:54.465217 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:50:54.465224 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 30 13:50:54.465231 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:50:54.465238 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:50:54.465245 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:50:54.465252 kernel: audit: type=2000 audit(1738245049.042:1): state=initialized audit_enabled=0 res=1 Jan 30 13:50:54.465259 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:50:54.465266 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:50:54.465273 kernel: cpuidle: using governor menu Jan 30 13:50:54.465282 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:50:54.465289 kernel: dca service started, version 1.12.1 Jan 30 13:50:54.465296 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 30 13:50:54.465303 kernel: PCI: Using configuration type 1 for base access Jan 30 13:50:54.465310 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 30 13:50:54.465317 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:50:54.465326 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:50:54.465333 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:50:54.465340 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:50:54.465348 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:50:54.465355 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:50:54.465362 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:50:54.465369 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:50:54.465376 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:50:54.465383 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 30 13:50:54.465390 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465397 kernel: ACPI: SSDT 0xFFFF8BE801EC1800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 30 13:50:54.465404 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465413 kernel: ACPI: SSDT 0xFFFF8BE801EBD800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 30 13:50:54.465420 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465427 kernel: ACPI: SSDT 0xFFFF8BE801569400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 30 13:50:54.465434 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465441 kernel: ACPI: SSDT 0xFFFF8BE801EB8800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 30 13:50:54.465448 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465454 kernel: ACPI: SSDT 0xFFFF8BE801ECF000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 30 13:50:54.465461 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465469 kernel: ACPI: SSDT 0xFFFF8BE800E39400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 30 13:50:54.465476 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 30 13:50:54.465484 kernel: ACPI: Interpreter enabled Jan 30 13:50:54.465491 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:50:54.465498 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:50:54.465505 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 30 13:50:54.465512 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 30 13:50:54.465519 kernel: HEST: Table parsing has been initialized. Jan 30 13:50:54.465526 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jan 30 13:50:54.465533 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:50:54.465540 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:50:54.465549 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 30 13:50:54.465556 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 30 13:50:54.465563 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 30 13:50:54.465570 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 30 13:50:54.465577 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 30 13:50:54.465584 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 30 13:50:54.465592 kernel: ACPI: \_TZ_.FN00: New power resource Jan 30 13:50:54.465599 kernel: ACPI: \_TZ_.FN01: New power resource Jan 30 13:50:54.465606 kernel: ACPI: \_TZ_.FN02: New power resource Jan 30 13:50:54.465614 kernel: ACPI: \_TZ_.FN03: New power resource Jan 30 13:50:54.465621 kernel: ACPI: \_TZ_.FN04: New power resource Jan 30 13:50:54.465628 kernel: ACPI: \PIN_: New power resource Jan 30 13:50:54.465635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 30 13:50:54.465729 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:50:54.465799 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 30 13:50:54.465865 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 30 13:50:54.465877 kernel: PCI host bridge to bus 0000:00 Jan 30 13:50:54.465941 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:50:54.465998 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:50:54.466053 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:50:54.466107 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 30 13:50:54.466163 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 30 13:50:54.466218 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 30 13:50:54.466296 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 30 13:50:54.466377 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 30 13:50:54.466444 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.466512 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 30 13:50:54.466577 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 30 13:50:54.466644 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 30 13:50:54.466713 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 30 13:50:54.466782 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 30 13:50:54.466846 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 30 13:50:54.466908 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 30 13:50:54.466975 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 30 13:50:54.467038 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 30 13:50:54.467105 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 30 13:50:54.467172 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 30 13:50:54.467236 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:50:54.467306 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 30 13:50:54.467374 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:50:54.467442 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 30 13:50:54.467508 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 30 13:50:54.467574 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 30 13:50:54.467650 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 30 13:50:54.467718 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 30 13:50:54.467782 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 30 13:50:54.467852 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 30 13:50:54.467915 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 30 13:50:54.467981 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 30 13:50:54.468048 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 30 13:50:54.468113 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 30 13:50:54.468175 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 30 13:50:54.468238 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 30 13:50:54.468301 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 30 13:50:54.468367 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 30 13:50:54.468435 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 30 13:50:54.468499 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 30 13:50:54.468568 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 30 13:50:54.468632 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.468705 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 30 13:50:54.468772 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.468842 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 30 13:50:54.468906 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.468975 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 30 13:50:54.469038 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.469110 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 30 13:50:54.469175 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.469244 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 30 13:50:54.469308 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:50:54.469380 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 30 13:50:54.469449 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 30 13:50:54.469516 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 30 13:50:54.469581 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 30 13:50:54.469651 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 30 13:50:54.469715 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 30 13:50:54.469787 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 30 13:50:54.469853 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 30 13:50:54.469918 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 30 13:50:54.469986 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 30 13:50:54.470051 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:50:54.470120 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Jan 30 13:50:54.470193 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 30 13:50:54.470339 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 30 13:50:54.470406 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 30 13:50:54.470471 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 30 13:50:54.470540 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:50:54.470604 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 13:50:54.470670 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:50:54.470733 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 13:50:54.470798 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:50:54.470863 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 13:50:54.470932 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 30 13:50:54.471002 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:50:54.471067 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 30 13:50:54.471132 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 13:50:54.471196 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 30 13:50:54.471262 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.471329 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 13:50:54.471394 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:50:54.471532 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 13:50:54.471602 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 13:50:54.471669 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:50:54.471733 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 30 13:50:54.471799 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 13:50:54.471863 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 30 13:50:54.471928 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.471991 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 13:50:54.472059 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:50:54.472121 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 13:50:54.472185 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 13:50:54.472256 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 13:50:54.472328 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 30 13:50:54.472396 kernel: pci 0000:06:00.0: supports D1 D2 Jan 30 13:50:54.472460 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:50:54.472528 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 13:50:54.472591 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 13:50:54.472655 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:50:54.472727 kernel: pci_bus 0000:07: extended config space not accessible Jan 30 13:50:54.472801 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 13:50:54.472940 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 30 13:50:54.473009 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 30 13:50:54.473081 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 13:50:54.473148 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:50:54.473216 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 13:50:54.473284 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:50:54.473353 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 13:50:54.473419 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:50:54.473485 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:50:54.473495 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 13:50:54.473506 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 13:50:54.473513 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 13:50:54.473521 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 13:50:54.473528 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 13:50:54.473536 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 30 13:50:54.473543 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 13:50:54.473551 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 13:50:54.473558 kernel: iommu: Default domain type: Translated Jan 30 13:50:54.473566 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:50:54.473575 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:50:54.473582 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:50:54.473590 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 13:50:54.473597 kernel: e820: reserve RAM buffer [mem 0x819cd000-0x83ffffff] Jan 30 13:50:54.473605 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 30 13:50:54.473612 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 30 13:50:54.473619 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 30 13:50:54.473626 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 30 13:50:54.473693 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 30 13:50:54.473765 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 30 13:50:54.473834 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:50:54.473845 kernel: vgaarb: loaded Jan 30 13:50:54.473853 kernel: clocksource: Switched to clocksource tsc-early Jan 30 13:50:54.473860 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:50:54.473868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:50:54.473876 kernel: pnp: PnP ACPI init Jan 30 13:50:54.473939 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 13:50:54.474004 kernel: pnp 00:02: [dma 0 disabled] Jan 30 13:50:54.474067 kernel: pnp 00:03: [dma 0 disabled] Jan 30 13:50:54.474132 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 30 13:50:54.474190 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 13:50:54.474324 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 30 13:50:54.474387 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 13:50:54.474448 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 13:50:54.474506 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 13:50:54.474562 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 30 13:50:54.474625 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 13:50:54.474695 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 13:50:54.474747 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 13:50:54.474800 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 13:50:54.474859 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 13:50:54.474913 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 13:50:54.474964 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 13:50:54.475015 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 13:50:54.475066 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 13:50:54.475117 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 13:50:54.475168 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 13:50:54.475228 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 13:50:54.475238 kernel: pnp: PnP ACPI: found 10 devices Jan 30 13:50:54.475246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:50:54.475253 kernel: NET: Registered PF_INET protocol family Jan 30 13:50:54.475259 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:50:54.475266 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 13:50:54.475273 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:50:54.475280 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:50:54.475289 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:50:54.475296 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 13:50:54.475302 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:50:54.475309 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:50:54.475316 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:50:54.475375 kernel: NET: Registered PF_XDP protocol family Jan 30 13:50:54.475456 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 30 13:50:54.475528 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 30 13:50:54.475588 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 30 13:50:54.475739 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:50:54.475804 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:50:54.475854 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:50:54.475903 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:50:54.475953 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:50:54.476002 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 13:50:54.476050 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:50:54.476099 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 13:50:54.476150 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 13:50:54.476198 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:50:54.476247 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 13:50:54.476296 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 13:50:54.476374 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 
13:50:54.476439 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 13:50:54.476488 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 13:50:54.476538 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 13:50:54.476588 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:50:54.476636 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:50:54.476685 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 13:50:54.476734 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 13:50:54.476783 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:50:54.476830 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 13:50:54.476876 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:50:54.476919 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:50:54.476963 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:50:54.477019 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 30 13:50:54.477076 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 13:50:54.477125 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 30 13:50:54.477171 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:50:54.477222 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 30 13:50:54.477267 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 30 13:50:54.477315 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 30 13:50:54.477399 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 30 13:50:54.477450 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 30 13:50:54.477494 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 30 13:50:54.477544 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 13:50:54.477590 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 30 13:50:54.477599 kernel: PCI: CLS 64 bytes, default 64 Jan 30 13:50:54.477605 kernel: DMAR: No ATSR found Jan 30 13:50:54.477610 kernel: DMAR: No SATC found Jan 30 13:50:54.477616 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 13:50:54.477665 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 30 13:50:54.477714 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 30 13:50:54.477766 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 30 13:50:54.477815 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 30 13:50:54.477863 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 30 13:50:54.477911 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 30 13:50:54.477959 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 30 13:50:54.478006 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 30 13:50:54.478055 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 30 13:50:54.478103 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 30 13:50:54.478155 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 30 13:50:54.478204 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 30 13:50:54.478252 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 30 13:50:54.478301 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 30 13:50:54.478373 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 30 13:50:54.478436 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 30 13:50:54.478484 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 30 
13:50:54.478532 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 30 13:50:54.478582 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 30 13:50:54.478631 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 30 13:50:54.478682 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 30 13:50:54.478813 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 30 13:50:54.478949 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 30 13:50:54.479091 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 30 13:50:54.479181 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 13:50:54.479231 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 30 13:50:54.479286 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 30 13:50:54.479295 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 13:50:54.479301 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:50:54.479307 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 30 13:50:54.479313 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 30 13:50:54.479332 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 13:50:54.479339 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 13:50:54.479345 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 13:50:54.479404 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 13:50:54.479416 kernel: Initialise system trusted keyrings Jan 30 13:50:54.479422 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 13:50:54.479428 kernel: Key type asymmetric registered Jan 30 13:50:54.479434 kernel: Asymmetric key parser 'x509' registered Jan 30 13:50:54.479439 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:50:54.479445 kernel: io scheduler mq-deadline registered Jan 30 13:50:54.479451 kernel: io scheduler kyber registered Jan 30 13:50:54.479457 kernel: io scheduler bfq registered Jan 30 13:50:54.479507 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 30 13:50:54.479562 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 30 13:50:54.479613 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 30 13:50:54.479663 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 30 13:50:54.479713 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 30 13:50:54.479763 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 30 13:50:54.479820 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 13:50:54.479829 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 30 13:50:54.479837 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 13:50:54.479843 kernel: pstore: Using crash dump compression: deflate Jan 30 13:50:54.479849 kernel: pstore: Registered erst as persistent store backend Jan 30 13:50:54.479857 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:50:54.479863 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:50:54.479893 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:50:54.479899 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:50:54.479905 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 30 13:50:54.479988 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 13:50:54.479999 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:50:54.480044 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 13:50:54.480091 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 13:50:54.480136 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T13:50:53 UTC (1738245053) Jan 30 13:50:54.480182 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 13:50:54.480190 kernel: intel_pstate: Intel P-state driver initializing Jan 30 13:50:54.480196 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 13:50:54.480204 kernel: intel_pstate: HWP enabled Jan 30 13:50:54.480210 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:50:54.480216 kernel: Segment Routing with IPv6 Jan 30 13:50:54.480221 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:50:54.480227 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:50:54.480233 kernel: Key type dns_resolver registered Jan 30 13:50:54.480239 kernel: microcode: Microcode Update Driver: v2.2. Jan 30 13:50:54.480244 kernel: IPI shorthand broadcast: enabled Jan 30 13:50:54.480250 kernel: sched_clock: Marking stable (2552001084, 1448461483)->(4563314707, -562852140) Jan 30 13:50:54.480257 kernel: registered taskstats version 1 Jan 30 13:50:54.480263 kernel: Loading compiled-in X.509 certificates Jan 30 13:50:54.480269 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:50:54.480275 kernel: Key type .fscrypt registered Jan 30 13:50:54.480280 kernel: Key type fscrypt-provisioning registered Jan 30 13:50:54.480286 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:50:54.480292 kernel: ima: No architecture policies found Jan 30 13:50:54.480297 kernel: clk: Disabling unused clocks Jan 30 13:50:54.480303 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:50:54.480311 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:50:54.480317 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:50:54.480336 kernel: Run /init as init process Jan 30 13:50:54.480342 kernel: with arguments: Jan 30 13:50:54.480348 kernel: /init Jan 30 13:50:54.480353 kernel: with environment: Jan 30 13:50:54.480359 kernel: HOME=/ Jan 30 13:50:54.480365 kernel: TERM=linux Jan 30 13:50:54.480370 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:50:54.480379 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:50:54.480386 systemd[1]: Detected architecture x86-64. Jan 30 13:50:54.480393 systemd[1]: Running in initrd. Jan 30 13:50:54.480399 systemd[1]: No hostname configured, using default hostname. Jan 30 13:50:54.480405 systemd[1]: Hostname set to . Jan 30 13:50:54.480410 systemd[1]: Initializing machine ID from random generator. Jan 30 13:50:54.480417 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:50:54.480424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:50:54.480430 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:50:54.480436 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 30 13:50:54.480443 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:50:54.480449 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:50:54.480455 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:50:54.480461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:50:54.480469 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:50:54.480475 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:50:54.480481 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:50:54.480488 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:50:54.480494 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:50:54.480500 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:50:54.480506 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:50:54.480512 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:50:54.480519 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:50:54.480525 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:50:54.480531 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:50:54.480538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:50:54.480544 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:50:54.480550 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:50:54.480556 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:50:54.480562 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:50:54.480568 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:50:54.480575 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jan 30 13:50:54.480581 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jan 30 13:50:54.480587 kernel: clocksource: Switched to clocksource tsc Jan 30 13:50:54.480593 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:50:54.480599 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:50:54.480605 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:50:54.480624 systemd-journald[267]: Collecting audit messages is disabled. Jan 30 13:50:54.480640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:50:54.480646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:54.480653 systemd-journald[267]: Journal started Jan 30 13:50:54.480668 systemd-journald[267]: Runtime Journal (/run/log/journal/5060d43eb4284447be4cb1314082cf1e) is 8.0M, max 639.9M, 631.9M free. Jan 30 13:50:54.489562 systemd-modules-load[268]: Inserted module 'overlay' Jan 30 13:50:54.499371 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:50:54.507601 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:50:54.507705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 30 13:50:54.507807 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:50:54.508864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:50:54.509239 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:50:54.512324 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:50:54.513198 systemd-modules-load[268]: Inserted module 'br_netfilter' Jan 30 13:50:54.550888 kernel: Bridge firewalling registered Jan 30 13:50:54.513754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:50:54.648475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:54.659958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:50:54.680974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:50:54.724572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:54.725067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:50:54.725543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:50:54.730916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:50:54.731561 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:50:54.732482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:50:54.751632 systemd-resolved[307]: Positive Trust Anchors: Jan 30 13:50:54.751638 systemd-resolved[307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:50:54.751661 systemd-resolved[307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:50:54.753156 systemd-resolved[307]: Defaulting to hostname 'linux'. Jan 30 13:50:54.753601 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:54.772577 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:50:54.796622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:50:54.871644 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:50:54.949261 dracut-cmdline[310]: dracut-dracut-053 Jan 30 13:50:54.956541 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:50:55.130353 kernel: SCSI subsystem initialized Jan 30 13:50:55.144355 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:50:55.157368 kernel: iscsi: registered transport (tcp) Jan 30 13:50:55.178367 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:50:55.178386 kernel: QLogic iSCSI HBA Driver Jan 30 13:50:55.201402 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:50:55.229631 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:50:55.274379 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:50:55.274423 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:50:55.283254 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:50:55.324381 kernel: raid6: avx2x4 gen() 32619 MB/s Jan 30 13:50:55.345384 kernel: raid6: avx2x2 gen() 44172 MB/s Jan 30 13:50:55.371439 kernel: raid6: avx2x1 gen() 44286 MB/s Jan 30 13:50:55.371457 kernel: raid6: using algorithm avx2x1 gen() 44286 MB/s Jan 30 13:50:55.398567 kernel: raid6: .... xor() 22740 MB/s, rmw enabled Jan 30 13:50:55.398586 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:50:55.419350 kernel: xor: automatically using best checksumming function avx Jan 30 13:50:55.517356 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:50:55.523017 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:50:55.557620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:50:55.565194 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jan 30 13:50:55.567733 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:50:55.604504 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:50:55.663648 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jan 30 13:50:55.735600 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:50:55.769723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:50:55.859414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:50:55.899221 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:50:55.899237 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:50:55.899245 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:50:55.899252 kernel: ACPI: bus type USB registered Jan 30 13:50:55.869474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
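The dracut-cmdline entry above simply echoes the kernel command line that the rest of the initrd keys off (root=LABEL=ROOT, the dm-verity hash for /usr, the Packet OEM id, and so on). Below is a minimal Python sketch of how such a line breaks down into bare flags and key=value options, using an abridged copy of the line above; it is an illustration, not code from the boot chain, and it ignores the quoting the real kernel parser accepts.

    # Abridged from the dracut-cmdline entry above; illustration only.
    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT console=tty0 "
               "console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.autologin "
               "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132")

    flags, options = [], {}
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)   # split once so values like PARTUUID=... stay intact
            options.setdefault(key, []).append(value)
        else:
            flags.append(token)                # bare switches such as flatcar.autologin

    print(flags)                # ['flatcar.autologin']
    print(options["console"])   # ['tty0', 'ttyS1,115200n8'] - console= may repeat; the last one becomes /dev/console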
Jan 30 13:50:55.916335 kernel: usbcore: registered new interface driver usbfs Jan 30 13:50:55.916348 kernel: usbcore: registered new interface driver hub Jan 30 13:50:55.916358 kernel: usbcore: registered new device driver usb Jan 30 13:50:55.923348 kernel: PTP clock support registered Jan 30 13:50:55.923365 kernel: libata version 3.00 loaded. Jan 30 13:50:55.943528 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:50:55.943577 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 13:50:56.159819 kernel: AES CTR mode by8 optimization enabled Jan 30 13:50:56.159833 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 30 13:50:56.159997 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 13:50:56.160140 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 13:50:56.160167 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:50:56.160251 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 30 13:50:56.160259 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 13:50:56.160327 kernel: scsi host0: ahci Jan 30 13:50:56.160412 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 13:50:56.160475 kernel: scsi host1: ahci Jan 30 13:50:56.160535 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:50:56.160595 kernel: scsi host2: ahci Jan 30 13:50:56.160653 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 13:50:56.160713 kernel: scsi host3: ahci Jan 30 13:50:56.160773 kernel: pps pps0: new PPS source ptp0 Jan 30 13:50:56.160836 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 30 13:50:56.160900 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:50:56.160963 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:b6 Jan 30 13:50:56.161025 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 30 13:50:56.161101 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 13:50:56.161161 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 13:50:56.161221 kernel: scsi host4: ahci Jan 30 13:50:56.161280 kernel: hub 1-0:1.0: USB hub found Jan 30 13:50:56.161426 kernel: scsi host5: ahci Jan 30 13:50:56.161484 kernel: hub 1-0:1.0: 16 ports detected Jan 30 13:50:56.161548 kernel: scsi host6: ahci Jan 30 13:50:56.161606 kernel: pps pps1: new PPS source ptp1 Jan 30 13:50:56.161665 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 30 13:50:56.161729 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:50:56.161790 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:b7 Jan 30 13:50:56.161850 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 30 13:50:56.161909 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 30 13:50:56.161968 kernel: hub 2-0:1.0: USB hub found Jan 30 13:50:56.162038 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 30 13:50:56.162046 kernel: hub 2-0:1.0: 10 ports detected Jan 30 13:50:56.162129 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 30 13:50:56.162137 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 30 13:50:56.162212 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 30 13:50:56.162220 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 30 13:50:56.162226 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 30 13:50:56.162233 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 30 13:50:56.162240 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 30 13:50:55.967519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:50:55.967601 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:56.230476 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 30 13:50:56.230571 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Jan 30 13:50:56.768169 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:50:56.768257 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 13:50:56.861707 kernel: hub 1-14:1.0: USB hub found Jan 30 13:50:56.861798 kernel: hub 1-14:1.0: 4 ports detected Jan 30 13:50:56.861870 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.861879 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.861887 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.861894 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.861905 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 30 13:50:56.861976 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:50:56.861985 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 30 13:50:56.862050 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.862058 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:50:56.862065 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 13:50:56.862073 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 13:50:56.862080 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:50:56.862089 kernel: ata1.00: Features: NCQ-prio Jan 30 13:50:56.862096 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:50:56.862103 kernel: ata2.00: Features: NCQ-prio Jan 30 13:50:56.862111 kernel: ata1.00: configured for UDMA/133 Jan 30 13:50:56.862118 kernel: ata2.00: configured for UDMA/133 Jan 30 13:50:56.862125 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 13:50:56.862193 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 13:50:56.862258 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:50:56.862268 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:50:56.862275 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 13:50:56.862345 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 
GiB) Jan 30 13:50:56.862408 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 30 13:50:56.862468 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:50:56.862526 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:50:56.862585 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 30 13:50:56.862642 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 13:50:56.862703 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:50:56.862762 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 13:50:56.862820 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:50:56.862828 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 13:50:56.862886 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:50:56.862894 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:50:56.862952 kernel: GPT:9289727 != 937703087 Jan 30 13:50:56.862962 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:50:56.862969 kernel: GPT:9289727 != 937703087 Jan 30 13:50:56.862976 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:50:56.862983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:50:56.862990 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:50:56.863047 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 30 13:50:56.863105 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:50:56.863113 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 30 13:50:56.863171 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:50:56.863237 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 13:50:56.863346 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Jan 30 13:50:57.381258 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:50:57.381728 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (546) Jan 30 13:50:57.381774 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (567) Jan 30 13:50:57.381810 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:50:57.381845 kernel: usbcore: registered new interface driver usbhid Jan 30 13:50:57.381893 kernel: usbhid: USB HID core driver Jan 30 13:50:57.381928 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 13:50:57.381962 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:50:57.381995 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:50:57.382027 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 30 13:50:57.382416 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 13:50:57.382456 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 13:50:57.382791 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 30 13:50:57.383127 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 30 13:50:57.383472 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:50:56.242523 systemd[1]: 
Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:57.403578 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 30 13:50:56.253403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:50:57.421390 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 30 13:50:56.253498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:56.264393 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:56.280476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:56.297187 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:50:56.300754 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:50:57.473463 disk-uuid[705]: Primary Header is updated. Jan 30 13:50:57.473463 disk-uuid[705]: Secondary Entries is updated. Jan 30 13:50:57.473463 disk-uuid[705]: Secondary Header is updated. Jan 30 13:50:56.318412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:50:56.340407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:50:56.356465 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:50:56.366502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:56.377520 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:50:56.397480 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:56.415203 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:56.776406 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Jan 30 13:50:56.823306 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jan 30 13:50:56.838156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 30 13:50:56.852618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 13:50:56.863390 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 13:50:56.907459 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:50:57.932674 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:50:57.941255 disk-uuid[706]: The operation has completed successfully. Jan 30 13:50:57.949450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:50:57.982635 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:50:57.982689 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:50:58.020638 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:50:58.045408 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:50:58.045475 sh[736]: Success Jan 30 13:50:58.082222 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:50:58.094434 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:50:58.101852 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
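The GPT warnings a little earlier ("Primary header thinks Alt. header is not at the end of the disk", "GPT:9289727 != 937703087") and the disk-uuid messages just above fit together: the GPT on the written image records its backup header at LBA 9289727, i.e. it was laid out for a much smaller disk image, while the Micron SSD's last LBA is 937703087, and disk-uuid rewrites the headers to match the real disk. A small arithmetic sketch, not taken from the log, turning those two LBAs into sizes under the 512-byte sector size reported for sda/sdb:

    # Why the kernel reports "GPT:9289727 != 937703087" above.
    SECTOR = 512                      # 512-byte logical blocks, as reported for sda/sdb

    image_last_lba = 9_289_727        # backup-header LBA recorded in the on-image GPT
    disk_last_lba  = 937_703_087      # last LBA of the 480 GB / 447 GiB SSD

    def gib(lba_count: int) -> float:
        return lba_count * SECTOR / 2**30

    print(f"image size : {gib(image_last_lba + 1):8.2f} GiB")   # ~4.43 GiB
    print(f"disk size  : {gib(disk_last_lba + 1):8.2f} GiB")    # ~447.13 GiB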
Jan 30 13:50:58.133423 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:50:58.133528 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:58.144411 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:50:58.152808 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:50:58.160047 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:50:58.175324 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:50:58.179936 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:50:58.188944 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:50:58.204638 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:50:58.230729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:50:58.295382 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:50:58.295413 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:58.295421 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:50:58.295428 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:50:58.295435 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:50:58.295441 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:50:58.285755 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:50:58.298135 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:50:58.331505 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:50:58.361641 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:50:58.374575 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:50:58.392073 unknown[820]: fetched base config from "system" Jan 30 13:50:58.389861 ignition[820]: Ignition 2.20.0 Jan 30 13:50:58.392077 unknown[820]: fetched user config from "system" Jan 30 13:50:58.389867 ignition[820]: Stage: fetch-offline Jan 30 13:50:58.392928 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:50:58.389887 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:58.395764 systemd-networkd[919]: lo: Link UP Jan 30 13:50:58.389892 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:50:58.395766 systemd-networkd[919]: lo: Gained carrier Jan 30 13:50:58.389947 ignition[820]: parsed url from cmdline: "" Jan 30 13:50:58.398219 systemd-networkd[919]: Enumeration completed Jan 30 13:50:58.389948 ignition[820]: no config URL provided Jan 30 13:50:58.399259 systemd-networkd[919]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:58.389951 ignition[820]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:50:58.411615 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 30 13:50:58.389974 ignition[820]: parsing config with SHA512: 49e899150e074fd51d431425dcc6d51cea42d3e5680d0823f7a8faf53dd55c6ff2caedd83c46493d41825573ef93416fb32772c05e3283c045d412bdfc7ef13f Jan 30 13:50:58.427330 systemd-networkd[919]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:58.392272 ignition[820]: fetch-offline: fetch-offline passed Jan 30 13:50:58.430694 systemd[1]: Reached target network.target - Network. Jan 30 13:50:58.392274 ignition[820]: POST message to Packet Timeline Jan 30 13:50:58.445485 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:50:58.392277 ignition[820]: POST Status error: resource requires networking Jan 30 13:50:58.454504 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:50:58.392316 ignition[820]: Ignition finished successfully Jan 30 13:50:58.455373 systemd-networkd[919]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:58.471225 ignition[931]: Ignition 2.20.0 Jan 30 13:50:58.658611 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 30 13:50:58.654451 systemd-networkd[919]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:58.471240 ignition[931]: Stage: kargs Jan 30 13:50:58.471522 ignition[931]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:58.471540 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:50:58.472899 ignition[931]: kargs: kargs passed Jan 30 13:50:58.472906 ignition[931]: POST message to Packet Timeline Jan 30 13:50:58.472933 ignition[931]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:50:58.473802 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50517->[::1]:53: read: connection refused Jan 30 13:50:58.674894 ignition[931]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 13:50:58.675155 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49704->[::1]:53: read: connection refused Jan 30 13:50:58.960367 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 30 13:50:58.960886 systemd-networkd[919]: eno1: Link UP Jan 30 13:50:58.961062 systemd-networkd[919]: eno2: Link UP Jan 30 13:50:58.961191 systemd-networkd[919]: enp1s0f0np0: Link UP Jan 30 13:50:58.961358 systemd-networkd[919]: enp1s0f0np0: Gained carrier Jan 30 13:50:58.970601 systemd-networkd[919]: enp1s0f1np1: Link UP Jan 30 13:50:59.002507 systemd-networkd[919]: enp1s0f0np0: DHCPv4 address 139.178.70.53/31, gateway 139.178.70.52 acquired from 145.40.83.140 Jan 30 13:50:59.075433 ignition[931]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 13:50:59.076559 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51245->[::1]:53: read: connection refused Jan 30 13:50:59.673125 systemd-networkd[919]: enp1s0f1np1: Gained carrier Jan 30 13:50:59.876943 ignition[931]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 13:50:59.878069 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58221->[::1]:53: read: connection refused Jan 30 13:51:00.056938 systemd-networkd[919]: enp1s0f0np0: Gained IPv6LL Jan 30 13:51:01.479426 
ignition[931]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 13:51:01.480631 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39259->[::1]:53: read: connection refused Jan 30 13:51:01.592925 systemd-networkd[919]: enp1s0f1np1: Gained IPv6LL Jan 30 13:51:04.683363 ignition[931]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 13:51:05.246902 ignition[931]: GET result: OK Jan 30 13:51:05.623192 ignition[931]: Ignition finished successfully Jan 30 13:51:05.628382 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:51:05.659590 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:51:05.665789 ignition[950]: Ignition 2.20.0 Jan 30 13:51:05.665793 ignition[950]: Stage: disks Jan 30 13:51:05.665903 ignition[950]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:05.665910 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:51:05.666449 ignition[950]: disks: disks passed Jan 30 13:51:05.666453 ignition[950]: POST message to Packet Timeline Jan 30 13:51:05.666465 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:51:06.333677 ignition[950]: GET result: OK Jan 30 13:51:06.704528 ignition[950]: Ignition finished successfully Jan 30 13:51:06.707022 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:51:06.723668 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:51:06.742602 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:51:06.763593 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:51:06.785732 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:51:06.805632 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:51:06.834588 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:51:06.870790 systemd-fsck[969]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:51:06.880805 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:51:06.908567 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:51:06.979326 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:51:06.979735 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:51:06.988750 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:51:07.029669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:51:07.082244 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (978) Jan 30 13:51:07.082259 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:51:07.082270 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:51:07.082278 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:51:07.082285 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:51:07.038186 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:51:07.108402 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:51:07.107812 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
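The kargs stage above keeps retrying GET https://metadata.packet.net/metadata: attempts #1-#5 fail because DNS lookups bounce off [::1]:53 while neither the links nor a resolver are up yet, the gaps between attempts grow, and attempt #6 succeeds once enp1s0f0np0 has carrier and a DHCP lease. Below is a minimal Python sketch of that fetch-and-back-off shape; it is not Ignition's code, and the exponential delay is only an assumption suggested by the widening gaps in the timestamps.

    # Minimal retry-until-the-network-exists sketch; not Ignition's implementation.
    import time
    import urllib.request

    def fetch_with_retry(url: str, attempts: int = 10, base_delay: float = 1.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as err:                 # covers DNS failures, refused connections, timeouts
                print(f"GET {url}: attempt #{attempt} failed: {err}")
                time.sleep(base_delay * 2 ** (attempt - 1))   # back off, as the growing gaps in the log suggest
        raise RuntimeError(f"giving up on {url} after {attempts} attempts")

    # metadata = fetch_with_retry("https://metadata.packet.net/metadata")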
Jan 30 13:51:07.119978 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 13:51:07.141657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:51:07.141704 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:51:07.200437 coreos-metadata[996]: Jan 30 13:51:07.195 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:51:07.221510 coreos-metadata[995]: Jan 30 13:51:07.195 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:51:07.164268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:51:07.189500 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:51:07.218437 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:51:07.271450 initrd-setup-root[1010]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:51:07.281426 initrd-setup-root[1017]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:51:07.291435 initrd-setup-root[1024]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:51:07.301445 initrd-setup-root[1031]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:51:07.305082 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:51:07.343581 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:51:07.369528 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:51:07.360893 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:51:07.378244 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:51:07.397107 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:51:07.400407 ignition[1098]: INFO : Ignition 2.20.0 Jan 30 13:51:07.400407 ignition[1098]: INFO : Stage: mount Jan 30 13:51:07.426494 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:07.426494 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:51:07.426494 ignition[1098]: INFO : mount: mount passed Jan 30 13:51:07.426494 ignition[1098]: INFO : POST message to Packet Timeline Jan 30 13:51:07.426494 ignition[1098]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:51:07.722861 coreos-metadata[995]: Jan 30 13:51:07.722 INFO Fetch successful Jan 30 13:51:07.755601 coreos-metadata[995]: Jan 30 13:51:07.755 INFO wrote hostname ci-4186.1.0-a-f55746354a to /sysroot/etc/hostname Jan 30 13:51:07.756797 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:51:07.791454 coreos-metadata[996]: Jan 30 13:51:07.788 INFO Fetch successful Jan 30 13:51:07.825671 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 13:51:07.825729 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 13:51:08.017126 ignition[1098]: INFO : GET result: OK Jan 30 13:51:08.351470 ignition[1098]: INFO : Ignition finished successfully Jan 30 13:51:08.354539 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:51:08.390717 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:51:08.402221 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 13:51:08.436362 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1119) Jan 30 13:51:08.453773 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:51:08.453789 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:51:08.459665 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:51:08.475057 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:51:08.475088 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:51:08.477021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:51:08.500907 ignition[1136]: INFO : Ignition 2.20.0 Jan 30 13:51:08.500907 ignition[1136]: INFO : Stage: files Jan 30 13:51:08.515558 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:08.515558 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:51:08.515558 ignition[1136]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:51:08.515558 ignition[1136]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:51:08.515558 ignition[1136]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:51:08.505075 unknown[1136]: wrote ssh authorized keys file for user: core Jan 30 13:51:08.650396 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:51:08.672262 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:51:08.672262 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:51:09.149537 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:51:09.259191 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:09.259191 ignition[1136]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: files passed Jan 30 13:51:09.290556 ignition[1136]: INFO : POST message to Packet Timeline Jan 30 13:51:09.290556 ignition[1136]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:51:09.904703 ignition[1136]: INFO : GET result: OK Jan 30 13:51:10.248361 ignition[1136]: INFO : Ignition finished successfully Jan 30 13:51:10.249467 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:51:10.277668 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:51:10.287977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:51:10.297745 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:51:10.297804 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 13:51:10.356402 initrd-setup-root-after-ignition[1175]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:10.356402 initrd-setup-root-after-ignition[1175]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:10.395622 initrd-setup-root-after-ignition[1179]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:10.360927 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:10.371642 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:51:10.420556 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:51:10.481942 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:51:10.481994 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:51:10.500711 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:51:10.512562 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:51:10.539628 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:51:10.560734 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:51:10.624708 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:10.656741 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:51:10.687093 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:10.699946 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:10.721021 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:51:10.738968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:51:10.739385 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:10.767092 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:51:10.788934 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:51:10.806941 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:51:10.825934 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:51:10.847930 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:51:10.868941 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:51:10.888935 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:51:10.910078 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:51:10.931955 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:51:10.951934 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:51:10.969824 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:51:10.970218 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:51:11.005795 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:11.015958 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:11.036810 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
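The files stage logged above (the helm tarball, the /home/core manifests, /etc/flatcar/update.conf, the kubernetes sysext link and image, the core user's ssh keys, and the prepare-helm.service unit with its preset) is driven by an Ignition config that never appears in the log. The sketch below is a rough guess at the shape such a config could take, written as a Python dict for readability; the field names follow the public Ignition v3 spec as best I recall, every value is a placeholder, and the machine's real config may differ.

    # Illustrative only: an Ignition v3-style config that would roughly match the files stage above.
    ignition_config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            # matches "creating or modifying user 'core'" / "adding ssh keys to user 'core'"
            "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}],
        },
        "storage": {
            "files": [
                # one entry per createFiles op above; only two shown here
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,"}},   # real contents not visible in the log
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
            ],
        },
        "systemd": {
            # "setting preset to enabled" corresponds to enabled: true here
            "units": [{"name": "prepare-helm.service", "enabled": True,
                       "contents": "[Unit]\nDescription=placeholder\n"}],
        },
    }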
Jan 30 13:51:11.037256 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:11.059820 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:51:11.060216 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:51:11.091925 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:51:11.092404 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:51:11.112128 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:51:11.129768 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:51:11.130210 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:11.151953 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:51:11.169931 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:51:11.188909 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:51:11.189208 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:51:11.208952 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:51:11.209243 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:51:11.232050 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:51:11.232470 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:11.252018 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:51:11.369500 ignition[1199]: INFO : Ignition 2.20.0 Jan 30 13:51:11.369500 ignition[1199]: INFO : Stage: umount Jan 30 13:51:11.369500 ignition[1199]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:11.369500 ignition[1199]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:51:11.369500 ignition[1199]: INFO : umount: umount passed Jan 30 13:51:11.369500 ignition[1199]: INFO : POST message to Packet Timeline Jan 30 13:51:11.369500 ignition[1199]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:51:11.252415 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:51:11.270020 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:51:11.270432 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:51:11.299535 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:51:11.321068 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:51:11.330583 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:51:11.330695 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:11.360627 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:51:11.360718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:51:11.399200 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:51:11.401885 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:51:11.402014 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:51:11.502526 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:51:11.502799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 30 13:51:11.832779 ignition[1199]: INFO : GET result: OK Jan 30 13:51:12.244718 ignition[1199]: INFO : Ignition finished successfully Jan 30 13:51:12.247837 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:51:12.248116 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:51:12.264626 systemd[1]: Stopped target network.target - Network. Jan 30 13:51:12.279565 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:51:12.279738 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:51:12.297719 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:51:12.297886 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:51:12.315740 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:51:12.315895 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:51:12.334729 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:51:12.334898 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:51:12.353718 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:51:12.353884 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:51:12.373114 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:51:12.386489 systemd-networkd[919]: enp1s0f0np0: DHCPv6 lease lost Jan 30 13:51:12.391807 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:51:12.401564 systemd-networkd[919]: enp1s0f1np1: DHCPv6 lease lost Jan 30 13:51:12.410460 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:51:12.410749 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:51:12.429830 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:51:12.430156 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:51:12.449987 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:51:12.450109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:12.486436 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:51:12.509528 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:51:12.509686 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:51:12.528701 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:51:12.528787 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:12.548795 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:51:12.548957 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:12.566805 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:51:12.566973 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:12.575186 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:12.606587 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:51:12.606969 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:12.638017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:51:12.638047 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 30 13:51:12.658640 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:51:12.658664 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:12.686648 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:51:12.686733 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:51:12.717871 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:51:12.718037 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:51:12.745811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:51:12.745962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:51:12.800421 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:51:12.810488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:51:13.042545 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Jan 30 13:51:12.810517 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:12.827644 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:51:12.827675 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:51:12.859573 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:51:12.859649 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:12.880701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:51:12.880842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:12.902653 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:51:12.902975 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:51:12.923191 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:51:12.923460 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:51:12.944429 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:51:12.973624 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:51:12.996699 systemd[1]: Switching root. 
Jan 30 13:51:13.153491 systemd-journald[267]: Journal stopped
13:50:54.464109 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Jan 30 13:50:54.464115 kernel: Movable zone start for each node Jan 30 13:50:54.464122 kernel: Early memory node ranges Jan 30 13:50:54.464128 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jan 30 13:50:54.464135 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jan 30 13:50:54.464141 kernel: node 0: [mem 0x0000000040400000-0x00000000819ccfff] Jan 30 13:50:54.464149 kernel: node 0: [mem 0x00000000819cf000-0x000000008afccfff] Jan 30 13:50:54.464155 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jan 30 13:50:54.464162 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jan 30 13:50:54.464173 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jan 30 13:50:54.464182 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jan 30 13:50:54.464188 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:50:54.464196 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jan 30 13:50:54.464204 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 30 13:50:54.464211 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jan 30 13:50:54.464218 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jan 30 13:50:54.464225 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jan 30 13:50:54.464232 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jan 30 13:50:54.464239 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jan 30 13:50:54.464246 kernel: ACPI: PM-Timer IO Port: 0x1808 Jan 30 13:50:54.464253 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 30 13:50:54.464260 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 30 13:50:54.464268 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 30 13:50:54.464275 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 30 13:50:54.464282 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 30 13:50:54.464289 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 30 13:50:54.464296 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 30 13:50:54.464303 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 30 13:50:54.464310 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 30 13:50:54.464320 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 30 13:50:54.464327 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 30 13:50:54.464334 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 30 13:50:54.464342 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 30 13:50:54.464349 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 30 13:50:54.464356 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 30 13:50:54.464363 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 30 13:50:54.464370 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jan 30 13:50:54.464377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:50:54.464384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:50:54.464391 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:50:54.464398 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:50:54.464406 kernel: TSC deadline timer available Jan 30 13:50:54.464413 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Jan 30 13:50:54.464420 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Jan 30 13:50:54.464427 kernel: Booting paravirtualized kernel on bare hardware Jan 30 13:50:54.464434 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:50:54.464442 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 30 13:50:54.464449 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 30 13:50:54.464456 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 30 13:50:54.464463 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 30 13:50:54.464472 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:50:54.464479 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:50:54.464486 kernel: random: crng init done Jan 30 13:50:54.464493 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jan 30 13:50:54.464500 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 30 13:50:54.464507 kernel: Fallback order for Node 0: 0 Jan 30 13:50:54.464514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Jan 30 13:50:54.464521 kernel: Policy zone: Normal Jan 30 13:50:54.464529 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:50:54.464536 kernel: software IO TLB: area num 16. Jan 30 13:50:54.464544 kernel: Memory: 32718256K/33452980K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 734464K reserved, 0K cma-reserved) Jan 30 13:50:54.464551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 30 13:50:54.464558 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 13:50:54.464565 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:50:54.464572 kernel: Dynamic Preempt: voluntary Jan 30 13:50:54.464579 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:50:54.464587 kernel: rcu: RCU event tracing is enabled. Jan 30 13:50:54.464595 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 30 13:50:54.464602 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:50:54.464610 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:50:54.464616 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:50:54.464623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:50:54.464630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 30 13:50:54.464637 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jan 30 13:50:54.464644 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:50:54.464651 kernel: Console: colour VGA+ 80x25 Jan 30 13:50:54.464659 kernel: printk: console [tty0] enabled Jan 30 13:50:54.464666 kernel: printk: console [ttyS1] enabled Jan 30 13:50:54.464674 kernel: ACPI: Core revision 20230628 Jan 30 13:50:54.464681 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
Jan 30 13:50:54.464688 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:50:54.464695 kernel: DMAR: Host address width 39 Jan 30 13:50:54.464702 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jan 30 13:50:54.464709 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jan 30 13:50:54.464716 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jan 30 13:50:54.464725 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jan 30 13:50:54.464732 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jan 30 13:50:54.464739 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jan 30 13:50:54.464746 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jan 30 13:50:54.464753 kernel: x2apic enabled Jan 30 13:50:54.464760 kernel: APIC: Switched APIC routing to: cluster x2apic Jan 30 13:50:54.464767 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jan 30 13:50:54.464774 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jan 30 13:50:54.464781 kernel: CPU0: Thermal monitoring enabled (TM1) Jan 30 13:50:54.464790 kernel: process: using mwait in idle threads Jan 30 13:50:54.464796 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:50:54.464803 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:50:54.464810 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:50:54.464817 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:50:54.464824 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:50:54.464831 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 30 13:50:54.464838 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:50:54.464845 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 30 13:50:54.464852 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 30 13:50:54.464859 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:50:54.464867 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:50:54.464874 kernel: TAA: Mitigation: TSX disabled Jan 30 13:50:54.464881 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jan 30 13:50:54.464888 kernel: SRBDS: Mitigation: Microcode Jan 30 13:50:54.464895 kernel: GDS: Mitigation: Microcode Jan 30 13:50:54.464902 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:50:54.464909 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:50:54.464916 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:50:54.464922 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 30 13:50:54.464929 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 30 13:50:54.464936 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:50:54.464944 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 30 13:50:54.464951 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 30 13:50:54.464958 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
Jan 30 13:50:54.464965 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:50:54.464972 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:50:54.464979 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:50:54.464986 kernel: landlock: Up and running. Jan 30 13:50:54.464993 kernel: SELinux: Initializing. Jan 30 13:50:54.465000 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:50:54.465007 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:50:54.465014 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 30 13:50:54.465021 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:50:54.465029 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:50:54.465036 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 13:50:54.465043 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jan 30 13:50:54.465050 kernel: ... version: 4 Jan 30 13:50:54.465057 kernel: ... bit width: 48 Jan 30 13:50:54.465065 kernel: ... generic registers: 4 Jan 30 13:50:54.465072 kernel: ... value mask: 0000ffffffffffff Jan 30 13:50:54.465079 kernel: ... max period: 00007fffffffffff Jan 30 13:50:54.465086 kernel: ... fixed-purpose events: 3 Jan 30 13:50:54.465094 kernel: ... event mask: 000000070000000f Jan 30 13:50:54.465101 kernel: signal: max sigframe size: 2032 Jan 30 13:50:54.465108 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jan 30 13:50:54.465115 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:50:54.465122 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:50:54.465129 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jan 30 13:50:54.465136 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:50:54.465143 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:50:54.465150 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jan 30 13:50:54.465159 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:50:54.465166 kernel: smp: Brought up 1 node, 16 CPUs Jan 30 13:50:54.465173 kernel: smpboot: Max logical packages: 1 Jan 30 13:50:54.465180 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jan 30 13:50:54.465187 kernel: devtmpfs: initialized Jan 30 13:50:54.465194 kernel: x86/mm: Memory block size: 128MB Jan 30 13:50:54.465201 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cd000-0x819cdfff] (4096 bytes) Jan 30 13:50:54.465208 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jan 30 13:50:54.465217 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:50:54.465224 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 30 13:50:54.465231 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:50:54.465238 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:50:54.465245 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:50:54.465252 kernel: audit: type=2000 audit(1738245049.042:1): state=initialized audit_enabled=0 res=1 Jan 30 13:50:54.465259 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:50:54.465266 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:50:54.465273 kernel: cpuidle: using governor menu Jan 30 13:50:54.465282 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:50:54.465289 kernel: dca service started, version 1.12.1 Jan 30 13:50:54.465296 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 30 13:50:54.465303 kernel: PCI: Using configuration type 1 for base access Jan 30 13:50:54.465310 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jan 30 13:50:54.465317 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:50:54.465326 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:50:54.465333 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:50:54.465340 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:50:54.465348 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:50:54.465355 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:50:54.465362 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:50:54.465369 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:50:54.465376 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:50:54.465383 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jan 30 13:50:54.465390 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465397 kernel: ACPI: SSDT 0xFFFF8BE801EC1800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jan 30 13:50:54.465404 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465413 kernel: ACPI: SSDT 0xFFFF8BE801EBD800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jan 30 13:50:54.465420 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465427 kernel: ACPI: SSDT 0xFFFF8BE801569400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jan 30 13:50:54.465434 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465441 kernel: ACPI: SSDT 0xFFFF8BE801EB8800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jan 30 13:50:54.465448 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465454 kernel: ACPI: SSDT 0xFFFF8BE801ECF000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jan 30 13:50:54.465461 kernel: ACPI: Dynamic OEM Table Load: Jan 30 13:50:54.465469 kernel: ACPI: SSDT 0xFFFF8BE800E39400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jan 30 13:50:54.465476 kernel: ACPI: _OSC evaluated successfully for all CPUs Jan 30 13:50:54.465484 kernel: ACPI: Interpreter enabled Jan 30 13:50:54.465491 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:50:54.465498 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:50:54.465505 kernel: HEST: Enabling Firmware First mode for corrected errors. Jan 30 13:50:54.465512 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jan 30 13:50:54.465519 kernel: HEST: Table parsing has been initialized. Jan 30 13:50:54.465526 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jan 30 13:50:54.465533 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:50:54.465540 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:50:54.465549 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jan 30 13:50:54.465556 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jan 30 13:50:54.465563 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jan 30 13:50:54.465570 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jan 30 13:50:54.465577 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jan 30 13:50:54.465584 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jan 30 13:50:54.465592 kernel: ACPI: \_TZ_.FN00: New power resource Jan 30 13:50:54.465599 kernel: ACPI: \_TZ_.FN01: New power resource Jan 30 13:50:54.465606 kernel: ACPI: \_TZ_.FN02: New power resource Jan 30 13:50:54.465614 kernel: ACPI: \_TZ_.FN03: New power resource Jan 30 13:50:54.465621 kernel: ACPI: \_TZ_.FN04: New power resource Jan 30 13:50:54.465628 kernel: ACPI: \PIN_: New power resource Jan 30 13:50:54.465635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jan 30 13:50:54.465729 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:50:54.465799 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jan 30 13:50:54.465865 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jan 30 13:50:54.465877 kernel: PCI host bridge to bus 0000:00 Jan 30 13:50:54.465941 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:50:54.465998 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:50:54.466053 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:50:54.466107 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jan 30 13:50:54.466163 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jan 30 13:50:54.466218 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jan 30 13:50:54.466296 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Jan 30 13:50:54.466377 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Jan 30 13:50:54.466444 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.466512 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Jan 30 13:50:54.466577 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Jan 30 13:50:54.466644 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Jan 30 13:50:54.466713 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Jan 30 13:50:54.466782 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Jan 30 13:50:54.466846 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Jan 30 13:50:54.466908 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jan 30 13:50:54.466975 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Jan 30 13:50:54.467038 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Jan 30 13:50:54.467105 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Jan 30 13:50:54.467172 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Jan 30 13:50:54.467236 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:50:54.467306 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Jan 30 13:50:54.467374 kernel: pci 0000:00:15.1: 
reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:50:54.467442 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Jan 30 13:50:54.467508 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Jan 30 13:50:54.467574 kernel: pci 0000:00:16.0: PME# supported from D3hot Jan 30 13:50:54.467650 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Jan 30 13:50:54.467718 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Jan 30 13:50:54.467782 kernel: pci 0000:00:16.1: PME# supported from D3hot Jan 30 13:50:54.467852 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Jan 30 13:50:54.467915 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Jan 30 13:50:54.467981 kernel: pci 0000:00:16.4: PME# supported from D3hot Jan 30 13:50:54.468048 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Jan 30 13:50:54.468113 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Jan 30 13:50:54.468175 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Jan 30 13:50:54.468238 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Jan 30 13:50:54.468301 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Jan 30 13:50:54.468367 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Jan 30 13:50:54.468435 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Jan 30 13:50:54.468499 kernel: pci 0000:00:17.0: PME# supported from D3hot Jan 30 13:50:54.468568 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Jan 30 13:50:54.468632 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.468705 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Jan 30 13:50:54.468772 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.468842 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Jan 30 13:50:54.468906 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.468975 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Jan 30 13:50:54.469038 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.469110 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Jan 30 13:50:54.469175 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.469244 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Jan 30 13:50:54.469308 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Jan 30 13:50:54.469380 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Jan 30 13:50:54.469449 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Jan 30 13:50:54.469516 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Jan 30 13:50:54.469581 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Jan 30 13:50:54.469651 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Jan 30 13:50:54.469715 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Jan 30 13:50:54.469787 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Jan 30 13:50:54.469853 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Jan 30 13:50:54.469918 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Jan 30 13:50:54.469986 kernel: pci 0000:01:00.0: PME# supported from D3cold Jan 30 13:50:54.470051 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:50:54.470120 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains 
BAR0 for 8 VFs) Jan 30 13:50:54.470193 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Jan 30 13:50:54.470339 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jan 30 13:50:54.470406 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Jan 30 13:50:54.470471 kernel: pci 0000:01:00.1: PME# supported from D3cold Jan 30 13:50:54.470540 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jan 30 13:50:54.470604 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jan 30 13:50:54.470670 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:50:54.470733 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 13:50:54.470798 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:50:54.470863 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 13:50:54.470932 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jan 30 13:50:54.471002 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:50:54.471067 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jan 30 13:50:54.471132 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jan 30 13:50:54.471196 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jan 30 13:50:54.471262 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.471329 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 13:50:54.471394 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:50:54.471532 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 13:50:54.471602 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jan 30 13:50:54.471669 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jan 30 13:50:54.471733 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jan 30 13:50:54.471799 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jan 30 13:50:54.471863 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jan 30 13:50:54.471928 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:50:54.471991 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 13:50:54.472059 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 13:50:54.472121 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 13:50:54.472185 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 13:50:54.472256 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jan 30 13:50:54.472328 kernel: pci 0000:06:00.0: enabling Extended Tags Jan 30 13:50:54.472396 kernel: pci 0000:06:00.0: supports D1 D2 Jan 30 13:50:54.472460 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:50:54.472528 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 13:50:54.472591 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 13:50:54.472655 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:50:54.472727 kernel: pci_bus 0000:07: extended config space not accessible Jan 30 13:50:54.472801 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jan 30 13:50:54.472940 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jan 30 13:50:54.473009 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jan 30 13:50:54.473081 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jan 30 13:50:54.473148 kernel: pci 0000:07:00.0: Video device 
with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:50:54.473216 kernel: pci 0000:07:00.0: supports D1 D2 Jan 30 13:50:54.473284 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:50:54.473353 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 13:50:54.473419 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:50:54.473485 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:50:54.473495 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jan 30 13:50:54.473506 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jan 30 13:50:54.473513 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jan 30 13:50:54.473521 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jan 30 13:50:54.473528 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jan 30 13:50:54.473536 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jan 30 13:50:54.473543 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jan 30 13:50:54.473551 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jan 30 13:50:54.473558 kernel: iommu: Default domain type: Translated Jan 30 13:50:54.473566 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:50:54.473575 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:50:54.473582 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:50:54.473590 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jan 30 13:50:54.473597 kernel: e820: reserve RAM buffer [mem 0x819cd000-0x83ffffff] Jan 30 13:50:54.473605 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jan 30 13:50:54.473612 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jan 30 13:50:54.473619 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jan 30 13:50:54.473626 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jan 30 13:50:54.473693 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jan 30 13:50:54.473765 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jan 30 13:50:54.473834 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:50:54.473845 kernel: vgaarb: loaded Jan 30 13:50:54.473853 kernel: clocksource: Switched to clocksource tsc-early Jan 30 13:50:54.473860 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:50:54.473868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:50:54.473876 kernel: pnp: PnP ACPI init Jan 30 13:50:54.473939 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jan 30 13:50:54.474004 kernel: pnp 00:02: [dma 0 disabled] Jan 30 13:50:54.474067 kernel: pnp 00:03: [dma 0 disabled] Jan 30 13:50:54.474132 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jan 30 13:50:54.474190 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jan 30 13:50:54.474324 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jan 30 13:50:54.474387 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jan 30 13:50:54.474448 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jan 30 13:50:54.474506 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jan 30 13:50:54.474562 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jan 30 13:50:54.474625 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jan 30 13:50:54.474695 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jan 30 13:50:54.474747 kernel: 
system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jan 30 13:50:54.474800 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jan 30 13:50:54.474859 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jan 30 13:50:54.474913 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jan 30 13:50:54.474964 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jan 30 13:50:54.475015 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jan 30 13:50:54.475066 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jan 30 13:50:54.475117 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jan 30 13:50:54.475168 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jan 30 13:50:54.475228 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jan 30 13:50:54.475238 kernel: pnp: PnP ACPI: found 10 devices Jan 30 13:50:54.475246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:50:54.475253 kernel: NET: Registered PF_INET protocol family Jan 30 13:50:54.475259 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:50:54.475266 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jan 30 13:50:54.475273 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:50:54.475280 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:50:54.475289 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:50:54.475296 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jan 30 13:50:54.475302 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:50:54.475309 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:50:54.475316 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:50:54.475375 kernel: NET: Registered PF_XDP protocol family Jan 30 13:50:54.475456 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jan 30 13:50:54.475528 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jan 30 13:50:54.475588 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jan 30 13:50:54.475739 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:50:54.475804 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:50:54.475854 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jan 30 13:50:54.475903 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jan 30 13:50:54.475953 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:50:54.476002 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jan 30 13:50:54.476050 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:50:54.476099 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jan 30 13:50:54.476150 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jan 30 13:50:54.476198 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jan 30 13:50:54.476247 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jan 30 13:50:54.476296 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jan 30 13:50:54.476374 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jan 30 
13:50:54.476439 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jan 30 13:50:54.476488 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jan 30 13:50:54.476538 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jan 30 13:50:54.476588 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jan 30 13:50:54.476636 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:50:54.476685 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jan 30 13:50:54.476734 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jan 30 13:50:54.476783 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jan 30 13:50:54.476830 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jan 30 13:50:54.476876 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:50:54.476919 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:50:54.476963 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:50:54.477019 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jan 30 13:50:54.477076 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jan 30 13:50:54.477125 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jan 30 13:50:54.477171 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jan 30 13:50:54.477222 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jan 30 13:50:54.477267 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jan 30 13:50:54.477315 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 30 13:50:54.477399 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jan 30 13:50:54.477450 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jan 30 13:50:54.477494 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jan 30 13:50:54.477544 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jan 30 13:50:54.477590 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jan 30 13:50:54.477599 kernel: PCI: CLS 64 bytes, default 64 Jan 30 13:50:54.477605 kernel: DMAR: No ATSR found Jan 30 13:50:54.477610 kernel: DMAR: No SATC found Jan 30 13:50:54.477616 kernel: DMAR: dmar0: Using Queued invalidation Jan 30 13:50:54.477665 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jan 30 13:50:54.477714 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jan 30 13:50:54.477766 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jan 30 13:50:54.477815 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jan 30 13:50:54.477863 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jan 30 13:50:54.477911 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jan 30 13:50:54.477959 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jan 30 13:50:54.478006 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jan 30 13:50:54.478055 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jan 30 13:50:54.478103 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jan 30 13:50:54.478155 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jan 30 13:50:54.478204 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jan 30 13:50:54.478252 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jan 30 13:50:54.478301 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jan 30 13:50:54.478373 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jan 30 13:50:54.478436 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jan 30 13:50:54.478484 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jan 30 
13:50:54.478532 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jan 30 13:50:54.478582 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jan 30 13:50:54.478631 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jan 30 13:50:54.478682 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jan 30 13:50:54.478813 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jan 30 13:50:54.478949 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jan 30 13:50:54.479091 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jan 30 13:50:54.479181 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jan 30 13:50:54.479231 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jan 30 13:50:54.479286 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jan 30 13:50:54.479295 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jan 30 13:50:54.479301 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:50:54.479307 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jan 30 13:50:54.479313 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jan 30 13:50:54.479332 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jan 30 13:50:54.479339 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jan 30 13:50:54.479345 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jan 30 13:50:54.479404 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jan 30 13:50:54.479416 kernel: Initialise system trusted keyrings Jan 30 13:50:54.479422 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jan 30 13:50:54.479428 kernel: Key type asymmetric registered Jan 30 13:50:54.479434 kernel: Asymmetric key parser 'x509' registered Jan 30 13:50:54.479439 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:50:54.479445 kernel: io scheduler mq-deadline registered Jan 30 13:50:54.479451 kernel: io scheduler kyber registered Jan 30 13:50:54.479457 kernel: io scheduler bfq registered Jan 30 13:50:54.479507 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jan 30 13:50:54.479562 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jan 30 13:50:54.479613 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jan 30 13:50:54.479663 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jan 30 13:50:54.479713 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jan 30 13:50:54.479763 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jan 30 13:50:54.479820 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jan 30 13:50:54.479829 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jan 30 13:50:54.479837 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jan 30 13:50:54.479843 kernel: pstore: Using crash dump compression: deflate Jan 30 13:50:54.479849 kernel: pstore: Registered erst as persistent store backend Jan 30 13:50:54.479857 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:50:54.479863 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:50:54.479893 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:50:54.479899 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:50:54.479905 kernel: hpet_acpi_add: no address or irqs in _CRS Jan 30 13:50:54.479988 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jan 30 13:50:54.479999 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:50:54.480044 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jan 30 13:50:54.480091 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jan 30 13:50:54.480136 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-01-30T13:50:53 UTC (1738245053) Jan 30 13:50:54.480182 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jan 30 13:50:54.480190 kernel: intel_pstate: Intel P-state driver initializing Jan 30 13:50:54.480196 kernel: intel_pstate: Disabling energy efficiency optimization Jan 30 13:50:54.480204 kernel: intel_pstate: HWP enabled Jan 30 13:50:54.480210 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:50:54.480216 kernel: Segment Routing with IPv6 Jan 30 13:50:54.480221 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:50:54.480227 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:50:54.480233 kernel: Key type dns_resolver registered Jan 30 13:50:54.480239 kernel: microcode: Microcode Update Driver: v2.2. Jan 30 13:50:54.480244 kernel: IPI shorthand broadcast: enabled Jan 30 13:50:54.480250 kernel: sched_clock: Marking stable (2552001084, 1448461483)->(4563314707, -562852140) Jan 30 13:50:54.480257 kernel: registered taskstats version 1 Jan 30 13:50:54.480263 kernel: Loading compiled-in X.509 certificates Jan 30 13:50:54.480269 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:50:54.480275 kernel: Key type .fscrypt registered Jan 30 13:50:54.480280 kernel: Key type fscrypt-provisioning registered Jan 30 13:50:54.480286 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:50:54.480292 kernel: ima: No architecture policies found Jan 30 13:50:54.480297 kernel: clk: Disabling unused clocks Jan 30 13:50:54.480303 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:50:54.480311 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:50:54.480317 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:50:54.480336 kernel: Run /init as init process Jan 30 13:50:54.480342 kernel: with arguments: Jan 30 13:50:54.480348 kernel: /init Jan 30 13:50:54.480353 kernel: with environment: Jan 30 13:50:54.480359 kernel: HOME=/ Jan 30 13:50:54.480365 kernel: TERM=linux Jan 30 13:50:54.480370 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:50:54.480379 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:50:54.480386 systemd[1]: Detected architecture x86-64. Jan 30 13:50:54.480393 systemd[1]: Running in initrd. Jan 30 13:50:54.480399 systemd[1]: No hostname configured, using default hostname. Jan 30 13:50:54.480405 systemd[1]: Hostname set to . Jan 30 13:50:54.480410 systemd[1]: Initializing machine ID from random generator. Jan 30 13:50:54.480417 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:50:54.480424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:50:54.480430 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:50:54.480436 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 30 13:50:54.480443 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:50:54.480449 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:50:54.480455 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:50:54.480461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:50:54.480469 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:50:54.480475 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:50:54.480481 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:50:54.480488 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:50:54.480494 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:50:54.480500 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:50:54.480506 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:50:54.480512 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:50:54.480519 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:50:54.480525 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:50:54.480531 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:50:54.480538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:50:54.480544 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:50:54.480550 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:50:54.480556 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:50:54.480562 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:50:54.480568 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:50:54.480575 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jan 30 13:50:54.480581 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jan 30 13:50:54.480587 kernel: clocksource: Switched to clocksource tsc Jan 30 13:50:54.480593 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:50:54.480599 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:50:54.480605 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:50:54.480624 systemd-journald[267]: Collecting audit messages is disabled. Jan 30 13:50:54.480640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:50:54.480646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:54.480653 systemd-journald[267]: Journal started Jan 30 13:50:54.480668 systemd-journald[267]: Runtime Journal (/run/log/journal/5060d43eb4284447be4cb1314082cf1e) is 8.0M, max 639.9M, 631.9M free. Jan 30 13:50:54.489562 systemd-modules-load[268]: Inserted module 'overlay' Jan 30 13:50:54.499371 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:50:54.507601 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:50:54.507705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 30 13:50:54.507807 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:50:54.508864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:50:54.509239 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:50:54.512324 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:50:54.513198 systemd-modules-load[268]: Inserted module 'br_netfilter' Jan 30 13:50:54.550888 kernel: Bridge firewalling registered Jan 30 13:50:54.513754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:50:54.648475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:54.659958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:50:54.680974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:50:54.724572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:54.725067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:50:54.725543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:50:54.730916 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:50:54.731561 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:50:54.732482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:50:54.751632 systemd-resolved[307]: Positive Trust Anchors: Jan 30 13:50:54.751638 systemd-resolved[307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:50:54.751661 systemd-resolved[307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:50:54.753156 systemd-resolved[307]: Defaulting to hostname 'linux'. Jan 30 13:50:54.753601 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:54.772577 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:50:54.796622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:50:54.871644 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:50:54.949261 dracut-cmdline[310]: dracut-dracut-053 Jan 30 13:50:54.956541 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:50:55.130353 kernel: SCSI subsystem initialized Jan 30 13:50:55.144355 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:50:55.157368 kernel: iscsi: registered transport (tcp) Jan 30 13:50:55.178367 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:50:55.178386 kernel: QLogic iSCSI HBA Driver Jan 30 13:50:55.201402 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:50:55.229631 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:50:55.274379 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:50:55.274423 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:50:55.283254 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:50:55.324381 kernel: raid6: avx2x4 gen() 32619 MB/s Jan 30 13:50:55.345384 kernel: raid6: avx2x2 gen() 44172 MB/s Jan 30 13:50:55.371439 kernel: raid6: avx2x1 gen() 44286 MB/s Jan 30 13:50:55.371457 kernel: raid6: using algorithm avx2x1 gen() 44286 MB/s Jan 30 13:50:55.398567 kernel: raid6: .... xor() 22740 MB/s, rmw enabled Jan 30 13:50:55.398586 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:50:55.419350 kernel: xor: automatically using best checksumming function avx Jan 30 13:50:55.517356 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:50:55.523017 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:50:55.557620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:50:55.565194 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jan 30 13:50:55.567733 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:50:55.604504 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:50:55.663648 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jan 30 13:50:55.735600 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:50:55.769723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:50:55.859414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:50:55.899221 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:50:55.899237 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:50:55.899245 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:50:55.899252 kernel: ACPI: bus type USB registered Jan 30 13:50:55.869474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
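[editor's note] The dracut-cmdline entries above echo the full kernel command line, including mount.usr, verity.usrhash, and the flatcar.* flags the initrd acts on. A minimal sketch (Python, illustrative only, not part of dracut or Flatcar) of how such key=value parameters can be split out of /proc/cmdline.

```python
# Illustrative sketch: split a kernel command line (as echoed by dracut above)
# into key=value parameters and bare flags. Repeated keys such as console=
# keep only the last value in this simple version.
from typing import Dict, List, Tuple

def parse_cmdline(cmdline: str) -> Tuple[Dict[str, str], List[str]]:
    """Return (key=value parameters, bare flags) from a kernel command line."""
    params: Dict[str, str] = {}
    flags: List[str] = []
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value
        else:
            flags.append(token)
    return params, flags

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        params, flags = parse_cmdline(f.read())
    # e.g. params.get("verity.usrhash"), params.get("mount.usr"),
    #      "flatcar.autologin" in flags
    print(params.get("verity.usrhash"), flags)
```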
Jan 30 13:50:55.916335 kernel: usbcore: registered new interface driver usbfs Jan 30 13:50:55.916348 kernel: usbcore: registered new interface driver hub Jan 30 13:50:55.916358 kernel: usbcore: registered new device driver usb Jan 30 13:50:55.923348 kernel: PTP clock support registered Jan 30 13:50:55.923365 kernel: libata version 3.00 loaded. Jan 30 13:50:55.943528 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:50:55.943577 kernel: ahci 0000:00:17.0: version 3.0 Jan 30 13:50:56.159819 kernel: AES CTR mode by8 optimization enabled Jan 30 13:50:56.159833 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jan 30 13:50:56.159997 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jan 30 13:50:56.160140 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jan 30 13:50:56.160167 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:50:56.160251 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jan 30 13:50:56.160259 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jan 30 13:50:56.160327 kernel: scsi host0: ahci Jan 30 13:50:56.160412 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jan 30 13:50:56.160475 kernel: scsi host1: ahci Jan 30 13:50:56.160535 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jan 30 13:50:56.160595 kernel: scsi host2: ahci Jan 30 13:50:56.160653 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jan 30 13:50:56.160713 kernel: scsi host3: ahci Jan 30 13:50:56.160773 kernel: pps pps0: new PPS source ptp0 Jan 30 13:50:56.160836 kernel: igb 0000:03:00.0: added PHC on eth0 Jan 30 13:50:56.160900 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:50:56.160963 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:b6 Jan 30 13:50:56.161025 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jan 30 13:50:56.161101 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jan 30 13:50:56.161161 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jan 30 13:50:56.161221 kernel: scsi host4: ahci Jan 30 13:50:56.161280 kernel: hub 1-0:1.0: USB hub found Jan 30 13:50:56.161426 kernel: scsi host5: ahci Jan 30 13:50:56.161484 kernel: hub 1-0:1.0: 16 ports detected Jan 30 13:50:56.161548 kernel: scsi host6: ahci Jan 30 13:50:56.161606 kernel: pps pps1: new PPS source ptp1 Jan 30 13:50:56.161665 kernel: igb 0000:04:00.0: added PHC on eth1 Jan 30 13:50:56.161729 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jan 30 13:50:56.161790 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:b7 Jan 30 13:50:56.161850 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jan 30 13:50:56.161909 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jan 30 13:50:56.161968 kernel: hub 2-0:1.0: USB hub found Jan 30 13:50:56.162038 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jan 30 13:50:56.162046 kernel: hub 2-0:1.0: 10 ports detected Jan 30 13:50:56.162129 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jan 30 13:50:56.162137 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jan 30 13:50:56.162212 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jan 30 13:50:56.162220 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jan 30 13:50:56.162226 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jan 30 13:50:56.162233 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jan 30 13:50:56.162240 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jan 30 13:50:55.967519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:50:55.967601 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:56.230476 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jan 30 13:50:56.230571 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Jan 30 13:50:56.768169 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:50:56.768257 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jan 30 13:50:56.861707 kernel: hub 1-14:1.0: USB hub found Jan 30 13:50:56.861798 kernel: hub 1-14:1.0: 4 ports detected Jan 30 13:50:56.861870 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.861879 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.861887 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.861894 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.861905 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 30 13:50:56.861976 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:50:56.861985 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jan 30 13:50:56.862050 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 13:50:56.862058 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jan 30 13:50:56.862065 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 13:50:56.862073 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jan 30 13:50:56.862080 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:50:56.862089 kernel: ata1.00: Features: NCQ-prio Jan 30 13:50:56.862096 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jan 30 13:50:56.862103 kernel: ata2.00: Features: NCQ-prio Jan 30 13:50:56.862111 kernel: ata1.00: configured for UDMA/133 Jan 30 13:50:56.862118 kernel: ata2.00: configured for UDMA/133 Jan 30 13:50:56.862125 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 13:50:56.862193 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jan 30 13:50:56.862258 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:50:56.862268 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:50:56.862275 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jan 30 13:50:56.862345 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 
GiB) Jan 30 13:50:56.862408 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jan 30 13:50:56.862468 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:50:56.862526 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:50:56.862585 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jan 30 13:50:56.862642 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 30 13:50:56.862703 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:50:56.862762 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jan 30 13:50:56.862820 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:50:56.862828 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jan 30 13:50:56.862886 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:50:56.862894 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 13:50:56.862952 kernel: GPT:9289727 != 937703087 Jan 30 13:50:56.862962 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:50:56.862969 kernel: GPT:9289727 != 937703087 Jan 30 13:50:56.862976 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:50:56.862983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:50:56.862990 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:50:56.863047 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jan 30 13:50:56.863105 kernel: ata2.00: Enabling discard_zeroes_data Jan 30 13:50:56.863113 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jan 30 13:50:56.863171 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:50:56.863237 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jan 30 13:50:56.863346 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Jan 30 13:50:57.381258 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jan 30 13:50:57.381728 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (546) Jan 30 13:50:57.381774 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (567) Jan 30 13:50:57.381810 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:50:57.381845 kernel: usbcore: registered new interface driver usbhid Jan 30 13:50:57.381893 kernel: usbhid: USB HID core driver Jan 30 13:50:57.381928 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jan 30 13:50:57.381962 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:50:57.381995 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:50:57.382027 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jan 30 13:50:57.382416 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jan 30 13:50:57.382456 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jan 30 13:50:57.382791 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jan 30 13:50:57.383127 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jan 30 13:50:57.383472 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:50:56.242523 systemd[1]: 
Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:57.403578 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jan 30 13:50:56.253403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:50:57.421390 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jan 30 13:50:56.253498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:56.264393 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:56.280476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:50:56.297187 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:50:56.300754 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:50:57.473463 disk-uuid[705]: Primary Header is updated. Jan 30 13:50:57.473463 disk-uuid[705]: Secondary Entries is updated. Jan 30 13:50:57.473463 disk-uuid[705]: Secondary Header is updated. Jan 30 13:50:56.318412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:50:56.340407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:50:56.356465 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:50:56.366502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:50:56.377520 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:50:56.397480 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:50:56.415203 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:50:56.776406 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Jan 30 13:50:56.823306 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jan 30 13:50:56.838156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 30 13:50:56.852618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 13:50:56.863390 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jan 30 13:50:56.907459 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:50:57.932674 kernel: ata1.00: Enabling discard_zeroes_data Jan 30 13:50:57.941255 disk-uuid[706]: The operation has completed successfully. Jan 30 13:50:57.949450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:50:57.982635 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:50:57.982689 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:50:58.020638 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:50:58.045408 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:50:58.045475 sh[736]: Success Jan 30 13:50:58.082222 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:50:58.094434 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:50:58.101852 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
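[editor's note] A few entries back the kernel reports "GPT:Primary header thinks Alt. header is not at the end of the disk" and "GPT:9289727 != 937703087": the backup GPT header is recorded well before the last sector because the image was built for a smaller disk, and disk-uuid.service rewrites the headers shortly afterwards. Below is a read-only sketch, purely illustrative, of how that mismatch could be detected by hand; it reads the primary GPT header at LBA 1 and compares its "alternate LBA" field with the device's real last LBA. The device path is an assumption and root privileges are required; it is not a repair tool (sgdisk or parted, as the kernel suggests, do the fixing).

```python
# Read-only sketch: compare the GPT primary header's "alternate LBA" field with
# the device's actual last LBA, mirroring the kernel's "9289727 != 937703087"
# complaint above. /dev/sda is an assumption; run as root.
import os
import struct

SECTOR = 512
DEV = "/dev/sda"  # hypothetical device path

with open(DEV, "rb") as dev:
    dev.seek(0, os.SEEK_END)
    last_lba = dev.tell() // SECTOR - 1   # last addressable sector of the disk
    dev.seek(1 * SECTOR)                  # primary GPT header lives at LBA 1
    header = dev.read(92)                 # GPT header is 92 bytes

assert header[:8] == b"EFI PART", "no GPT signature at LBA 1"
alternate_lba = struct.unpack_from("<Q", header, 32)[0]  # offset 32: alternate header LBA

if alternate_lba != last_lba:
    print(f"backup GPT header recorded at LBA {alternate_lba}, disk ends at LBA {last_lba}")
else:
    print("backup GPT header is at the end of the disk")
```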
Jan 30 13:50:58.133423 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:50:58.133528 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:58.144411 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:50:58.152808 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:50:58.160047 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:50:58.175324 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:50:58.179936 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:50:58.188944 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:50:58.204638 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:50:58.230729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:50:58.295382 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:50:58.295413 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:50:58.295421 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:50:58.295428 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:50:58.295435 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:50:58.295441 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:50:58.285755 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:50:58.298135 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:50:58.331505 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:50:58.361641 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:50:58.374575 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:50:58.392073 unknown[820]: fetched base config from "system" Jan 30 13:50:58.389861 ignition[820]: Ignition 2.20.0 Jan 30 13:50:58.392077 unknown[820]: fetched user config from "system" Jan 30 13:50:58.389867 ignition[820]: Stage: fetch-offline Jan 30 13:50:58.392928 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:50:58.389887 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:58.395764 systemd-networkd[919]: lo: Link UP Jan 30 13:50:58.389892 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:50:58.395766 systemd-networkd[919]: lo: Gained carrier Jan 30 13:50:58.389947 ignition[820]: parsed url from cmdline: "" Jan 30 13:50:58.398219 systemd-networkd[919]: Enumeration completed Jan 30 13:50:58.389948 ignition[820]: no config URL provided Jan 30 13:50:58.399259 systemd-networkd[919]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:58.389951 ignition[820]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:50:58.411615 systemd[1]: Started systemd-networkd.service - Network Configuration. 
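[editor's note] The entries above show verity-setup.service creating /dev/mapper/usr (dm-verity with sha256-avx2) and the initrd mounting it read-only at /sysusr/usr. Once that mapping exists it can be inspected with the veritysetup tool from cryptsetup; the tiny wrapper below is only a convenience sketch, and "usr" is the mapping name taken from the log.

```python
# Illustrative sketch: inspect the dm-verity mapping created above with
# `veritysetup status usr` (prints data/hash device, root hash, status).
# The wrapper function is hypothetical; requires root.
import subprocess

def verity_status(name: str = "usr") -> str:
    """Return the output of `veritysetup status <name>`."""
    return subprocess.run(
        ["veritysetup", "status", name],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    print(verity_status())
```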
Jan 30 13:50:58.389974 ignition[820]: parsing config with SHA512: 49e899150e074fd51d431425dcc6d51cea42d3e5680d0823f7a8faf53dd55c6ff2caedd83c46493d41825573ef93416fb32772c05e3283c045d412bdfc7ef13f Jan 30 13:50:58.427330 systemd-networkd[919]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:58.392272 ignition[820]: fetch-offline: fetch-offline passed Jan 30 13:50:58.430694 systemd[1]: Reached target network.target - Network. Jan 30 13:50:58.392274 ignition[820]: POST message to Packet Timeline Jan 30 13:50:58.445485 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:50:58.392277 ignition[820]: POST Status error: resource requires networking Jan 30 13:50:58.454504 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:50:58.392316 ignition[820]: Ignition finished successfully Jan 30 13:50:58.455373 systemd-networkd[919]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:58.471225 ignition[931]: Ignition 2.20.0 Jan 30 13:50:58.658611 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 30 13:50:58.654451 systemd-networkd[919]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:50:58.471240 ignition[931]: Stage: kargs Jan 30 13:50:58.471522 ignition[931]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:50:58.471540 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:50:58.472899 ignition[931]: kargs: kargs passed Jan 30 13:50:58.472906 ignition[931]: POST message to Packet Timeline Jan 30 13:50:58.472933 ignition[931]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:50:58.473802 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50517->[::1]:53: read: connection refused Jan 30 13:50:58.674894 ignition[931]: GET https://metadata.packet.net/metadata: attempt #2 Jan 30 13:50:58.675155 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49704->[::1]:53: read: connection refused Jan 30 13:50:58.960367 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 30 13:50:58.960886 systemd-networkd[919]: eno1: Link UP Jan 30 13:50:58.961062 systemd-networkd[919]: eno2: Link UP Jan 30 13:50:58.961191 systemd-networkd[919]: enp1s0f0np0: Link UP Jan 30 13:50:58.961358 systemd-networkd[919]: enp1s0f0np0: Gained carrier Jan 30 13:50:58.970601 systemd-networkd[919]: enp1s0f1np1: Link UP Jan 30 13:50:59.002507 systemd-networkd[919]: enp1s0f0np0: DHCPv4 address 139.178.70.53/31, gateway 139.178.70.52 acquired from 145.40.83.140 Jan 30 13:50:59.075433 ignition[931]: GET https://metadata.packet.net/metadata: attempt #3 Jan 30 13:50:59.076559 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51245->[::1]:53: read: connection refused Jan 30 13:50:59.673125 systemd-networkd[919]: enp1s0f1np1: Gained carrier Jan 30 13:50:59.876943 ignition[931]: GET https://metadata.packet.net/metadata: attempt #4 Jan 30 13:50:59.878069 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58221->[::1]:53: read: connection refused Jan 30 13:51:00.056938 systemd-networkd[919]: enp1s0f0np0: Gained IPv6LL Jan 30 13:51:01.479426 
ignition[931]: GET https://metadata.packet.net/metadata: attempt #5 Jan 30 13:51:01.480631 ignition[931]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39259->[::1]:53: read: connection refused Jan 30 13:51:01.592925 systemd-networkd[919]: enp1s0f1np1: Gained IPv6LL Jan 30 13:51:04.683363 ignition[931]: GET https://metadata.packet.net/metadata: attempt #6 Jan 30 13:51:05.246902 ignition[931]: GET result: OK Jan 30 13:51:05.623192 ignition[931]: Ignition finished successfully Jan 30 13:51:05.628382 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:51:05.659590 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:51:05.665789 ignition[950]: Ignition 2.20.0 Jan 30 13:51:05.665793 ignition[950]: Stage: disks Jan 30 13:51:05.665903 ignition[950]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:05.665910 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:51:05.666449 ignition[950]: disks: disks passed Jan 30 13:51:05.666453 ignition[950]: POST message to Packet Timeline Jan 30 13:51:05.666465 ignition[950]: GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:51:06.333677 ignition[950]: GET result: OK Jan 30 13:51:06.704528 ignition[950]: Ignition finished successfully Jan 30 13:51:06.707022 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:51:06.723668 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:51:06.742602 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:51:06.763593 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:51:06.785732 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:51:06.805632 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:51:06.834588 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:51:06.870790 systemd-fsck[969]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:51:06.880805 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:51:06.908567 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:51:06.979326 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:51:06.979735 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:51:06.988750 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:51:07.029669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:51:07.082244 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (978) Jan 30 13:51:07.082259 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:51:07.082270 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:51:07.082278 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:51:07.082285 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:51:07.038186 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:51:07.108402 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:51:07.107812 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
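[editor's note] The kargs and disks stages above keep retrying GET https://metadata.packet.net/metadata while DNS and the links come up, failing on attempts #1 through #5 and succeeding once enp1s0f0np0 has an address. A minimal sketch of that retry-with-backoff pattern follows; it is not Ignition's actual implementation, the URL is the one from the log, and the delays are made up.

```python
# Illustrative sketch of the retry-with-backoff pattern visible above, where
# Ignition keeps re-trying the Packet metadata endpoint until networking is up.
# Not Ignition's real code; timeouts and delays are arbitrary.
import time
import urllib.request

URL = "https://metadata.packet.net/metadata"  # endpoint as logged above

def fetch_with_retry(url: str = URL, attempts: int = 6, delay: float = 1.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as exc:            # DNS/connect failures surface as OSError
            print(f"GET {url}: attempt #{attempt} failed: {exc}")
            time.sleep(delay)
            delay *= 2                    # exponential backoff between attempts
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")

if __name__ == "__main__":
    print(len(fetch_with_retry()), "bytes of metadata")
```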
Jan 30 13:51:07.119978 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jan 30 13:51:07.141657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:51:07.141704 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:51:07.200437 coreos-metadata[996]: Jan 30 13:51:07.195 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:51:07.221510 coreos-metadata[995]: Jan 30 13:51:07.195 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:51:07.164268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:51:07.189500 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:51:07.218437 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:51:07.271450 initrd-setup-root[1010]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:51:07.281426 initrd-setup-root[1017]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:51:07.291435 initrd-setup-root[1024]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:51:07.301445 initrd-setup-root[1031]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:51:07.305082 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:51:07.343581 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:51:07.369528 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:51:07.360893 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:51:07.378244 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:51:07.397107 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:51:07.400407 ignition[1098]: INFO : Ignition 2.20.0 Jan 30 13:51:07.400407 ignition[1098]: INFO : Stage: mount Jan 30 13:51:07.426494 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:07.426494 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:51:07.426494 ignition[1098]: INFO : mount: mount passed Jan 30 13:51:07.426494 ignition[1098]: INFO : POST message to Packet Timeline Jan 30 13:51:07.426494 ignition[1098]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:51:07.722861 coreos-metadata[995]: Jan 30 13:51:07.722 INFO Fetch successful Jan 30 13:51:07.755601 coreos-metadata[995]: Jan 30 13:51:07.755 INFO wrote hostname ci-4186.1.0-a-f55746354a to /sysroot/etc/hostname Jan 30 13:51:07.756797 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:51:07.791454 coreos-metadata[996]: Jan 30 13:51:07.788 INFO Fetch successful Jan 30 13:51:07.825671 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jan 30 13:51:07.825729 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jan 30 13:51:08.017126 ignition[1098]: INFO : GET result: OK Jan 30 13:51:08.351470 ignition[1098]: INFO : Ignition finished successfully Jan 30 13:51:08.354539 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:51:08.390717 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:51:08.402221 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
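[editor's note] In the entries just after this point, flatcar-metadata-hostname.service fetches the Packet metadata and writes the hostname into /sysroot/etc/hostname. The sketch below illustrates that idea only: it assumes the metadata JSON exposes a top-level "hostname" field (an assumption, not confirmed by the log) and writes to a configurable path rather than the real /sysroot target.

```python
# Illustrative sketch of what flatcar-metadata-hostname.service does: fetch the
# Packet metadata and persist the hostname. The "hostname" key is an assumed
# field name; adjust for the real metadata schema. Not the agent's actual code.
import json
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"  # endpoint from the log

def write_hostname(target: str = "/etc/hostname") -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]                     # assumed field name
    with open(target, "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname:", write_hostname())
```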
Jan 30 13:51:08.436362 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1119) Jan 30 13:51:08.453773 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:51:08.453789 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:51:08.459665 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:51:08.475057 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:51:08.475088 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:51:08.477021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:51:08.500907 ignition[1136]: INFO : Ignition 2.20.0 Jan 30 13:51:08.500907 ignition[1136]: INFO : Stage: files Jan 30 13:51:08.515558 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:08.515558 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:51:08.515558 ignition[1136]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:51:08.515558 ignition[1136]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:51:08.515558 ignition[1136]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:51:08.515558 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:51:08.505075 unknown[1136]: wrote ssh authorized keys file for user: core Jan 30 13:51:08.650396 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:51:08.672262 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:51:08.672262 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:08.705550 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:51:09.149537 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:51:09.259191 ignition[1136]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:51:09.259191 ignition[1136]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:09.290556 ignition[1136]: INFO : files: files passed Jan 30 13:51:09.290556 ignition[1136]: INFO : POST message to Packet Timeline Jan 30 13:51:09.290556 ignition[1136]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:51:09.904703 ignition[1136]: INFO : GET result: OK Jan 30 13:51:10.248361 ignition[1136]: INFO : Ignition finished successfully Jan 30 13:51:10.249467 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:51:10.277668 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:51:10.287977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:51:10.297745 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:51:10.297804 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 13:51:10.356402 initrd-setup-root-after-ignition[1175]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:10.356402 initrd-setup-root-after-ignition[1175]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:10.395622 initrd-setup-root-after-ignition[1179]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:10.360927 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:10.371642 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:51:10.420556 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:51:10.481942 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:51:10.481994 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:51:10.500711 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:51:10.512562 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:51:10.539628 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:51:10.560734 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:51:10.624708 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:10.656741 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:51:10.687093 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:10.699946 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:10.721021 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:51:10.738968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:51:10.739385 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:10.767092 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:51:10.788934 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:51:10.806941 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:51:10.825934 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:51:10.847930 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:51:10.868941 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:51:10.888935 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:51:10.910078 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:51:10.931955 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:51:10.951934 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:51:10.969824 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:51:10.970218 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:51:11.005795 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:11.015958 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:11.036810 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 30 13:51:11.037256 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:11.059820 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:51:11.060216 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:51:11.091925 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:51:11.092404 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:51:11.112128 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:51:11.129768 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:51:11.130210 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:11.151953 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:51:11.169931 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:51:11.188909 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:51:11.189208 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:51:11.208952 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:51:11.209243 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:51:11.232050 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:51:11.232470 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:11.252018 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:51:11.369500 ignition[1199]: INFO : Ignition 2.20.0 Jan 30 13:51:11.369500 ignition[1199]: INFO : Stage: umount Jan 30 13:51:11.369500 ignition[1199]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:11.369500 ignition[1199]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jan 30 13:51:11.369500 ignition[1199]: INFO : umount: umount passed Jan 30 13:51:11.369500 ignition[1199]: INFO : POST message to Packet Timeline Jan 30 13:51:11.369500 ignition[1199]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jan 30 13:51:11.252415 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:51:11.270020 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:51:11.270432 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:51:11.299535 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:51:11.321068 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:51:11.330583 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:51:11.330695 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:11.360627 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:51:11.360718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:51:11.399200 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:51:11.401885 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:51:11.402014 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:51:11.502526 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:51:11.502799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 30 13:51:11.832779 ignition[1199]: INFO : GET result: OK Jan 30 13:51:12.244718 ignition[1199]: INFO : Ignition finished successfully Jan 30 13:51:12.247837 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:51:12.248116 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:51:12.264626 systemd[1]: Stopped target network.target - Network. Jan 30 13:51:12.279565 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:51:12.279738 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:51:12.297719 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:51:12.297886 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:51:12.315740 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:51:12.315895 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:51:12.334729 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:51:12.334898 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:51:12.353718 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:51:12.353884 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:51:12.373114 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:51:12.386489 systemd-networkd[919]: enp1s0f0np0: DHCPv6 lease lost Jan 30 13:51:12.391807 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:51:12.401564 systemd-networkd[919]: enp1s0f1np1: DHCPv6 lease lost Jan 30 13:51:12.410460 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:51:12.410749 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:51:12.429830 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:51:12.430156 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:51:12.449987 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:51:12.450109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:12.486436 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:51:12.509528 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:51:12.509686 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:51:12.528701 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:51:12.528787 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:12.548795 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:51:12.548957 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:12.566805 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:51:12.566973 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:12.575186 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:12.606587 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:51:12.606969 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:12.638017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:51:12.638047 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 30 13:51:12.658640 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:51:12.658664 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:12.686648 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:51:12.686733 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:51:12.717871 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:51:12.718037 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:51:12.745811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:51:12.745962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:51:12.800421 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:51:12.810488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:51:13.042545 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Jan 30 13:51:12.810517 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:12.827644 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:51:12.827675 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:51:12.859573 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:51:12.859649 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:12.880701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:51:12.880842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:12.902653 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:51:12.902975 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:51:12.923191 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:51:12.923460 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:51:12.944429 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:51:12.973624 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:51:12.996699 systemd[1]: Switching root. Jan 30 13:51:13.153491 systemd-journald[267]: Journal stopped Jan 30 13:51:14.780968 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:51:14.780987 kernel: SELinux: policy capability open_perms=1 Jan 30 13:51:14.780996 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:51:14.781003 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:51:14.781011 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:51:14.781018 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:51:14.781025 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:51:14.781032 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:51:14.781038 kernel: audit: type=1403 audit(1738245073.268:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:51:14.781046 systemd[1]: Successfully loaded SELinux policy in 75.997ms. Jan 30 13:51:14.781056 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.901ms. 
Jan 30 13:51:14.781064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:51:14.781072 systemd[1]: Detected architecture x86-64. Jan 30 13:51:14.781079 systemd[1]: Detected first boot. Jan 30 13:51:14.781087 systemd[1]: Hostname set to . Jan 30 13:51:14.781097 systemd[1]: Initializing machine ID from random generator. Jan 30 13:51:14.781104 zram_generator::config[1248]: No configuration found. Jan 30 13:51:14.781113 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:51:14.781120 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:51:14.781128 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:51:14.781136 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:51:14.781146 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:51:14.781156 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:51:14.781163 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:51:14.781171 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:51:14.781179 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:51:14.781187 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:51:14.781195 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:51:14.781203 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:51:14.781212 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:14.781220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:14.781228 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:51:14.781235 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:51:14.781243 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:51:14.781251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:51:14.781259 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jan 30 13:51:14.781267 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:14.781276 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:51:14.781284 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:51:14.781292 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:51:14.781301 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:51:14.781310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:14.781337 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:51:14.781347 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:51:14.781355 systemd[1]: Reached target swap.target - Swaps. 
Jan 30 13:51:14.781365 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:51:14.781373 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:51:14.781381 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:14.781389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:51:14.781397 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:14.781407 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:51:14.781415 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:51:14.781423 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:51:14.781432 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:51:14.781440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:14.781448 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:51:14.781456 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:51:14.781465 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:51:14.781474 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:51:14.781483 systemd[1]: Reached target machines.target - Containers. Jan 30 13:51:14.781491 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:51:14.781499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:14.781508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:51:14.781517 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:51:14.781526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:14.781534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:51:14.781543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:14.781552 kernel: ACPI: bus type drm_connector registered Jan 30 13:51:14.781559 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:51:14.781567 kernel: fuse: init (API version 7.39) Jan 30 13:51:14.781575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:14.781583 kernel: loop: module loaded Jan 30 13:51:14.781590 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:51:14.781598 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:51:14.781608 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:51:14.781616 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:51:14.781624 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:51:14.781633 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:51:14.781652 systemd-journald[1352]: Collecting audit messages is disabled. Jan 30 13:51:14.781672 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 30 13:51:14.781681 systemd-journald[1352]: Journal started Jan 30 13:51:14.781698 systemd-journald[1352]: Runtime Journal (/run/log/journal/e55b0764e9464d1f9160fdce56627cd4) is 8.0M, max 639.9M, 631.9M free. Jan 30 13:51:13.664900 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:51:13.684268 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:51:13.684503 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:51:14.810357 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:51:14.822512 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:51:14.854407 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:51:14.875590 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:51:14.875615 systemd[1]: Stopped verity-setup.service. Jan 30 13:51:14.908304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:14.908331 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:51:14.918754 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:51:14.928623 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:51:14.938588 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:51:14.948588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:51:14.958579 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:51:14.968576 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:51:14.978665 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:51:14.989671 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:15.000780 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:51:15.000934 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:51:15.012891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:15.013111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:15.025212 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:51:15.025626 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:51:15.036251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:15.036653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:15.048255 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:51:15.048706 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:51:15.060248 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:15.060643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:15.071256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:15.083243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:51:15.096219 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:51:15.108234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 30 13:51:15.143662 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:51:15.167619 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:51:15.179136 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:51:15.188522 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:51:15.188544 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:51:15.199261 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:51:15.221577 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:51:15.234501 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:51:15.245878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:15.248205 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:51:15.258942 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:51:15.269495 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:15.270207 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:51:15.277655 systemd-journald[1352]: Time spent on flushing to /var/log/journal/e55b0764e9464d1f9160fdce56627cd4 is 13.192ms for 1362 entries. Jan 30 13:51:15.277655 systemd-journald[1352]: System Journal (/var/log/journal/e55b0764e9464d1f9160fdce56627cd4) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:51:15.310386 systemd-journald[1352]: Received client request to flush runtime journal. Jan 30 13:51:15.286080 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:15.286938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:51:15.298147 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:51:15.310220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:51:15.322386 kernel: loop0: detected capacity change from 0 to 138184 Jan 30 13:51:15.327204 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:51:15.339491 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:51:15.342808 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jan 30 13:51:15.342818 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jan 30 13:51:15.351369 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:51:15.357552 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:51:15.368545 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:51:15.379556 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:51:15.390532 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
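The flush report above (13.192 ms for 1362 entries) is enough for a rough per-entry figure. A minimal sketch, with the two numbers copied straight from this log:

```python
#!/usr/bin/env python3
"""Sketch: average cost per journal entry for the flush reported above."""

FLUSH_MS = 13.192   # "Time spent on flushing ... is 13.192ms"
ENTRIES = 1362      # "... for 1362 entries"

per_entry_us = FLUSH_MS * 1000 / ENTRIES
print(f"~{per_entry_us:.1f} microseconds per entry flushed to the persistent journal")
```

That works out to roughly 10 microseconds per entry.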
Jan 30 13:51:15.402323 kernel: loop1: detected capacity change from 0 to 141000 Jan 30 13:51:15.407532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:15.417547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:51:15.431602 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:51:15.455509 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:51:15.463367 kernel: loop2: detected capacity change from 0 to 205544 Jan 30 13:51:15.474167 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:51:15.483928 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:51:15.484412 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:51:15.495924 udevadm[1388]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:51:15.501096 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:51:15.521474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:51:15.522323 kernel: loop3: detected capacity change from 0 to 8 Jan 30 13:51:15.529030 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Jan 30 13:51:15.529040 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Jan 30 13:51:15.533650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:15.577357 kernel: loop4: detected capacity change from 0 to 138184 Jan 30 13:51:15.603366 kernel: loop5: detected capacity change from 0 to 141000 Jan 30 13:51:15.608998 ldconfig[1378]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:51:15.611353 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:51:15.623389 kernel: loop6: detected capacity change from 0 to 205544 Jan 30 13:51:15.641327 kernel: loop7: detected capacity change from 0 to 8 Jan 30 13:51:15.641283 (sd-merge)[1410]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jan 30 13:51:15.641545 (sd-merge)[1410]: Merged extensions into '/usr'. Jan 30 13:51:15.669469 systemd[1]: Reloading requested from client PID 1383 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:51:15.669478 systemd[1]: Reloading... Jan 30 13:51:15.695419 zram_generator::config[1435]: No configuration found. Jan 30 13:51:15.768186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:15.807341 systemd[1]: Reloading finished in 137 ms. Jan 30 13:51:15.831323 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:51:15.842673 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:51:15.869763 systemd[1]: Starting ensure-sysext.service... Jan 30 13:51:15.879707 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:51:15.894914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
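The (sd-merge) entries above show systemd-sysext overlaying four extension images onto /usr. The sketch below lists extension candidates in the usual search locations; the directories named are an assumption based on common sysext layouts, and "systemd-sysext list" on the host remains the authoritative view of what actually got merged.

```python
#!/usr/bin/env python3
"""Sketch: list system extension candidates in common systemd-sysext search paths.

The directories below are an assumption based on typical sysext setups;
"systemd-sysext list" is the authoritative way to see what was merged.
"""
from pathlib import Path

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def candidates() -> list[str]:
    found = []
    for base in map(Path, SEARCH_PATHS):
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            # Raw disk images end in .raw; plain directory trees are also accepted.
            if entry.suffix == ".raw" or entry.is_dir():
                found.append(str(entry))
    return found

if __name__ == "__main__":
    print("\n".join(candidates()) or "no extension images found")
```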
Jan 30 13:51:15.906653 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:51:15.906806 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:51:15.907264 systemd-tmpfiles[1495]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:51:15.907484 systemd-tmpfiles[1495]: ACLs are not supported, ignoring. Jan 30 13:51:15.907535 systemd-tmpfiles[1495]: ACLs are not supported, ignoring. Jan 30 13:51:15.909078 systemd[1]: Reloading requested from client PID 1492 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:51:15.909103 systemd[1]: Reloading... Jan 30 13:51:15.910626 systemd-tmpfiles[1495]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:51:15.910630 systemd-tmpfiles[1495]: Skipping /boot Jan 30 13:51:15.916185 systemd-tmpfiles[1495]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:51:15.916189 systemd-tmpfiles[1495]: Skipping /boot Jan 30 13:51:15.922845 systemd-udevd[1496]: Using default interface naming scheme 'v255'. Jan 30 13:51:15.938378 zram_generator::config[1523]: No configuration found. Jan 30 13:51:15.973336 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1547) Jan 30 13:51:15.981328 kernel: IPMI message handler: version 39.2 Jan 30 13:51:15.988332 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:51:15.988377 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jan 30 13:51:16.002327 kernel: ipmi device interface Jan 30 13:51:16.002376 kernel: ACPI: button: Sleep Button [SLPB] Jan 30 13:51:16.017362 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:51:16.025326 kernel: ipmi_si: IPMI System Interface driver Jan 30 13:51:16.025383 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jan 30 13:51:16.054845 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jan 30 13:51:16.054979 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:51:16.054995 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jan 30 13:51:16.055109 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jan 30 13:51:16.055124 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI) Jan 30 13:51:16.055235 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jan 30 13:51:16.055253 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jan 30 13:51:16.096094 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jan 30 13:51:16.096186 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jan 30 13:51:16.096261 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jan 30 13:51:16.096275 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jan 30 13:51:16.049693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
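The "Duplicate line for path" warnings above fire when the same path is declared by more than one tmpfiles.d entry. Below is a hedged sketch that scans the usual tmpfiles.d directories and lists paths claimed more than once; it is a simplified parser (whitespace-split fields, no handling of same-named overriding files), just enough to show where warnings like the ones for /root, /var/log/journal and /var/lib/systemd come from.

```python
#!/usr/bin/env python3
"""Sketch: find paths declared by more than one tmpfiles.d line (simplified parser)."""
from collections import defaultdict
from pathlib import Path

TMPFILES_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

def duplicate_paths() -> dict[str, list[str]]:
    seen: dict[str, list[str]] = defaultdict(list)
    for d in map(Path, TMPFILES_DIRS):
        if not d.is_dir():
            continue
        for conf in sorted(d.glob("*.conf")):
            for lineno, line in enumerate(conf.read_text().splitlines(), 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    # Second field of a tmpfiles.d line is the path it manages.
                    seen[fields[1]].append(f"{conf}:{lineno}")
    return {path: refs for path, refs in seen.items() if len(refs) > 1}

if __name__ == "__main__":
    for path, refs in sorted(duplicate_paths().items()):
        print(path, "->", ", ".join(refs))
```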
Jan 30 13:51:16.107346 kernel: iTCO_vendor_support: vendor-support=0 Jan 30 13:51:16.107387 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jan 30 13:51:16.125776 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jan 30 13:51:16.113338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jan 30 13:51:16.136582 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jan 30 13:51:16.136719 systemd[1]: Reloading finished in 227 ms. Jan 30 13:51:16.168330 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jan 30 13:51:16.174817 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jan 30 13:51:16.179719 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:16.189327 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jan 30 13:51:16.219768 kernel: intel_rapl_common: Found RAPL domain package Jan 30 13:51:16.219827 kernel: intel_rapl_common: Found RAPL domain core Jan 30 13:51:16.219841 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jan 30 13:51:16.219948 kernel: intel_rapl_common: Found RAPL domain dram Jan 30 13:51:16.220036 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:16.257390 systemd[1]: Finished ensure-sysext.service. Jan 30 13:51:16.277125 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:16.290490 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:51:16.303330 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jan 30 13:51:16.304045 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:51:16.311327 kernel: ipmi_ssif: IPMI SSIF Interface driver Jan 30 13:51:16.319532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:16.320215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:16.321233 augenrules[1701]: No rules Jan 30 13:51:16.329986 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:51:16.339972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:16.350974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:16.360479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:16.368743 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:51:16.380005 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:51:16.391288 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:51:16.392238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:51:16.393104 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:51:16.431466 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:51:16.443014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
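The ipmi_si discovery line above reports the BMC as man_id 0x002a7c, prod_id 0x1b0f, dev_id 0x20. A tiny sketch that converts these to decimal; the note tying 10876 to Super Micro Computer's IANA enterprise number is an assumption for orientation, not something stated in the log.

```python
#!/usr/bin/env python3
"""Sketch: decode the BMC identifiers logged by ipmi_si above."""

bmc = {"man_id": 0x002A7C, "prod_id": 0x1B0F, "dev_id": 0x20}

for name, value in bmc.items():
    print(f"{name}: {value:#08x} = {value}")

# man_id 10876 corresponds, as an assumption not taken from this log, to the
# IANA enterprise number registered to Super Micro Computer.
```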
Jan 30 13:51:16.452410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:16.459652 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:51:16.470561 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:51:16.470647 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:51:16.470893 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:51:16.471034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:16.471101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:16.471238 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:51:16.471304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:51:16.471449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:16.471514 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:16.471647 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:16.471711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:16.471839 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:51:16.471976 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:51:16.486496 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:51:16.486532 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:16.486564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:16.487173 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:51:16.488019 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:51:16.488044 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:51:16.488258 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:51:16.493519 lvm[1728]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:51:16.495879 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:51:16.508810 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:51:16.519955 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:51:16.548507 systemd-resolved[1713]: Positive Trust Anchors: Jan 30 13:51:16.548512 systemd-resolved[1713]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:51:16.548540 systemd-resolved[1713]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:51:16.551586 systemd-resolved[1713]: Using system hostname 'ci-4186.1.0-a-f55746354a'. Jan 30 13:51:16.554852 systemd-networkd[1712]: lo: Link UP Jan 30 13:51:16.554856 systemd-networkd[1712]: lo: Gained carrier Jan 30 13:51:16.557401 systemd-networkd[1712]: bond0: netdev ready Jan 30 13:51:16.558403 systemd-networkd[1712]: Enumeration completed Jan 30 13:51:16.568850 systemd-networkd[1712]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:e1:63:ea.network. Jan 30 13:51:16.600534 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:51:16.611640 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:51:16.621437 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:51:16.631539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:16.643661 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:16.653406 systemd[1]: Reached target network.target - Network. Jan 30 13:51:16.661388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:16.672391 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:51:16.681444 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:51:16.692407 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:51:16.703397 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:51:16.714391 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:51:16.714408 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:51:16.722389 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:51:16.732476 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:51:16.742440 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:51:16.753388 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:51:16.761896 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:51:16.772040 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:51:16.781842 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:51:16.792218 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:51:16.804102 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:51:16.806114 lvm[1752]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
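The negative trust anchor list above is how systemd-resolved opts private and special-use zones out of DNSSEC validation. Below is a minimal sketch of the underlying idea, suffix-matching a name against a trimmed subset of the zones from this log; it mirrors the concept, not resolved's actual implementation.

```python
#!/usr/bin/env python3
"""Sketch: suffix-match a DNS name against negative trust anchors like those logged above."""

NEGATIVE_TRUST_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
    "ipv4only.arpa", "resolver.arpa", "corp", "home", "internal", "intranet",
    "lan", "local", "private", "test",
}  # trimmed to a subset of the zones listed in the log above

def under_negative_anchor(name: str) -> bool:
    labels = name.rstrip(".").lower().split(".")
    # A name is covered if it equals an anchor or sits anywhere below one.
    return any(".".join(labels[i:]) in NEGATIVE_TRUST_ANCHORS for i in range(len(labels)))

if __name__ == "__main__":
    for example in ("printer.lan", "4.3.2.10.in-addr.arpa", "example.org"):
        print(example, "->", under_negative_anchor(example))
```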
Jan 30 13:51:16.816765 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:51:16.827458 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:51:16.837404 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:51:16.845435 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:51:16.845451 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:51:16.856421 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:51:16.867068 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:51:16.877884 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:51:16.887139 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:51:16.890215 coreos-metadata[1755]: Jan 30 13:51:16.890 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:51:16.891051 coreos-metadata[1755]: Jan 30 13:51:16.891 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jan 30 13:51:16.896907 dbus-daemon[1756]: [system] SELinux support is enabled Jan 30 13:51:16.897160 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:51:16.899138 jq[1759]: false Jan 30 13:51:16.906433 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:51:16.907042 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:51:16.914418 extend-filesystems[1761]: Found loop4 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found loop5 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found loop6 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found loop7 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found sda Jan 30 13:51:16.916555 extend-filesystems[1761]: Found sda1 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found sda2 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found sda3 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found usr Jan 30 13:51:16.916555 extend-filesystems[1761]: Found sda4 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found sda6 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found sda7 Jan 30 13:51:16.916555 extend-filesystems[1761]: Found sda9 Jan 30 13:51:16.916555 extend-filesystems[1761]: Checking size of /dev/sda9 Jan 30 13:51:17.063431 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jan 30 13:51:17.063452 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1543) Jan 30 13:51:17.063462 extend-filesystems[1761]: Resized partition /dev/sda9 Jan 30 13:51:16.917000 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:51:17.085443 extend-filesystems[1769]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:51:16.956143 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:51:16.985993 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:51:17.024910 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:51:17.029262 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jan 30 13:51:17.056647 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 30 13:51:17.104752 update_engine[1786]: I20250130 13:51:17.071722 1786 main.cc:92] Flatcar Update Engine starting Jan 30 13:51:17.104752 update_engine[1786]: I20250130 13:51:17.072463 1786 update_check_scheduler.cc:74] Next update check in 8m57s Jan 30 13:51:17.062435 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:51:17.104925 jq[1787]: true Jan 30 13:51:17.064999 systemd-logind[1781]: Watching system buttons on /dev/input/event3 (Power Button) Jan 30 13:51:17.065008 systemd-logind[1781]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 13:51:17.065020 systemd-logind[1781]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jan 30 13:51:17.065208 systemd-logind[1781]: New seat seat0. Jan 30 13:51:17.078089 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:51:17.096636 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:51:17.106093 sshd_keygen[1785]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:51:17.114863 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:51:17.124676 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:51:17.151605 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:51:17.151711 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:51:17.151899 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:51:17.152008 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:51:17.161906 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:51:17.162015 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:51:17.173562 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:51:17.186498 (ntainerd)[1799]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:51:17.187931 jq[1798]: true Jan 30 13:51:17.190429 dbus-daemon[1756]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:51:17.192089 tar[1796]: linux-amd64/helm Jan 30 13:51:17.198691 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jan 30 13:51:17.198811 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jan 30 13:51:17.200509 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:51:17.211767 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:51:17.220428 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:51:17.220523 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:51:17.231479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:51:17.231560 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
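A few entries above, the kernel reports the root filesystem being grown from 553472 to 116605649 blocks of 4 KiB (extend-filesystems drives this resize; its summary appears further down). A quick conversion of both figures to GiB, with the numbers taken from the log:

```python
#!/usr/bin/env python3
"""Sketch: convert the ext4 resize figures logged above from 4 KiB blocks to GiB."""

BLOCK_SIZE = 4096          # "... is now 116605649 (4k) blocks long"
OLD_BLOCKS = 553_472       # root filesystem size before the online resize
NEW_BLOCKS = 116_605_649   # size after extend-filesystems grew /dev/sda9

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
```

So the stub root filesystem of about 2 GiB is grown to roughly 445 GiB on first boot.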
Jan 30 13:51:17.253013 bash[1828]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:51:17.255555 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:51:17.268882 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:51:17.275870 locksmithd[1835]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:51:17.279680 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:51:17.279787 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:51:17.300537 systemd[1]: Starting sshkeys.service... Jan 30 13:51:17.308086 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:51:17.320509 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:51:17.332294 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:51:17.343768 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:51:17.355284 coreos-metadata[1849]: Jan 30 13:51:17.355 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jan 30 13:51:17.356273 coreos-metadata[1849]: Jan 30 13:51:17.356 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jan 30 13:51:17.359845 containerd[1799]: time="2025-01-30T13:51:17.359805118Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:51:17.372414 containerd[1799]: time="2025-01-30T13:51:17.372388452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373281 containerd[1799]: time="2025-01-30T13:51:17.373261630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373281 containerd[1799]: time="2025-01-30T13:51:17.373278129Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:51:17.373343 containerd[1799]: time="2025-01-30T13:51:17.373287552Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:51:17.373387 containerd[1799]: time="2025-01-30T13:51:17.373377149Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:51:17.373414 containerd[1799]: time="2025-01-30T13:51:17.373389824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373538 containerd[1799]: time="2025-01-30T13:51:17.373424139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373538 containerd[1799]: time="2025-01-30T13:51:17.373432507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373538 containerd[1799]: time="2025-01-30T13:51:17.373523635Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373538 containerd[1799]: time="2025-01-30T13:51:17.373532024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373538 containerd[1799]: time="2025-01-30T13:51:17.373538986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373538 containerd[1799]: time="2025-01-30T13:51:17.373544241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373773 containerd[1799]: time="2025-01-30T13:51:17.373588113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373773 containerd[1799]: time="2025-01-30T13:51:17.373700936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373773 containerd[1799]: time="2025-01-30T13:51:17.373757720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:17.373773 containerd[1799]: time="2025-01-30T13:51:17.373766033Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:51:17.373832 containerd[1799]: time="2025-01-30T13:51:17.373814831Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:51:17.373774 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:51:17.373882 containerd[1799]: time="2025-01-30T13:51:17.373841615Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:51:17.374325 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jan 30 13:51:17.384258 containerd[1799]: time="2025-01-30T13:51:17.384206925Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:51:17.384258 containerd[1799]: time="2025-01-30T13:51:17.384229393Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:51:17.384258 containerd[1799]: time="2025-01-30T13:51:17.384238922Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:51:17.384258 containerd[1799]: time="2025-01-30T13:51:17.384257396Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:51:17.384389 containerd[1799]: time="2025-01-30T13:51:17.384266133Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:51:17.384389 containerd[1799]: time="2025-01-30T13:51:17.384340875Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:51:17.384814 containerd[1799]: time="2025-01-30T13:51:17.384794668Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 30 13:51:17.384926 containerd[1799]: time="2025-01-30T13:51:17.384914318Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:51:17.384965 containerd[1799]: time="2025-01-30T13:51:17.384926804Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:51:17.384965 containerd[1799]: time="2025-01-30T13:51:17.384935562Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:51:17.384965 containerd[1799]: time="2025-01-30T13:51:17.384943673Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:51:17.384965 containerd[1799]: time="2025-01-30T13:51:17.384955265Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:51:17.384965 containerd[1799]: time="2025-01-30T13:51:17.384963774Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.384971395Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.384981762Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.384989554Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.384996675Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385003156Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385015400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385023108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385029997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385036739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385045784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385053274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385059974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385067 containerd[1799]: time="2025-01-30T13:51:17.385067592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385074929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385083387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385089679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385095826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385104781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385113165Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385124941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385134954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385141198Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385165956Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385175713Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385181784Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:51:17.385333 containerd[1799]: time="2025-01-30T13:51:17.385188305Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:51:17.385594 containerd[1799]: time="2025-01-30T13:51:17.385193304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:51:17.385594 containerd[1799]: time="2025-01-30T13:51:17.385200416Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:51:17.385594 containerd[1799]: time="2025-01-30T13:51:17.385206287Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:51:17.385594 containerd[1799]: time="2025-01-30T13:51:17.385212026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:51:17.385677 containerd[1799]: time="2025-01-30T13:51:17.385380985Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:51:17.385677 containerd[1799]: time="2025-01-30T13:51:17.385408153Z" level=info msg="Connect containerd service" Jan 30 13:51:17.385677 containerd[1799]: time="2025-01-30T13:51:17.385426679Z" level=info msg="using legacy CRI server" Jan 30 13:51:17.385677 containerd[1799]: time="2025-01-30T13:51:17.385431328Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:51:17.385677 containerd[1799]: time="2025-01-30T13:51:17.385487516Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:51:17.385876 containerd[1799]: time="2025-01-30T13:51:17.385790015Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:51:17.385931 
containerd[1799]: time="2025-01-30T13:51:17.385907480Z" level=info msg="Start subscribing containerd event" Jan 30 13:51:17.385965 containerd[1799]: time="2025-01-30T13:51:17.385938067Z" level=info msg="Start recovering state" Jan 30 13:51:17.385965 containerd[1799]: time="2025-01-30T13:51:17.385951951Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:51:17.386004 containerd[1799]: time="2025-01-30T13:51:17.385976305Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:51:17.386004 containerd[1799]: time="2025-01-30T13:51:17.385982093Z" level=info msg="Start event monitor" Jan 30 13:51:17.386004 containerd[1799]: time="2025-01-30T13:51:17.385994280Z" level=info msg="Start snapshots syncer" Jan 30 13:51:17.386004 containerd[1799]: time="2025-01-30T13:51:17.385999908Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:51:17.386004 containerd[1799]: time="2025-01-30T13:51:17.386004575Z" level=info msg="Start streaming server" Jan 30 13:51:17.386274 containerd[1799]: time="2025-01-30T13:51:17.386042852Z" level=info msg="containerd successfully booted in 0.026700s" Jan 30 13:51:17.387366 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jan 30 13:51:17.388681 systemd-networkd[1712]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:e1:63:eb.network. Jan 30 13:51:17.400538 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jan 30 13:51:17.410508 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:51:17.418650 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:51:17.449358 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jan 30 13:51:17.474491 extend-filesystems[1769]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 13:51:17.474491 extend-filesystems[1769]: old_desc_blocks = 1, new_desc_blocks = 56 Jan 30 13:51:17.474491 extend-filesystems[1769]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jan 30 13:51:17.507735 extend-filesystems[1761]: Resized filesystem in /dev/sda9 Jan 30 13:51:17.507735 extend-filesystems[1761]: Found sdb Jan 30 13:51:17.529250 tar[1796]: linux-amd64/LICENSE Jan 30 13:51:17.529250 tar[1796]: linux-amd64/README.md Jan 30 13:51:17.475439 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:51:17.475538 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:51:17.540601 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:51:17.577400 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jan 30 13:51:17.590413 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jan 30 13:51:17.591032 systemd-networkd[1712]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jan 30 13:51:17.593131 systemd-networkd[1712]: enp1s0f0np0: Link UP Jan 30 13:51:17.594146 systemd-networkd[1712]: enp1s0f0np0: Gained carrier Jan 30 13:51:17.602353 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jan 30 13:51:17.619949 systemd-networkd[1712]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:e1:63:ea.network. 
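The containerd error above, "no network config found in /etc/cni/net.d", is the normal state at this point of first boot: the CRI plugin starts before any CNI plugin has written a network configuration, and the "Start cni network conf syncer for default" entry is the component that later picks one up. Below is a small sketch of the same directory check; the directory comes from the containerd configuration dump above, while the accepted file extensions are an assumption based on common CNI setups.

```python
#!/usr/bin/env python3
"""Sketch: check whether any CNI network configuration is present yet.

The directory comes from the containerd config logged above
(NetworkPluginConfDir:/etc/cni/net.d); the accepted extensions are an
assumption based on typical CNI deployments.
"""
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}

def cni_configs() -> list[Path]:
    if not CNI_CONF_DIR.is_dir():
        return []
    return sorted(p for p in CNI_CONF_DIR.iterdir() if p.suffix in CNI_EXTENSIONS)

if __name__ == "__main__":
    configs = cni_configs()
    if configs:
        print("CNI configs:", ", ".join(p.name for p in configs))
    else:
        print("no network config found in /etc/cni/net.d (matches the log above)")
```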
Jan 30 13:51:17.620864 systemd-networkd[1712]: enp1s0f1np1: Link UP Jan 30 13:51:17.621722 systemd-networkd[1712]: enp1s0f1np1: Gained carrier Jan 30 13:51:17.630041 systemd-networkd[1712]: bond0: Link UP Jan 30 13:51:17.631082 systemd-networkd[1712]: bond0: Gained carrier Jan 30 13:51:17.631784 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:17.633845 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:17.635179 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:17.635763 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:17.709996 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Jan 30 13:51:17.710022 kernel: bond0: active interface up! Jan 30 13:51:17.826366 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Jan 30 13:51:17.891180 coreos-metadata[1755]: Jan 30 13:51:17.891 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jan 30 13:51:18.356512 coreos-metadata[1849]: Jan 30 13:51:18.356 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jan 30 13:51:18.808509 systemd-networkd[1712]: bond0: Gained IPv6LL Jan 30 13:51:18.808791 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:19.129460 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:19.129959 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:19.133464 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:51:19.147812 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:51:19.177619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:19.188228 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:51:19.208797 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:51:19.854140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:19.866797 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:20.276449 kubelet[1891]: E0130 13:51:20.276361 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:20.277788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:20.277876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:51:21.170903 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:51:21.176397 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jan 30 13:51:21.176531 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jan 30 13:51:21.202656 systemd[1]: Started sshd@0-139.178.70.53:22-218.92.0.155:43670.service - OpenSSH per-connection server daemon (218.92.0.155:43670). 
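The bonding messages above show networkd enslaving the two 25G mlx5 ports into bond0 from per-MAC .network files, after which bond0 gains carrier and both slaves report 25000 Mbps full duplex. The sketch below reads the resulting state back from sysfs; the paths under /sys/class/net are standard for the bonding driver, but treat this as an illustration for a host that actually has bond0.

```python
#!/usr/bin/env python3
"""Sketch: summarize a bonded interface from sysfs (assumes bond0 exists)."""
from pathlib import Path

BOND = Path("/sys/class/net/bond0")

def read(path: Path) -> str:
    return path.read_text().strip() if path.exists() else "n/a"

if __name__ == "__main__":
    mode = read(BOND / "bonding" / "mode")           # e.g. "802.3ad 4"
    slaves = read(BOND / "bonding" / "slaves").split()
    print(f"bond0 mode: {mode}")
    for slave in slaves:
        speed = read(Path("/sys/class/net") / slave / "speed")  # Mb/s, e.g. 25000
        print(f"  {slave}: {speed} Mb/s")
```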
Jan 30 13:51:21.459099 coreos-metadata[1755]: Jan 30 13:51:21.459 INFO Fetch successful Jan 30 13:51:21.501890 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:51:21.513488 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jan 30 13:51:21.575582 coreos-metadata[1849]: Jan 30 13:51:21.575 INFO Fetch successful Jan 30 13:51:21.604672 unknown[1849]: wrote ssh authorized keys file for user: core Jan 30 13:51:21.627461 update-ssh-keys[1920]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:51:21.627947 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:51:21.640260 systemd[1]: Finished sshkeys.service. Jan 30 13:51:21.824312 systemd[1]: Started sshd@1-139.178.70.53:22-139.178.89.65:51536.service - OpenSSH per-connection server daemon (139.178.89.65:51536). Jan 30 13:51:21.836668 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jan 30 13:51:21.848983 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:51:21.859600 systemd[1]: Startup finished in 2.738s (kernel) + 19.431s (initrd) + 8.665s (userspace) = 30.835s. Jan 30 13:51:21.870951 sshd[1924]: Accepted publickey for core from 139.178.89.65 port 51536 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:51:21.871742 sshd-session[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:21.880666 systemd-logind[1781]: New session 1 of user core. Jan 30 13:51:21.880703 agetty[1858]: failed to open credentials directory Jan 30 13:51:21.880739 agetty[1869]: failed to open credentials directory Jan 30 13:51:21.881324 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:51:21.882075 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:51:21.885659 login[1869]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:51:21.890356 systemd-logind[1781]: New session 2 of user core. Jan 30 13:51:21.892234 login[1858]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:51:21.892892 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:51:21.894521 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:51:21.895897 systemd-logind[1781]: New session 3 of user core. Jan 30 13:51:21.898528 (systemd)[1933]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:51:21.970848 systemd[1933]: Queued start job for default target default.target. Jan 30 13:51:21.980873 systemd[1933]: Created slice app.slice - User Application Slice. Jan 30 13:51:21.980888 systemd[1933]: Reached target paths.target - Paths. Jan 30 13:51:21.980897 systemd[1933]: Reached target timers.target - Timers. Jan 30 13:51:21.981506 systemd[1933]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:51:21.987482 systemd[1933]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:51:21.987512 systemd[1933]: Reached target sockets.target - Sockets. Jan 30 13:51:21.987522 systemd[1933]: Reached target basic.target - Basic System. Jan 30 13:51:21.987545 systemd[1933]: Reached target default.target - Main User Target. Jan 30 13:51:21.987561 systemd[1933]: Startup finished in 86ms. Jan 30 13:51:21.987639 systemd[1]: Started user@500.service - User Manager for UID 500. 
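The kubelet failure logged a little above (E0130: /var/lib/kubelet/config.yaml does not exist) is expected on a node that has not been bootstrapped yet; the assumption here is that the file gets written later by kubeadm or similar tooling, which this log does not show. A sketch of the same pre-flight check:

```python
#!/usr/bin/env python3
"""Sketch: reproduce the check that makes kubelet exit on this first boot."""
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if KUBELET_CONFIG.is_file():
    print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes); kubelet can load it")
else:
    # Matches the E0130 error above: the unit keeps failing until the node is
    # bootstrapped and this file is written (assumption: by kubeadm or similar).
    print(f"{KUBELET_CONFIG} missing; kubelet will exit exactly as logged above")
```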
Jan 30 13:51:21.988366 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:51:21.988756 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:51:21.989153 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:51:22.048426 systemd[1]: Started sshd@2-139.178.70.53:22-139.178.89.65:49482.service - OpenSSH per-connection server daemon (139.178.89.65:49482). Jan 30 13:51:22.076282 sshd[1966]: Accepted publickey for core from 139.178.89.65 port 49482 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:51:22.076950 sshd-session[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:22.079294 systemd-logind[1781]: New session 4 of user core. Jan 30 13:51:22.089387 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:51:22.136633 sshd[1968]: Connection closed by 139.178.89.65 port 49482 Jan 30 13:51:22.136798 sshd-session[1966]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:22.144639 systemd[1]: sshd@2-139.178.70.53:22-139.178.89.65:49482.service: Deactivated successfully. Jan 30 13:51:22.145349 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:51:22.145984 systemd-logind[1781]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:51:22.146540 systemd[1]: Started sshd@3-139.178.70.53:22-139.178.89.65:49488.service - OpenSSH per-connection server daemon (139.178.89.65:49488). Jan 30 13:51:22.146949 systemd-logind[1781]: Removed session 4. Jan 30 13:51:22.174539 sshd[1973]: Accepted publickey for core from 139.178.89.65 port 49488 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:51:22.175156 sshd-session[1973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:22.177812 systemd-logind[1781]: New session 5 of user core. Jan 30 13:51:22.188414 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:51:22.233580 sshd[1976]: Connection closed by 139.178.89.65 port 49488 Jan 30 13:51:22.233706 sshd-session[1973]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:22.242838 systemd[1]: sshd@3-139.178.70.53:22-139.178.89.65:49488.service: Deactivated successfully. Jan 30 13:51:22.243563 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:51:22.244202 systemd-logind[1781]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:51:22.244895 systemd[1]: Started sshd@4-139.178.70.53:22-139.178.89.65:49502.service - OpenSSH per-connection server daemon (139.178.89.65:49502). Jan 30 13:51:22.245372 systemd-logind[1781]: Removed session 5. Jan 30 13:51:22.274563 sshd[1981]: Accepted publickey for core from 139.178.89.65 port 49502 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:51:22.275291 sshd-session[1981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:22.278178 systemd-logind[1781]: New session 6 of user core. Jan 30 13:51:22.286588 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:51:22.349841 sshd[1984]: Connection closed by 139.178.89.65 port 49502 Jan 30 13:51:22.350494 sshd-session[1981]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:22.372386 systemd[1]: sshd@4-139.178.70.53:22-139.178.89.65:49502.service: Deactivated successfully. Jan 30 13:51:22.373078 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:51:22.373764 systemd-logind[1781]: Session 6 logged out. Waiting for processes to exit. 
Jan 30 13:51:22.374418 systemd[1]: Started sshd@5-139.178.70.53:22-139.178.89.65:49514.service - OpenSSH per-connection server daemon (139.178.89.65:49514). Jan 30 13:51:22.374843 systemd-logind[1781]: Removed session 6. Jan 30 13:51:22.402814 sshd[1989]: Accepted publickey for core from 139.178.89.65 port 49514 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:51:22.403536 sshd-session[1989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:22.406282 systemd-logind[1781]: New session 7 of user core. Jan 30 13:51:22.427733 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:51:22.532852 sudo[1993]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:51:22.532998 sudo[1993]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:22.549004 sudo[1993]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:22.549848 sshd[1992]: Connection closed by 139.178.89.65 port 49514 Jan 30 13:51:22.550031 sshd-session[1989]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:22.570996 systemd[1]: sshd@5-139.178.70.53:22-139.178.89.65:49514.service: Deactivated successfully. Jan 30 13:51:22.572297 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:51:22.573401 systemd-logind[1781]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:51:22.574450 systemd[1]: Started sshd@6-139.178.70.53:22-139.178.89.65:49520.service - OpenSSH per-connection server daemon (139.178.89.65:49520). Jan 30 13:51:22.574993 systemd-logind[1781]: Removed session 7. Jan 30 13:51:22.587105 sshd-session[1991]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:51:22.613424 sshd[1998]: Accepted publickey for core from 139.178.89.65 port 49520 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:51:22.616552 sshd-session[1998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:22.627670 systemd-logind[1781]: New session 8 of user core. Jan 30 13:51:22.644803 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:51:22.715458 sudo[2002]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:51:22.716245 sudo[2002]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:22.724827 sudo[2002]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:22.739100 sudo[2001]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:51:22.739914 sudo[2001]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:22.761656 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:51:22.778350 augenrules[2024]: No rules Jan 30 13:51:22.778847 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:51:22.778975 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:51:22.779681 sudo[2001]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:22.780538 sshd[2000]: Connection closed by 139.178.89.65 port 49520 Jan 30 13:51:22.780743 sshd-session[1998]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:22.783440 systemd[1]: sshd@6-139.178.70.53:22-139.178.89.65:49520.service: Deactivated successfully. 
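The sudo session above removes two default audit rule fragments and restarts audit-rules.service, after which augenrules reports "No rules". One possible way to confirm the resulting audit state, assuming the usual auditd tooling is present on the image, is:

    # Rule fragments that augenrules merges; the two defaults were removed above
    ls -l /etc/audit/rules.d/

    # Rebuild and load the merged rule set, then list what the kernel currently has
    sudo augenrules --load
    sudo auditctl -l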
Jan 30 13:51:22.784469 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:51:22.785099 systemd-logind[1781]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:51:22.786360 systemd[1]: Started sshd@7-139.178.70.53:22-139.178.89.65:49526.service - OpenSSH per-connection server daemon (139.178.89.65:49526). Jan 30 13:51:22.787048 systemd-logind[1781]: Removed session 8. Jan 30 13:51:22.822707 sshd[2032]: Accepted publickey for core from 139.178.89.65 port 49526 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:51:22.823645 sshd-session[2032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:22.827251 systemd-logind[1781]: New session 9 of user core. Jan 30 13:51:22.836570 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:51:22.900067 sudo[2035]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:51:22.900940 sudo[2035]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:23.246693 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:51:23.246759 (dockerd)[2064]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:51:23.506085 dockerd[2064]: time="2025-01-30T13:51:23.505991634Z" level=info msg="Starting up" Jan 30 13:51:23.772413 dockerd[2064]: time="2025-01-30T13:51:23.772279722Z" level=info msg="Loading containers: start." Jan 30 13:51:23.889326 kernel: Initializing XFRM netlink socket Jan 30 13:51:23.925567 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:23.925642 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:23.928698 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:23.981316 systemd-networkd[1712]: docker0: Link UP Jan 30 13:51:23.981717 systemd-timesyncd[1714]: Network configuration changed, trying to establish connection. Jan 30 13:51:24.017221 dockerd[2064]: time="2025-01-30T13:51:24.017176273Z" level=info msg="Loading containers: done." Jan 30 13:51:24.038054 dockerd[2064]: time="2025-01-30T13:51:24.037986628Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:51:24.038054 dockerd[2064]: time="2025-01-30T13:51:24.038030661Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:51:24.038133 dockerd[2064]: time="2025-01-30T13:51:24.038082939Z" level=info msg="Daemon has completed initialization" Jan 30 13:51:24.052028 dockerd[2064]: time="2025-01-30T13:51:24.051967736Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:51:24.052066 systemd[1]: Started docker.service - Docker Application Container Engine. 
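dockerd comes up with the overlay2 storage driver and warns that native diff is disabled because CONFIG_OVERLAY_FS_REDIRECT_DIR is enabled in this kernel; that is a performance note for image builds, not an error. Two quick checks for the driver and the kernel option (a sketch only, since the kernel config location varies by image):

    # Storage driver the running daemon selected
    docker info --format '{{.Driver}}'

    # Kernel option behind the warning, via whichever config source the image exposes
    zcat /proc/config.gz 2>/dev/null | grep OVERLAY_FS_REDIRECT_DIR \
      || grep OVERLAY_FS_REDIRECT_DIR "/boot/config-$(uname -r)" 2>/dev/null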
Jan 30 13:51:24.567742 sshd[1909]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:51:24.811018 containerd[1799]: time="2025-01-30T13:51:24.810902516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:51:25.262235 sshd-session[2277]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:51:25.349071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466043516.mount: Deactivated successfully. Jan 30 13:51:26.101335 containerd[1799]: time="2025-01-30T13:51:26.101280325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:26.101561 containerd[1799]: time="2025-01-30T13:51:26.101336541Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 30 13:51:26.101924 containerd[1799]: time="2025-01-30T13:51:26.101883546Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:26.103402 containerd[1799]: time="2025-01-30T13:51:26.103387509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:26.104133 containerd[1799]: time="2025-01-30T13:51:26.104120804Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.2931258s" Jan 30 13:51:26.104172 containerd[1799]: time="2025-01-30T13:51:26.104138736Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:51:26.105333 containerd[1799]: time="2025-01-30T13:51:26.105323396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:51:27.149079 containerd[1799]: time="2025-01-30T13:51:27.149053392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:27.149302 containerd[1799]: time="2025-01-30T13:51:27.149242659Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 30 13:51:27.149745 containerd[1799]: time="2025-01-30T13:51:27.149706516Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:27.151688 containerd[1799]: time="2025-01-30T13:51:27.151647625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:27.152109 containerd[1799]: time="2025-01-30T13:51:27.152068281Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", 
repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.046729841s" Jan 30 13:51:27.152109 containerd[1799]: time="2025-01-30T13:51:27.152083645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:51:27.152443 containerd[1799]: time="2025-01-30T13:51:27.152407746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:51:27.853473 sshd[1909]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:51:27.972351 containerd[1799]: time="2025-01-30T13:51:27.972322330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:27.972541 containerd[1799]: time="2025-01-30T13:51:27.972519174Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 30 13:51:27.972940 containerd[1799]: time="2025-01-30T13:51:27.972927432Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:27.974746 containerd[1799]: time="2025-01-30T13:51:27.974732339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:27.975251 containerd[1799]: time="2025-01-30T13:51:27.975235816Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 822.787356ms" Jan 30 13:51:27.975287 containerd[1799]: time="2025-01-30T13:51:27.975253306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:51:27.975546 containerd[1799]: time="2025-01-30T13:51:27.975531809Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:51:28.175386 sshd-session[2342]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:51:28.734657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840981595.mount: Deactivated successfully. 
Jan 30 13:51:28.935121 containerd[1799]: time="2025-01-30T13:51:28.935067562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:28.935374 containerd[1799]: time="2025-01-30T13:51:28.935207174Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 30 13:51:28.935627 containerd[1799]: time="2025-01-30T13:51:28.935586851Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:28.936631 containerd[1799]: time="2025-01-30T13:51:28.936590240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:28.937017 containerd[1799]: time="2025-01-30T13:51:28.936976777Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 961.427713ms" Jan 30 13:51:28.937017 containerd[1799]: time="2025-01-30T13:51:28.936992157Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:51:28.937296 containerd[1799]: time="2025-01-30T13:51:28.937284968Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:51:29.451487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396230010.mount: Deactivated successfully. 
Jan 30 13:51:29.924414 containerd[1799]: time="2025-01-30T13:51:29.924356227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:29.924603 containerd[1799]: time="2025-01-30T13:51:29.924553720Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:51:29.925084 containerd[1799]: time="2025-01-30T13:51:29.925042640Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:29.926649 containerd[1799]: time="2025-01-30T13:51:29.926605671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:29.927283 containerd[1799]: time="2025-01-30T13:51:29.927242408Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 989.942809ms" Jan 30 13:51:29.927283 containerd[1799]: time="2025-01-30T13:51:29.927258427Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:51:29.927546 containerd[1799]: time="2025-01-30T13:51:29.927535001Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:51:29.980433 sshd[1909]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:51:30.142916 sshd[1909]: Received disconnect from 218.92.0.155 port 43670:11: [preauth] Jan 30 13:51:30.142916 sshd[1909]: Disconnected from authenticating user root 218.92.0.155 port 43670 [preauth] Jan 30 13:51:30.143761 systemd[1]: sshd@0-139.178.70.53:22-218.92.0.155:43670.service: Deactivated successfully. Jan 30 13:51:30.416885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 30 13:51:30.419616 containerd[1799]: time="2025-01-30T13:51:30.419567008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:30.432641 containerd[1799]: time="2025-01-30T13:51:30.419736192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:51:30.432641 containerd[1799]: time="2025-01-30T13:51:30.420188477Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:30.432641 containerd[1799]: time="2025-01-30T13:51:30.421522545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:30.432641 containerd[1799]: time="2025-01-30T13:51:30.422473561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 494.924854ms" Jan 30 13:51:30.432641 containerd[1799]: time="2025-01-30T13:51:30.422487400Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:51:30.432641 containerd[1799]: time="2025-01-30T13:51:30.422778382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:51:30.432620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:30.433460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170427526.mount: Deactivated successfully. Jan 30 13:51:30.654903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:30.657183 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:30.681668 kubelet[2409]: E0130 13:51:30.681523 2409 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:30.683552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:30.683627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:51:31.064348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481394142.mount: Deactivated successfully. 
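By this point containerd has pulled kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns and pause:3.10, with etcd following just below. These pulls go into containerd's image store rather than Docker's, so listing them goes through the CRI tooling; assuming crictl and ctr are available on the host, a sketch is:

    # CRI view of the pulled images
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images

    # Same content via containerd's own client, scoped to the Kubernetes namespace
    ctr --namespace k8s.io images ls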
Jan 30 13:51:32.107530 containerd[1799]: time="2025-01-30T13:51:32.107475223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:32.107746 containerd[1799]: time="2025-01-30T13:51:32.107589839Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 30 13:51:32.108194 containerd[1799]: time="2025-01-30T13:51:32.108150399Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:32.109887 containerd[1799]: time="2025-01-30T13:51:32.109847795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:32.110672 containerd[1799]: time="2025-01-30T13:51:32.110628719Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.687833024s" Jan 30 13:51:32.110672 containerd[1799]: time="2025-01-30T13:51:32.110648512Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 13:51:33.897488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:33.911665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:33.924914 systemd[1]: Reloading requested from client PID 2533 ('systemctl') (unit session-9.scope)... Jan 30 13:51:33.924920 systemd[1]: Reloading... Jan 30 13:51:33.970350 zram_generator::config[2572]: No configuration found. Jan 30 13:51:34.046284 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:34.107594 systemd[1]: Reloading finished in 182 ms. Jan 30 13:51:34.144633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:34.146251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:34.146905 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:51:34.147005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:34.147845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:34.358218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:34.360580 (kubelet)[2641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:51:34.380332 kubelet[2641]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:34.380332 kubelet[2641]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:51:34.380332 kubelet[2641]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:34.380549 kubelet[2641]: I0130 13:51:34.380343 2641 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:51:34.620570 kubelet[2641]: I0130 13:51:34.620526 2641 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:51:34.620570 kubelet[2641]: I0130 13:51:34.620539 2641 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:51:34.620726 kubelet[2641]: I0130 13:51:34.620689 2641 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:51:34.657519 kubelet[2641]: I0130 13:51:34.657502 2641 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:51:34.658035 kubelet[2641]: E0130 13:51:34.657965 2641 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.53:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:34.663175 kubelet[2641]: E0130 13:51:34.663139 2641 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:51:34.663175 kubelet[2641]: I0130 13:51:34.663169 2641 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:51:34.672861 kubelet[2641]: I0130 13:51:34.672853 2641 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:51:34.673956 kubelet[2641]: I0130 13:51:34.673927 2641 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:51:34.674055 kubelet[2641]: I0130 13:51:34.674005 2641 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:51:34.674164 kubelet[2641]: I0130 13:51:34.674049 2641 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-f55746354a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:51:34.674164 kubelet[2641]: I0130 13:51:34.674164 2641 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:51:34.674243 kubelet[2641]: I0130 13:51:34.674169 2641 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:51:34.674243 kubelet[2641]: I0130 13:51:34.674226 2641 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:34.675739 kubelet[2641]: I0130 13:51:34.675704 2641 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:51:34.675739 kubelet[2641]: I0130 13:51:34.675714 2641 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:51:34.675739 kubelet[2641]: I0130 13:51:34.675729 2641 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:51:34.675739 kubelet[2641]: I0130 13:51:34.675734 2641 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:51:34.677675 kubelet[2641]: W0130 13:51:34.677616 2641 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 30 13:51:34.677742 kubelet[2641]: E0130 13:51:34.677678 2641 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://139.178.70.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.53:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:34.677956 kubelet[2641]: W0130 13:51:34.677905 2641 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-f55746354a&limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 30 13:51:34.677956 kubelet[2641]: E0130 13:51:34.677930 2641 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-f55746354a&limit=500&resourceVersion=0\": dial tcp 139.178.70.53:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:34.679739 kubelet[2641]: I0130 13:51:34.679701 2641 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:51:34.681014 kubelet[2641]: I0130 13:51:34.680978 2641 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:51:34.681553 kubelet[2641]: W0130 13:51:34.681518 2641 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:51:34.681810 kubelet[2641]: I0130 13:51:34.681776 2641 server.go:1269] "Started kubelet" Jan 30 13:51:34.681860 kubelet[2641]: I0130 13:51:34.681817 2641 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:51:34.681955 kubelet[2641]: I0130 13:51:34.681895 2641 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:51:34.682162 kubelet[2641]: I0130 13:51:34.682152 2641 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:51:34.682901 kubelet[2641]: I0130 13:51:34.682891 2641 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:51:34.682901 kubelet[2641]: I0130 13:51:34.682897 2641 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:51:34.683012 kubelet[2641]: I0130 13:51:34.682914 2641 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:51:34.683012 kubelet[2641]: E0130 13:51:34.682940 2641 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-f55746354a\" not found" Jan 30 13:51:34.683012 kubelet[2641]: I0130 13:51:34.682995 2641 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:51:34.683115 kubelet[2641]: I0130 13:51:34.683105 2641 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:51:34.685870 kubelet[2641]: I0130 13:51:34.685829 2641 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:51:34.686160 kubelet[2641]: E0130 13:51:34.685909 2641 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:51:34.686549 kubelet[2641]: W0130 13:51:34.686494 2641 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 30 13:51:34.686646 kubelet[2641]: E0130 13:51:34.686563 2641 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.53:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:34.686750 kubelet[2641]: E0130 13:51:34.686723 2641 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f55746354a?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="200ms" Jan 30 13:51:34.687059 kubelet[2641]: I0130 13:51:34.687050 2641 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:51:34.687120 kubelet[2641]: I0130 13:51:34.687112 2641 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:51:34.687762 kubelet[2641]: I0130 13:51:34.687754 2641 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:51:34.689437 kubelet[2641]: E0130 13:51:34.687475 2641 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.53:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-f55746354a.181f7cb439126738 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-f55746354a,UID:ci-4186.1.0-a-f55746354a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-f55746354a,},FirstTimestamp:2025-01-30 13:51:34.681765688 +0000 UTC m=+0.319275728,LastTimestamp:2025-01-30 13:51:34.681765688 +0000 UTC m=+0.319275728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-f55746354a,}" Jan 30 13:51:34.694221 kubelet[2641]: I0130 13:51:34.694203 2641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:51:34.694827 kubelet[2641]: I0130 13:51:34.694817 2641 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:51:34.694880 kubelet[2641]: I0130 13:51:34.694855 2641 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:51:34.694880 kubelet[2641]: I0130 13:51:34.694866 2641 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:51:34.694944 kubelet[2641]: E0130 13:51:34.694887 2641 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:51:34.695146 kubelet[2641]: W0130 13:51:34.695131 2641 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 30 13:51:34.695181 kubelet[2641]: E0130 13:51:34.695155 2641 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.53:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:34.783308 kubelet[2641]: E0130 13:51:34.783228 2641 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-f55746354a\" not found" Jan 30 13:51:34.795212 kubelet[2641]: E0130 13:51:34.795099 2641 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:51:34.832172 kubelet[2641]: I0130 13:51:34.832083 2641 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:51:34.832172 kubelet[2641]: I0130 13:51:34.832125 2641 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:51:34.832172 kubelet[2641]: I0130 13:51:34.832166 2641 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:34.851514 kubelet[2641]: I0130 13:51:34.851470 2641 policy_none.go:49] "None policy: Start" Jan 30 13:51:34.852000 kubelet[2641]: I0130 13:51:34.851953 2641 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:51:34.852000 kubelet[2641]: I0130 13:51:34.851984 2641 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:51:34.857064 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:51:34.872815 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:51:34.874447 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:51:34.884253 kubelet[2641]: E0130 13:51:34.884240 2641 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-f55746354a\" not found" Jan 30 13:51:34.885931 kubelet[2641]: I0130 13:51:34.885921 2641 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:51:34.886031 kubelet[2641]: I0130 13:51:34.886023 2641 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:51:34.886063 kubelet[2641]: I0130 13:51:34.886032 2641 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:51:34.886181 kubelet[2641]: I0130 13:51:34.886151 2641 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:51:34.886635 kubelet[2641]: E0130 13:51:34.886597 2641 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-f55746354a\" not found" Jan 30 13:51:34.886965 kubelet[2641]: E0130 13:51:34.886922 2641 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f55746354a?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="400ms" Jan 30 13:51:34.990555 kubelet[2641]: I0130 13:51:34.990468 2641 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:34.991270 kubelet[2641]: E0130 13:51:34.991164 2641 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.006518 systemd[1]: Created slice kubepods-burstable-pode0a26097b10a013c09830ca44f56fe21.slice - libcontainer container kubepods-burstable-pode0a26097b10a013c09830ca44f56fe21.slice. Jan 30 13:51:35.034467 systemd[1]: Created slice kubepods-burstable-podf081be31b661fee280505ae8193df5a8.slice - libcontainer container kubepods-burstable-podf081be31b661fee280505ae8193df5a8.slice. Jan 30 13:51:35.047719 systemd[1]: Created slice kubepods-burstable-poddd030eeb0104867033a6e645f5908afa.slice - libcontainer container kubepods-burstable-poddd030eeb0104867033a6e645f5908afa.slice. 
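The kubepods-burstable-pod*.slice units created here belong to the static control-plane pods (kube-apiserver, kube-controller-manager, kube-scheduler) that the kubelet reads from its static pod path, /etc/kubernetes/manifests, as logged earlier. A quick way to correlate the manifests, the cgroup slices and the resulting sandboxes (a sketch; crictl assumed present):

    # Static pod manifests the kubelet is watching
    ls -l /etc/kubernetes/manifests/

    # Slices the kubelet created for those pods
    systemctl list-units --type=slice --no-pager | grep kubepods

    # Pod sandboxes once RunPodSandbox succeeds (see the entries that follow)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods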
Jan 30 13:51:35.186865 kubelet[2641]: I0130 13:51:35.186597 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.186865 kubelet[2641]: I0130 13:51:35.186751 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd030eeb0104867033a6e645f5908afa-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-f55746354a\" (UID: \"dd030eeb0104867033a6e645f5908afa\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.186865 kubelet[2641]: I0130 13:51:35.186844 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0a26097b10a013c09830ca44f56fe21-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-f55746354a\" (UID: \"e0a26097b10a013c09830ca44f56fe21\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.187550 kubelet[2641]: I0130 13:51:35.186899 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.187550 kubelet[2641]: I0130 13:51:35.186958 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.187550 kubelet[2641]: I0130 13:51:35.187009 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.187550 kubelet[2641]: I0130 13:51:35.187060 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0a26097b10a013c09830ca44f56fe21-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-f55746354a\" (UID: \"e0a26097b10a013c09830ca44f56fe21\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.187550 kubelet[2641]: I0130 13:51:35.187109 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0a26097b10a013c09830ca44f56fe21-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-f55746354a\" (UID: \"e0a26097b10a013c09830ca44f56fe21\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.188351 kubelet[2641]: I0130 13:51:35.187158 2641 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.195820 kubelet[2641]: I0130 13:51:35.195729 2641 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.196591 kubelet[2641]: E0130 13:51:35.196484 2641 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.288317 kubelet[2641]: E0130 13:51:35.288192 2641 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-f55746354a?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="800ms" Jan 30 13:51:35.334264 containerd[1799]: time="2025-01-30T13:51:35.334128130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-f55746354a,Uid:e0a26097b10a013c09830ca44f56fe21,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:35.345922 containerd[1799]: time="2025-01-30T13:51:35.345877812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-f55746354a,Uid:f081be31b661fee280505ae8193df5a8,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:35.350767 containerd[1799]: time="2025-01-30T13:51:35.350686986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-f55746354a,Uid:dd030eeb0104867033a6e645f5908afa,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:35.598626 kubelet[2641]: I0130 13:51:35.598578 2641 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.598930 kubelet[2641]: E0130 13:51:35.598868 2641 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:35.773792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1369422660.mount: Deactivated successfully. 
Jan 30 13:51:35.775380 containerd[1799]: time="2025-01-30T13:51:35.775329243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:35.776072 containerd[1799]: time="2025-01-30T13:51:35.776054898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:51:35.776467 containerd[1799]: time="2025-01-30T13:51:35.776426673Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:35.776952 containerd[1799]: time="2025-01-30T13:51:35.776910907Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:35.777108 containerd[1799]: time="2025-01-30T13:51:35.777075657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:51:35.777570 containerd[1799]: time="2025-01-30T13:51:35.777530113Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:35.777724 containerd[1799]: time="2025-01-30T13:51:35.777710376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:51:35.778627 containerd[1799]: time="2025-01-30T13:51:35.778586192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:35.780105 containerd[1799]: time="2025-01-30T13:51:35.780082071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.724881ms" Jan 30 13:51:35.780846 containerd[1799]: time="2025-01-30T13:51:35.780826544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 430.048764ms" Jan 30 13:51:35.781776 containerd[1799]: time="2025-01-30T13:51:35.781763170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 435.825101ms" Jan 30 13:51:35.860594 kubelet[2641]: W0130 13:51:35.860512 2641 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Jan 30 
13:51:35.860594 kubelet[2641]: E0130 13:51:35.860559 2641 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.53:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:51:35.882663 containerd[1799]: time="2025-01-30T13:51:35.882601635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:35.882663 containerd[1799]: time="2025-01-30T13:51:35.882632884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:35.882663 containerd[1799]: time="2025-01-30T13:51:35.882640006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:35.882663 containerd[1799]: time="2025-01-30T13:51:35.882413045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:35.882663 containerd[1799]: time="2025-01-30T13:51:35.882646681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:35.882663 containerd[1799]: time="2025-01-30T13:51:35.882654577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:35.882844 containerd[1799]: time="2025-01-30T13:51:35.882660532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:35.882844 containerd[1799]: time="2025-01-30T13:51:35.882681951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:35.882844 containerd[1799]: time="2025-01-30T13:51:35.882688711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:35.882844 containerd[1799]: time="2025-01-30T13:51:35.882690209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:35.882844 containerd[1799]: time="2025-01-30T13:51:35.882695641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:35.882844 containerd[1799]: time="2025-01-30T13:51:35.882726433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:35.909628 systemd[1]: Started cri-containerd-2aced902d2e3f07d01e18344d17f7dd2e3dfaf12510d2ce4bf238e299ed7347a.scope - libcontainer container 2aced902d2e3f07d01e18344d17f7dd2e3dfaf12510d2ce4bf238e299ed7347a. Jan 30 13:51:35.910434 systemd[1]: Started cri-containerd-879b46f3eb23933f3eeabf8057ec0d5e1abfb94040c54d882ab932348dbafdeb.scope - libcontainer container 879b46f3eb23933f3eeabf8057ec0d5e1abfb94040c54d882ab932348dbafdeb. 
Jan 30 13:51:35.911301 systemd[1]: Started cri-containerd-ccc0a54bfd427adc0fefe403866a25413be076d21f3d50c2d69aac715ad3bb1a.scope - libcontainer container ccc0a54bfd427adc0fefe403866a25413be076d21f3d50c2d69aac715ad3bb1a. Jan 30 13:51:35.933697 containerd[1799]: time="2025-01-30T13:51:35.933667008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-f55746354a,Uid:e0a26097b10a013c09830ca44f56fe21,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aced902d2e3f07d01e18344d17f7dd2e3dfaf12510d2ce4bf238e299ed7347a\"" Jan 30 13:51:35.934255 containerd[1799]: time="2025-01-30T13:51:35.934242495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-f55746354a,Uid:f081be31b661fee280505ae8193df5a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"879b46f3eb23933f3eeabf8057ec0d5e1abfb94040c54d882ab932348dbafdeb\"" Jan 30 13:51:35.935531 containerd[1799]: time="2025-01-30T13:51:35.935514831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-f55746354a,Uid:dd030eeb0104867033a6e645f5908afa,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccc0a54bfd427adc0fefe403866a25413be076d21f3d50c2d69aac715ad3bb1a\"" Jan 30 13:51:35.935636 containerd[1799]: time="2025-01-30T13:51:35.935620866Z" level=info msg="CreateContainer within sandbox \"2aced902d2e3f07d01e18344d17f7dd2e3dfaf12510d2ce4bf238e299ed7347a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:51:35.935683 containerd[1799]: time="2025-01-30T13:51:35.935669796Z" level=info msg="CreateContainer within sandbox \"879b46f3eb23933f3eeabf8057ec0d5e1abfb94040c54d882ab932348dbafdeb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:51:35.936381 containerd[1799]: time="2025-01-30T13:51:35.936368450Z" level=info msg="CreateContainer within sandbox \"ccc0a54bfd427adc0fefe403866a25413be076d21f3d50c2d69aac715ad3bb1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:51:35.942405 containerd[1799]: time="2025-01-30T13:51:35.942368746Z" level=info msg="CreateContainer within sandbox \"2aced902d2e3f07d01e18344d17f7dd2e3dfaf12510d2ce4bf238e299ed7347a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f556758bd314a0d2dab58761261703b88aa644dd3bc65d44730cbad8b4497fe2\"" Jan 30 13:51:35.942657 containerd[1799]: time="2025-01-30T13:51:35.942616053Z" level=info msg="StartContainer for \"f556758bd314a0d2dab58761261703b88aa644dd3bc65d44730cbad8b4497fe2\"" Jan 30 13:51:35.943248 containerd[1799]: time="2025-01-30T13:51:35.943234542Z" level=info msg="CreateContainer within sandbox \"879b46f3eb23933f3eeabf8057ec0d5e1abfb94040c54d882ab932348dbafdeb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da5409844b70668a889bc2a1306e4cbaa09b9e009ce8b12896052c96408de4f4\"" Jan 30 13:51:35.943368 containerd[1799]: time="2025-01-30T13:51:35.943358256Z" level=info msg="StartContainer for \"da5409844b70668a889bc2a1306e4cbaa09b9e009ce8b12896052c96408de4f4\"" Jan 30 13:51:35.943619 containerd[1799]: time="2025-01-30T13:51:35.943605645Z" level=info msg="CreateContainer within sandbox \"ccc0a54bfd427adc0fefe403866a25413be076d21f3d50c2d69aac715ad3bb1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d81b334afa22003848c54cb3ee1fc6d1878e93fbebdb5cbc03e74db629262314\"" Jan 30 13:51:35.943790 containerd[1799]: time="2025-01-30T13:51:35.943779308Z" level=info msg="StartContainer for 
\"d81b334afa22003848c54cb3ee1fc6d1878e93fbebdb5cbc03e74db629262314\"" Jan 30 13:51:35.973637 systemd[1]: Started cri-containerd-d81b334afa22003848c54cb3ee1fc6d1878e93fbebdb5cbc03e74db629262314.scope - libcontainer container d81b334afa22003848c54cb3ee1fc6d1878e93fbebdb5cbc03e74db629262314. Jan 30 13:51:35.974286 systemd[1]: Started cri-containerd-da5409844b70668a889bc2a1306e4cbaa09b9e009ce8b12896052c96408de4f4.scope - libcontainer container da5409844b70668a889bc2a1306e4cbaa09b9e009ce8b12896052c96408de4f4. Jan 30 13:51:35.974996 systemd[1]: Started cri-containerd-f556758bd314a0d2dab58761261703b88aa644dd3bc65d44730cbad8b4497fe2.scope - libcontainer container f556758bd314a0d2dab58761261703b88aa644dd3bc65d44730cbad8b4497fe2. Jan 30 13:51:35.998269 containerd[1799]: time="2025-01-30T13:51:35.998213560Z" level=info msg="StartContainer for \"d81b334afa22003848c54cb3ee1fc6d1878e93fbebdb5cbc03e74db629262314\" returns successfully" Jan 30 13:51:35.998269 containerd[1799]: time="2025-01-30T13:51:35.998275931Z" level=info msg="StartContainer for \"da5409844b70668a889bc2a1306e4cbaa09b9e009ce8b12896052c96408de4f4\" returns successfully" Jan 30 13:51:35.999251 containerd[1799]: time="2025-01-30T13:51:35.999236727Z" level=info msg="StartContainer for \"f556758bd314a0d2dab58761261703b88aa644dd3bc65d44730cbad8b4497fe2\" returns successfully" Jan 30 13:51:36.400811 kubelet[2641]: I0130 13:51:36.400764 2641 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:36.516081 kubelet[2641]: E0130 13:51:36.516039 2641 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-f55746354a\" not found" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:36.617561 kubelet[2641]: I0130 13:51:36.617500 2641 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:36.676758 kubelet[2641]: I0130 13:51:36.676683 2641 apiserver.go:52] "Watching apiserver" Jan 30 13:51:36.686140 kubelet[2641]: I0130 13:51:36.686122 2641 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:51:36.702681 kubelet[2641]: E0130 13:51:36.702648 2641 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-f55746354a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:36.702818 kubelet[2641]: E0130 13:51:36.702688 2641 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-f55746354a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186.1.0-a-f55746354a" Jan 30 13:51:36.702818 kubelet[2641]: E0130 13:51:36.702659 2641 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:37.713887 kubelet[2641]: W0130 13:51:37.713778 2641 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:38.810667 systemd[1]: Reloading requested from client PID 2959 ('systemctl') (unit session-9.scope)... Jan 30 13:51:38.810675 systemd[1]: Reloading... Jan 30 13:51:38.856417 zram_generator::config[2998]: No configuration found. 
Jan 30 13:51:38.929871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:38.998575 systemd[1]: Reloading finished in 187 ms. Jan 30 13:51:39.021287 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:39.033094 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:51:39.033197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:39.056762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:39.294704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:39.297501 (kubelet)[3061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:51:39.316930 kubelet[3061]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:39.316930 kubelet[3061]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:51:39.316930 kubelet[3061]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:39.317206 kubelet[3061]: I0130 13:51:39.316934 3061 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:51:39.320245 kubelet[3061]: I0130 13:51:39.320209 3061 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:51:39.320245 kubelet[3061]: I0130 13:51:39.320231 3061 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:51:39.321057 kubelet[3061]: I0130 13:51:39.321018 3061 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:51:39.321824 kubelet[3061]: I0130 13:51:39.321787 3061 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:51:39.322948 kubelet[3061]: I0130 13:51:39.322910 3061 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:51:39.324359 kubelet[3061]: E0130 13:51:39.324340 3061 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:51:39.324359 kubelet[3061]: I0130 13:51:39.324354 3061 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:51:39.331707 kubelet[3061]: I0130 13:51:39.331664 3061 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:51:39.331764 kubelet[3061]: I0130 13:51:39.331729 3061 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:51:39.331833 kubelet[3061]: I0130 13:51:39.331786 3061 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:51:39.331929 kubelet[3061]: I0130 13:51:39.331801 3061 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-f55746354a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:51:39.331929 kubelet[3061]: I0130 13:51:39.331902 3061 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:51:39.331929 kubelet[3061]: I0130 13:51:39.331907 3061 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:51:39.331929 kubelet[3061]: I0130 13:51:39.331924 3061 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:39.332030 kubelet[3061]: I0130 13:51:39.331977 3061 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:51:39.332030 kubelet[3061]: I0130 13:51:39.331984 3061 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:51:39.332030 kubelet[3061]: I0130 13:51:39.331999 3061 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:51:39.332030 kubelet[3061]: I0130 13:51:39.332006 3061 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:51:39.332379 kubelet[3061]: I0130 13:51:39.332369 3061 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:51:39.332637 kubelet[3061]: I0130 13:51:39.332595 3061 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:51:39.332813 kubelet[3061]: I0130 13:51:39.332805 3061 server.go:1269] "Started kubelet" Jan 30 13:51:39.332892 kubelet[3061]: I0130 13:51:39.332855 3061 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Jan 30 13:51:39.332892 kubelet[3061]: I0130 13:51:39.332871 3061 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:51:39.333028 kubelet[3061]: I0130 13:51:39.333020 3061 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:51:39.333465 kubelet[3061]: I0130 13:51:39.333458 3061 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:51:39.333494 kubelet[3061]: I0130 13:51:39.333474 3061 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:51:39.333494 kubelet[3061]: I0130 13:51:39.333488 3061 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:51:39.333559 kubelet[3061]: E0130 13:51:39.333536 3061 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-f55746354a\" not found" Jan 30 13:51:39.333705 kubelet[3061]: I0130 13:51:39.333656 3061 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:51:39.334263 kubelet[3061]: I0130 13:51:39.334187 3061 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:51:39.335112 kubelet[3061]: I0130 13:51:39.335101 3061 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:51:39.336423 kubelet[3061]: E0130 13:51:39.336409 3061 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:51:39.337047 kubelet[3061]: I0130 13:51:39.337037 3061 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:51:39.337047 kubelet[3061]: I0130 13:51:39.337047 3061 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:51:39.337116 kubelet[3061]: I0130 13:51:39.337097 3061 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:51:39.339726 kubelet[3061]: I0130 13:51:39.339702 3061 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:51:39.340287 kubelet[3061]: I0130 13:51:39.340277 3061 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:51:39.340337 kubelet[3061]: I0130 13:51:39.340303 3061 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:51:39.340337 kubelet[3061]: I0130 13:51:39.340321 3061 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:51:39.340414 kubelet[3061]: E0130 13:51:39.340362 3061 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:51:39.351717 kubelet[3061]: I0130 13:51:39.351663 3061 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:51:39.351717 kubelet[3061]: I0130 13:51:39.351673 3061 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:51:39.351717 kubelet[3061]: I0130 13:51:39.351684 3061 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:39.351842 kubelet[3061]: I0130 13:51:39.351767 3061 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:51:39.351842 kubelet[3061]: I0130 13:51:39.351774 3061 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:51:39.351842 kubelet[3061]: I0130 13:51:39.351786 3061 policy_none.go:49] "None policy: Start" Jan 30 13:51:39.352027 kubelet[3061]: I0130 13:51:39.352017 3061 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:51:39.352058 kubelet[3061]: I0130 13:51:39.352029 3061 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:51:39.352126 kubelet[3061]: I0130 13:51:39.352121 3061 state_mem.go:75] "Updated machine memory state" Jan 30 13:51:39.354022 kubelet[3061]: I0130 13:51:39.354012 3061 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:51:39.354117 kubelet[3061]: I0130 13:51:39.354109 3061 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:51:39.354149 kubelet[3061]: I0130 13:51:39.354116 3061 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:51:39.354221 kubelet[3061]: I0130 13:51:39.354215 3061 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:51:39.449086 kubelet[3061]: W0130 13:51:39.449023 3061 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:39.449562 kubelet[3061]: W0130 13:51:39.449479 3061 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:39.450580 kubelet[3061]: W0130 13:51:39.450548 3061 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:39.450717 kubelet[3061]: E0130 13:51:39.450656 3061 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-f55746354a\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.461206 kubelet[3061]: I0130 13:51:39.461115 3061 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.470179 kubelet[3061]: I0130 13:51:39.470089 3061 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.470376 kubelet[3061]: I0130 13:51:39.470236 3061 kubelet_node_status.go:75] "Successfully registered node" 
node="ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.535138 kubelet[3061]: I0130 13:51:39.535062 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd030eeb0104867033a6e645f5908afa-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-f55746354a\" (UID: \"dd030eeb0104867033a6e645f5908afa\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.535138 kubelet[3061]: I0130 13:51:39.535145 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0a26097b10a013c09830ca44f56fe21-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-f55746354a\" (UID: \"e0a26097b10a013c09830ca44f56fe21\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.535618 kubelet[3061]: I0130 13:51:39.535212 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0a26097b10a013c09830ca44f56fe21-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-f55746354a\" (UID: \"e0a26097b10a013c09830ca44f56fe21\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.535618 kubelet[3061]: I0130 13:51:39.535264 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.535618 kubelet[3061]: I0130 13:51:39.535367 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.535618 kubelet[3061]: I0130 13:51:39.535424 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.535618 kubelet[3061]: I0130 13:51:39.535567 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.536257 kubelet[3061]: I0130 13:51:39.535706 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f081be31b661fee280505ae8193df5a8-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-f55746354a\" (UID: \"f081be31b661fee280505ae8193df5a8\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" Jan 30 13:51:39.536257 kubelet[3061]: I0130 13:51:39.535825 3061 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0a26097b10a013c09830ca44f56fe21-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-f55746354a\" (UID: \"e0a26097b10a013c09830ca44f56fe21\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:40.332417 kubelet[3061]: I0130 13:51:40.332375 3061 apiserver.go:52] "Watching apiserver" Jan 30 13:51:40.334767 kubelet[3061]: I0130 13:51:40.334739 3061 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:51:40.348033 kubelet[3061]: W0130 13:51:40.348002 3061 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:40.348033 kubelet[3061]: W0130 13:51:40.348010 3061 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:40.348196 kubelet[3061]: E0130 13:51:40.348075 3061 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-f55746354a\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-f55746354a" Jan 30 13:51:40.348196 kubelet[3061]: E0130 13:51:40.348077 3061 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-f55746354a\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" Jan 30 13:51:40.376427 kubelet[3061]: I0130 13:51:40.373996 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-f55746354a" podStartSLOduration=1.373965678 podStartE2EDuration="1.373965678s" podCreationTimestamp="2025-01-30 13:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:40.364726683 +0000 UTC m=+1.064930602" watchObservedRunningTime="2025-01-30 13:51:40.373965678 +0000 UTC m=+1.074169591" Jan 30 13:51:40.382334 kubelet[3061]: I0130 13:51:40.382291 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-f55746354a" podStartSLOduration=3.3822783149999998 podStartE2EDuration="3.382278315s" podCreationTimestamp="2025-01-30 13:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:40.382254153 +0000 UTC m=+1.082458067" watchObservedRunningTime="2025-01-30 13:51:40.382278315 +0000 UTC m=+1.082482223" Jan 30 13:51:40.382459 kubelet[3061]: I0130 13:51:40.382370 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-f55746354a" podStartSLOduration=1.382367419 podStartE2EDuration="1.382367419s" podCreationTimestamp="2025-01-30 13:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:40.37443403 +0000 UTC m=+1.074637950" watchObservedRunningTime="2025-01-30 13:51:40.382367419 +0000 UTC m=+1.082571328" Jan 30 13:51:43.330439 sudo[2035]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:43.331126 sshd[2034]: Connection closed by 139.178.89.65 port 49526 Jan 30 13:51:43.331267 sshd-session[2032]: pam_unix(sshd:session): session closed for user core Jan 
30 13:51:43.333205 systemd[1]: sshd@7-139.178.70.53:22-139.178.89.65:49526.service: Deactivated successfully. Jan 30 13:51:43.334014 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:51:43.334093 systemd[1]: session-9.scope: Consumed 3.235s CPU time, 159.4M memory peak, 0B memory swap peak. Jan 30 13:51:43.334492 systemd-logind[1781]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:51:43.335124 systemd-logind[1781]: Removed session 9. Jan 30 13:51:44.616065 kubelet[3061]: I0130 13:51:44.615950 3061 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:51:44.617223 kubelet[3061]: I0130 13:51:44.617150 3061 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:51:44.617395 containerd[1799]: time="2025-01-30T13:51:44.616663456Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:51:45.172289 systemd[1]: Created slice kubepods-besteffort-pod8772fa0c_f6da_4b9a_9f05_ab95586a145f.slice - libcontainer container kubepods-besteffort-pod8772fa0c_f6da_4b9a_9f05_ab95586a145f.slice. Jan 30 13:51:45.179546 kubelet[3061]: I0130 13:51:45.179452 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8772fa0c-f6da-4b9a-9f05-ab95586a145f-lib-modules\") pod \"kube-proxy-h2bjd\" (UID: \"8772fa0c-f6da-4b9a-9f05-ab95586a145f\") " pod="kube-system/kube-proxy-h2bjd" Jan 30 13:51:45.179847 kubelet[3061]: I0130 13:51:45.179557 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrxx\" (UniqueName: \"kubernetes.io/projected/8772fa0c-f6da-4b9a-9f05-ab95586a145f-kube-api-access-wfrxx\") pod \"kube-proxy-h2bjd\" (UID: \"8772fa0c-f6da-4b9a-9f05-ab95586a145f\") " pod="kube-system/kube-proxy-h2bjd" Jan 30 13:51:45.179847 kubelet[3061]: I0130 13:51:45.179627 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8772fa0c-f6da-4b9a-9f05-ab95586a145f-kube-proxy\") pod \"kube-proxy-h2bjd\" (UID: \"8772fa0c-f6da-4b9a-9f05-ab95586a145f\") " pod="kube-system/kube-proxy-h2bjd" Jan 30 13:51:45.179847 kubelet[3061]: I0130 13:51:45.179753 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8772fa0c-f6da-4b9a-9f05-ab95586a145f-xtables-lock\") pod \"kube-proxy-h2bjd\" (UID: \"8772fa0c-f6da-4b9a-9f05-ab95586a145f\") " pod="kube-system/kube-proxy-h2bjd" Jan 30 13:51:45.290048 kubelet[3061]: E0130 13:51:45.289945 3061 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 13:51:45.290048 kubelet[3061]: E0130 13:51:45.289997 3061 projected.go:194] Error preparing data for projected volume kube-api-access-wfrxx for pod kube-system/kube-proxy-h2bjd: configmap "kube-root-ca.crt" not found Jan 30 13:51:45.290463 kubelet[3061]: E0130 13:51:45.290099 3061 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8772fa0c-f6da-4b9a-9f05-ab95586a145f-kube-api-access-wfrxx podName:8772fa0c-f6da-4b9a-9f05-ab95586a145f nodeName:}" failed. No retries permitted until 2025-01-30 13:51:45.790061624 +0000 UTC m=+6.490265582 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wfrxx" (UniqueName: "kubernetes.io/projected/8772fa0c-f6da-4b9a-9f05-ab95586a145f-kube-api-access-wfrxx") pod "kube-proxy-h2bjd" (UID: "8772fa0c-f6da-4b9a-9f05-ab95586a145f") : configmap "kube-root-ca.crt" not found Jan 30 13:51:45.758267 systemd[1]: Created slice kubepods-besteffort-pod3b79ee7d_c97c_4e69_bea2_e1c214bc1fa8.slice - libcontainer container kubepods-besteffort-pod3b79ee7d_c97c_4e69_bea2_e1c214bc1fa8.slice. Jan 30 13:51:45.785285 kubelet[3061]: I0130 13:51:45.785217 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdzfc\" (UniqueName: \"kubernetes.io/projected/3b79ee7d-c97c-4e69-bea2-e1c214bc1fa8-kube-api-access-xdzfc\") pod \"tigera-operator-76c4976dd7-4z4dd\" (UID: \"3b79ee7d-c97c-4e69-bea2-e1c214bc1fa8\") " pod="tigera-operator/tigera-operator-76c4976dd7-4z4dd" Jan 30 13:51:45.785285 kubelet[3061]: I0130 13:51:45.785284 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b79ee7d-c97c-4e69-bea2-e1c214bc1fa8-var-lib-calico\") pod \"tigera-operator-76c4976dd7-4z4dd\" (UID: \"3b79ee7d-c97c-4e69-bea2-e1c214bc1fa8\") " pod="tigera-operator/tigera-operator-76c4976dd7-4z4dd" Jan 30 13:51:46.062131 containerd[1799]: time="2025-01-30T13:51:46.062023468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-4z4dd,Uid:3b79ee7d-c97c-4e69-bea2-e1c214bc1fa8,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:51:46.103972 containerd[1799]: time="2025-01-30T13:51:46.103874871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h2bjd,Uid:8772fa0c-f6da-4b9a-9f05-ab95586a145f,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:46.538424 containerd[1799]: time="2025-01-30T13:51:46.538383889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:46.538619 containerd[1799]: time="2025-01-30T13:51:46.538604829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:46.538650 containerd[1799]: time="2025-01-30T13:51:46.538619386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:46.538670 containerd[1799]: time="2025-01-30T13:51:46.538662733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:46.555500 systemd[1]: Started cri-containerd-d32b1ea758015f03e13b1fd48156a86a0906ee565904ae5110d860f286223887.scope - libcontainer container d32b1ea758015f03e13b1fd48156a86a0906ee565904ae5110d860f286223887. Jan 30 13:51:46.587396 containerd[1799]: time="2025-01-30T13:51:46.587335811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-4z4dd,Uid:3b79ee7d-c97c-4e69-bea2-e1c214bc1fa8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d32b1ea758015f03e13b1fd48156a86a0906ee565904ae5110d860f286223887\"" Jan 30 13:51:46.588401 containerd[1799]: time="2025-01-30T13:51:46.588382348Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:51:46.615379 containerd[1799]: time="2025-01-30T13:51:46.615283848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:46.615624 containerd[1799]: time="2025-01-30T13:51:46.615547475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:46.615624 containerd[1799]: time="2025-01-30T13:51:46.615560614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:46.615679 containerd[1799]: time="2025-01-30T13:51:46.615615552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:46.645502 systemd[1]: Started cri-containerd-0c9d6cdace7869dbded8246bfdd409bba73c89f5433b7e514ac1b5713b7c7205.scope - libcontainer container 0c9d6cdace7869dbded8246bfdd409bba73c89f5433b7e514ac1b5713b7c7205. Jan 30 13:51:46.659877 containerd[1799]: time="2025-01-30T13:51:46.659845796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h2bjd,Uid:8772fa0c-f6da-4b9a-9f05-ab95586a145f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c9d6cdace7869dbded8246bfdd409bba73c89f5433b7e514ac1b5713b7c7205\"" Jan 30 13:51:46.661102 containerd[1799]: time="2025-01-30T13:51:46.661059831Z" level=info msg="CreateContainer within sandbox \"0c9d6cdace7869dbded8246bfdd409bba73c89f5433b7e514ac1b5713b7c7205\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:51:46.686874 containerd[1799]: time="2025-01-30T13:51:46.686826273Z" level=info msg="CreateContainer within sandbox \"0c9d6cdace7869dbded8246bfdd409bba73c89f5433b7e514ac1b5713b7c7205\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39e76b9e27fb61941d93d4986ae7abd8fcf17473598dab66c38d08c7923b5319\"" Jan 30 13:51:46.687119 containerd[1799]: time="2025-01-30T13:51:46.687074000Z" level=info msg="StartContainer for \"39e76b9e27fb61941d93d4986ae7abd8fcf17473598dab66c38d08c7923b5319\"" Jan 30 13:51:46.711602 systemd[1]: Started cri-containerd-39e76b9e27fb61941d93d4986ae7abd8fcf17473598dab66c38d08c7923b5319.scope - libcontainer container 39e76b9e27fb61941d93d4986ae7abd8fcf17473598dab66c38d08c7923b5319. Jan 30 13:51:46.728640 containerd[1799]: time="2025-01-30T13:51:46.728588684Z" level=info msg="StartContainer for \"39e76b9e27fb61941d93d4986ae7abd8fcf17473598dab66c38d08c7923b5319\" returns successfully" Jan 30 13:51:47.381910 kubelet[3061]: I0130 13:51:47.381796 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h2bjd" podStartSLOduration=2.3817557 podStartE2EDuration="2.3817557s" podCreationTimestamp="2025-01-30 13:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:47.381673654 +0000 UTC m=+8.081877633" watchObservedRunningTime="2025-01-30 13:51:47.3817557 +0000 UTC m=+8.081959658" Jan 30 13:51:48.597756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301384121.mount: Deactivated successfully. 
Jan 30 13:51:48.812382 containerd[1799]: time="2025-01-30T13:51:48.812314582Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:48.812596 containerd[1799]: time="2025-01-30T13:51:48.812493558Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:51:48.812789 containerd[1799]: time="2025-01-30T13:51:48.812750469Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:48.814219 containerd[1799]: time="2025-01-30T13:51:48.814174600Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:48.814498 containerd[1799]: time="2025-01-30T13:51:48.814456998Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.22605395s" Jan 30 13:51:48.814498 containerd[1799]: time="2025-01-30T13:51:48.814473195Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:51:48.815489 containerd[1799]: time="2025-01-30T13:51:48.815475713Z" level=info msg="CreateContainer within sandbox \"d32b1ea758015f03e13b1fd48156a86a0906ee565904ae5110d860f286223887\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:51:48.819343 containerd[1799]: time="2025-01-30T13:51:48.819293042Z" level=info msg="CreateContainer within sandbox \"d32b1ea758015f03e13b1fd48156a86a0906ee565904ae5110d860f286223887\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3142152f59a7e28bd64aeb6836e81833cedd6a8b0c60eac25e7dfc8575d42aa3\"" Jan 30 13:51:48.819534 containerd[1799]: time="2025-01-30T13:51:48.819521155Z" level=info msg="StartContainer for \"3142152f59a7e28bd64aeb6836e81833cedd6a8b0c60eac25e7dfc8575d42aa3\"" Jan 30 13:51:48.843481 systemd[1]: Started cri-containerd-3142152f59a7e28bd64aeb6836e81833cedd6a8b0c60eac25e7dfc8575d42aa3.scope - libcontainer container 3142152f59a7e28bd64aeb6836e81833cedd6a8b0c60eac25e7dfc8575d42aa3. 
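(The pull entries above, from PullImage "quay.io/tigera/operator:v1.36.2" through "Pulled image ... returns image reference", are the CRI ImageService side of the same containerd socket. A minimal sketch, assuming the default socket path; the resolved reference printed corresponds to the sha256-pinned digest reported in the log.)

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)

	// Pull the operator image named in the log; the response carries the resolved reference.
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", resp.ImageRef)
}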
Jan 30 13:51:48.854178 containerd[1799]: time="2025-01-30T13:51:48.854114625Z" level=info msg="StartContainer for \"3142152f59a7e28bd64aeb6836e81833cedd6a8b0c60eac25e7dfc8575d42aa3\" returns successfully" Jan 30 13:51:49.387976 kubelet[3061]: I0130 13:51:49.387825 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-4z4dd" podStartSLOduration=2.161000419 podStartE2EDuration="4.387787456s" podCreationTimestamp="2025-01-30 13:51:45 +0000 UTC" firstStartedPulling="2025-01-30 13:51:46.588119267 +0000 UTC m=+7.288323182" lastFinishedPulling="2025-01-30 13:51:48.81490631 +0000 UTC m=+9.515110219" observedRunningTime="2025-01-30 13:51:49.387450865 +0000 UTC m=+10.087654876" watchObservedRunningTime="2025-01-30 13:51:49.387787456 +0000 UTC m=+10.087991419" Jan 30 13:51:51.669941 systemd[1]: Created slice kubepods-besteffort-podb8202b04_7538_4f47_8af4_0f67d1afb744.slice - libcontainer container kubepods-besteffort-podb8202b04_7538_4f47_8af4_0f67d1afb744.slice. Jan 30 13:51:51.694275 systemd[1]: Created slice kubepods-besteffort-pod93a521a3_ad1c_4f57_b494_9f6d4f2f046e.slice - libcontainer container kubepods-besteffort-pod93a521a3_ad1c_4f57_b494_9f6d4f2f046e.slice. Jan 30 13:51:51.725714 kubelet[3061]: I0130 13:51:51.725603 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8202b04-7538-4f47-8af4-0f67d1afb744-tigera-ca-bundle\") pod \"calico-typha-68b9b9555c-qs9t9\" (UID: \"b8202b04-7538-4f47-8af4-0f67d1afb744\") " pod="calico-system/calico-typha-68b9b9555c-qs9t9" Jan 30 13:51:51.726517 kubelet[3061]: I0130 13:51:51.725725 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q22m\" (UniqueName: \"kubernetes.io/projected/b8202b04-7538-4f47-8af4-0f67d1afb744-kube-api-access-2q22m\") pod \"calico-typha-68b9b9555c-qs9t9\" (UID: \"b8202b04-7538-4f47-8af4-0f67d1afb744\") " pod="calico-system/calico-typha-68b9b9555c-qs9t9" Jan 30 13:51:51.726517 kubelet[3061]: I0130 13:51:51.725817 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-cni-net-dir\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.726517 kubelet[3061]: I0130 13:51:51.725903 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-cni-log-dir\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.726517 kubelet[3061]: I0130 13:51:51.725972 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxsgm\" (UniqueName: \"kubernetes.io/projected/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-kube-api-access-zxsgm\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.726517 kubelet[3061]: I0130 13:51:51.726037 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-var-lib-calico\") pod \"calico-node-gn94v\" (UID: 
\"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.727087 kubelet[3061]: I0130 13:51:51.726104 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-xtables-lock\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.727087 kubelet[3061]: I0130 13:51:51.726205 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b8202b04-7538-4f47-8af4-0f67d1afb744-typha-certs\") pod \"calico-typha-68b9b9555c-qs9t9\" (UID: \"b8202b04-7538-4f47-8af4-0f67d1afb744\") " pod="calico-system/calico-typha-68b9b9555c-qs9t9" Jan 30 13:51:51.727087 kubelet[3061]: I0130 13:51:51.726267 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-tigera-ca-bundle\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.727087 kubelet[3061]: I0130 13:51:51.726367 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-var-run-calico\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.727087 kubelet[3061]: I0130 13:51:51.726451 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-policysync\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.727585 kubelet[3061]: I0130 13:51:51.726519 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-flexvol-driver-host\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.727585 kubelet[3061]: I0130 13:51:51.726583 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-lib-modules\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.727585 kubelet[3061]: I0130 13:51:51.726646 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-node-certs\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.727585 kubelet[3061]: I0130 13:51:51.726722 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/93a521a3-ad1c-4f57-b494-9f6d4f2f046e-cni-bin-dir\") pod \"calico-node-gn94v\" (UID: \"93a521a3-ad1c-4f57-b494-9f6d4f2f046e\") " 
pod="calico-system/calico-node-gn94v" Jan 30 13:51:51.816753 kubelet[3061]: E0130 13:51:51.816645 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:51:51.827824 kubelet[3061]: I0130 13:51:51.827769 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/08ec3d9c-69d5-48e2-969e-46a8611fadde-varrun\") pod \"csi-node-driver-gpjs7\" (UID: \"08ec3d9c-69d5-48e2-969e-46a8611fadde\") " pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:51.828027 kubelet[3061]: I0130 13:51:51.827870 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08ec3d9c-69d5-48e2-969e-46a8611fadde-registration-dir\") pod \"csi-node-driver-gpjs7\" (UID: \"08ec3d9c-69d5-48e2-969e-46a8611fadde\") " pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:51.828390 kubelet[3061]: I0130 13:51:51.828315 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08ec3d9c-69d5-48e2-969e-46a8611fadde-kubelet-dir\") pod \"csi-node-driver-gpjs7\" (UID: \"08ec3d9c-69d5-48e2-969e-46a8611fadde\") " pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:51.828706 kubelet[3061]: I0130 13:51:51.828561 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08ec3d9c-69d5-48e2-969e-46a8611fadde-socket-dir\") pod \"csi-node-driver-gpjs7\" (UID: \"08ec3d9c-69d5-48e2-969e-46a8611fadde\") " pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:51.828706 kubelet[3061]: I0130 13:51:51.828629 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qv8r\" (UniqueName: \"kubernetes.io/projected/08ec3d9c-69d5-48e2-969e-46a8611fadde-kube-api-access-4qv8r\") pod \"csi-node-driver-gpjs7\" (UID: \"08ec3d9c-69d5-48e2-969e-46a8611fadde\") " pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:51.829474 kubelet[3061]: E0130 13:51:51.829261 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.829474 kubelet[3061]: W0130 13:51:51.829389 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.829782 kubelet[3061]: E0130 13:51:51.829729 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:51.830176 kubelet[3061]: E0130 13:51:51.830143 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.830176 kubelet[3061]: W0130 13:51:51.830172 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.830415 kubelet[3061]: E0130 13:51:51.830218 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.830663 kubelet[3061]: E0130 13:51:51.830634 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.830663 kubelet[3061]: W0130 13:51:51.830660 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.830894 kubelet[3061]: E0130 13:51:51.830695 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.831128 kubelet[3061]: E0130 13:51:51.831077 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.831128 kubelet[3061]: W0130 13:51:51.831108 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.831375 kubelet[3061]: E0130 13:51:51.831154 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.831725 kubelet[3061]: E0130 13:51:51.831695 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.831725 kubelet[3061]: W0130 13:51:51.831721 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.831932 kubelet[3061]: E0130 13:51:51.831752 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.832068 kubelet[3061]: E0130 13:51:51.832051 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.832068 kubelet[3061]: W0130 13:51:51.832065 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.832199 kubelet[3061]: E0130 13:51:51.832084 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:51.833121 kubelet[3061]: E0130 13:51:51.833090 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.833121 kubelet[3061]: W0130 13:51:51.833114 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.833246 kubelet[3061]: E0130 13:51:51.833140 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.833421 kubelet[3061]: E0130 13:51:51.833405 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.833476 kubelet[3061]: W0130 13:51:51.833420 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.833476 kubelet[3061]: E0130 13:51:51.833435 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.838112 kubelet[3061]: E0130 13:51:51.838075 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.838112 kubelet[3061]: W0130 13:51:51.838088 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.838112 kubelet[3061]: E0130 13:51:51.838100 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.854610 kubelet[3061]: E0130 13:51:51.854585 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.854610 kubelet[3061]: W0130 13:51:51.854604 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.854779 kubelet[3061]: E0130 13:51:51.854620 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.929266 kubelet[3061]: E0130 13:51:51.929179 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.929266 kubelet[3061]: W0130 13:51:51.929191 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.929266 kubelet[3061]: E0130 13:51:51.929203 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:51.929383 kubelet[3061]: E0130 13:51:51.929356 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.929383 kubelet[3061]: W0130 13:51:51.929363 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.929383 kubelet[3061]: E0130 13:51:51.929373 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.929561 kubelet[3061]: E0130 13:51:51.929522 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.929561 kubelet[3061]: W0130 13:51:51.929529 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.929561 kubelet[3061]: E0130 13:51:51.929539 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.929766 kubelet[3061]: E0130 13:51:51.929730 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.929766 kubelet[3061]: W0130 13:51:51.929737 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.929766 kubelet[3061]: E0130 13:51:51.929746 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.929913 kubelet[3061]: E0130 13:51:51.929871 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.929913 kubelet[3061]: W0130 13:51:51.929878 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.929913 kubelet[3061]: E0130 13:51:51.929888 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.930033 kubelet[3061]: E0130 13:51:51.929995 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.930033 kubelet[3061]: W0130 13:51:51.930002 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.930033 kubelet[3061]: E0130 13:51:51.930011 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:51.930143 kubelet[3061]: E0130 13:51:51.930111 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.930143 kubelet[3061]: W0130 13:51:51.930117 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.930143 kubelet[3061]: E0130 13:51:51.930125 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.930249 kubelet[3061]: E0130 13:51:51.930243 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.930272 kubelet[3061]: W0130 13:51:51.930250 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.930272 kubelet[3061]: E0130 13:51:51.930259 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.930365 kubelet[3061]: E0130 13:51:51.930359 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.930388 kubelet[3061]: W0130 13:51:51.930366 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.930388 kubelet[3061]: E0130 13:51:51.930375 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.930478 kubelet[3061]: E0130 13:51:51.930473 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.930501 kubelet[3061]: W0130 13:51:51.930479 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.930501 kubelet[3061]: E0130 13:51:51.930488 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.930589 kubelet[3061]: E0130 13:51:51.930583 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.930612 kubelet[3061]: W0130 13:51:51.930590 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.930612 kubelet[3061]: E0130 13:51:51.930598 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:51.930757 kubelet[3061]: E0130 13:51:51.930751 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.930780 kubelet[3061]: W0130 13:51:51.930758 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.930780 kubelet[3061]: E0130 13:51:51.930767 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.930867 kubelet[3061]: E0130 13:51:51.930861 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.930891 kubelet[3061]: W0130 13:51:51.930868 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.930891 kubelet[3061]: E0130 13:51:51.930876 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.931005 kubelet[3061]: E0130 13:51:51.930999 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931035 kubelet[3061]: W0130 13:51:51.931006 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931035 kubelet[3061]: E0130 13:51:51.931023 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.931102 kubelet[3061]: E0130 13:51:51.931096 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931128 kubelet[3061]: W0130 13:51:51.931102 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931128 kubelet[3061]: E0130 13:51:51.931116 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.931200 kubelet[3061]: E0130 13:51:51.931194 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931222 kubelet[3061]: W0130 13:51:51.931201 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931243 kubelet[3061]: E0130 13:51:51.931217 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:51.931295 kubelet[3061]: E0130 13:51:51.931289 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931316 kubelet[3061]: W0130 13:51:51.931295 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931316 kubelet[3061]: E0130 13:51:51.931304 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.931404 kubelet[3061]: E0130 13:51:51.931398 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931424 kubelet[3061]: W0130 13:51:51.931405 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931424 kubelet[3061]: E0130 13:51:51.931414 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.931550 kubelet[3061]: E0130 13:51:51.931544 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931571 kubelet[3061]: W0130 13:51:51.931551 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931571 kubelet[3061]: E0130 13:51:51.931560 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.931656 kubelet[3061]: E0130 13:51:51.931651 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931677 kubelet[3061]: W0130 13:51:51.931656 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931677 kubelet[3061]: E0130 13:51:51.931662 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.931739 kubelet[3061]: E0130 13:51:51.931734 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931757 kubelet[3061]: W0130 13:51:51.931739 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931757 kubelet[3061]: E0130 13:51:51.931744 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:51.931926 kubelet[3061]: E0130 13:51:51.931916 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.931951 kubelet[3061]: W0130 13:51:51.931926 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.931951 kubelet[3061]: E0130 13:51:51.931936 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.932035 kubelet[3061]: E0130 13:51:51.932029 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.932054 kubelet[3061]: W0130 13:51:51.932035 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.932054 kubelet[3061]: E0130 13:51:51.932042 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.932157 kubelet[3061]: E0130 13:51:51.932152 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.932179 kubelet[3061]: W0130 13:51:51.932157 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.932179 kubelet[3061]: E0130 13:51:51.932164 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.932343 kubelet[3061]: E0130 13:51:51.932334 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.932343 kubelet[3061]: W0130 13:51:51.932342 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.932394 kubelet[3061]: E0130 13:51:51.932349 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:51.937107 kubelet[3061]: E0130 13:51:51.937090 3061 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:51.937107 kubelet[3061]: W0130 13:51:51.937102 3061 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:51.937190 kubelet[3061]: E0130 13:51:51.937114 3061 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:51.973961 containerd[1799]: time="2025-01-30T13:51:51.973888018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68b9b9555c-qs9t9,Uid:b8202b04-7538-4f47-8af4-0f67d1afb744,Namespace:calico-system,Attempt:0,}" Jan 30 13:51:51.985135 containerd[1799]: time="2025-01-30T13:51:51.984921270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:51.985135 containerd[1799]: time="2025-01-30T13:51:51.985123456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:51.985135 containerd[1799]: time="2025-01-30T13:51:51.985131248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:51.985248 containerd[1799]: time="2025-01-30T13:51:51.985174755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:51.995628 containerd[1799]: time="2025-01-30T13:51:51.995572129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gn94v,Uid:93a521a3-ad1c-4f57-b494-9f6d4f2f046e,Namespace:calico-system,Attempt:0,}" Jan 30 13:51:51.999469 systemd[1]: Started cri-containerd-2ee08cdae114e04314deb7ff32cd924a46d08ab8a64ea6b2aa893b332c9b940d.scope - libcontainer container 2ee08cdae114e04314deb7ff32cd924a46d08ab8a64ea6b2aa893b332c9b940d. Jan 30 13:51:52.004879 containerd[1799]: time="2025-01-30T13:51:52.004806361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:52.004879 containerd[1799]: time="2025-01-30T13:51:52.004840279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:52.004879 containerd[1799]: time="2025-01-30T13:51:52.004847453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:52.004977 containerd[1799]: time="2025-01-30T13:51:52.004885521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:52.010570 systemd[1]: Started cri-containerd-2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced.scope - libcontainer container 2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced. 
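The repeated kubelet errors above all describe one condition: the dynamic FlexVolume probe finds the plugin directory nodeagent~uds, runs its driver with the argument "init", gets no output because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, and then fails to unmarshal the empty string as JSON ("unexpected end of JSON input"). A minimal Python sketch of that failure mode, plus the kind of success payload a FlexVolume driver would normally print for "init" (the payload fields are an assumption for illustration, not taken from this log):

    import json
    import subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    # What the kubelet effectively does: run "<driver> init" and parse stdout as JSON.
    try:
        out = subprocess.run([DRIVER, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # missing executable -> empty output, as in the log above

    try:
        json.loads(out)
    except json.JSONDecodeError as e:
        # json.loads("") fails with "Expecting value ..."; Go's encoding/json reports
        # the same situation as "unexpected end of JSON input".
        print("unmarshal failed:", e)

    # Illustrative "init" success payload a FlexVolume driver would print
    # (assumed shape, not something present in this log).
    print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))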
Jan 30 13:51:52.021222 containerd[1799]: time="2025-01-30T13:51:52.021199552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gn94v,Uid:93a521a3-ad1c-4f57-b494-9f6d4f2f046e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced\"" Jan 30 13:51:52.021960 containerd[1799]: time="2025-01-30T13:51:52.021943949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:51:52.022271 containerd[1799]: time="2025-01-30T13:51:52.022257638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68b9b9555c-qs9t9,Uid:b8202b04-7538-4f47-8af4-0f67d1afb744,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ee08cdae114e04314deb7ff32cd924a46d08ab8a64ea6b2aa893b332c9b940d\"" Jan 30 13:51:53.341341 kubelet[3061]: E0130 13:51:53.341226 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:51:53.388335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057536658.mount: Deactivated successfully. Jan 30 13:51:53.444437 containerd[1799]: time="2025-01-30T13:51:53.444382746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.444645 containerd[1799]: time="2025-01-30T13:51:53.444594712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:51:53.444893 containerd[1799]: time="2025-01-30T13:51:53.444854799Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.445943 containerd[1799]: time="2025-01-30T13:51:53.445900729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:53.446373 containerd[1799]: time="2025-01-30T13:51:53.446328055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.424364364s" Jan 30 13:51:53.446373 containerd[1799]: time="2025-01-30T13:51:53.446342105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:51:53.446825 containerd[1799]: time="2025-01-30T13:51:53.446814575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:51:53.447380 containerd[1799]: time="2025-01-30T13:51:53.447311506Z" level=info msg="CreateContainer within sandbox \"2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:51:53.451960 containerd[1799]: time="2025-01-30T13:51:53.451917813Z" level=info 
msg="CreateContainer within sandbox \"2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6\"" Jan 30 13:51:53.452200 containerd[1799]: time="2025-01-30T13:51:53.452162333Z" level=info msg="StartContainer for \"0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6\"" Jan 30 13:51:53.481442 systemd[1]: Started cri-containerd-0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6.scope - libcontainer container 0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6. Jan 30 13:51:53.495728 containerd[1799]: time="2025-01-30T13:51:53.495705773Z" level=info msg="StartContainer for \"0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6\" returns successfully" Jan 30 13:51:53.501834 systemd[1]: cri-containerd-0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6.scope: Deactivated successfully. Jan 30 13:51:53.739324 containerd[1799]: time="2025-01-30T13:51:53.739290554Z" level=info msg="shim disconnected" id=0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6 namespace=k8s.io Jan 30 13:51:53.739324 containerd[1799]: time="2025-01-30T13:51:53.739323351Z" level=warning msg="cleaning up after shim disconnected" id=0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6 namespace=k8s.io Jan 30 13:51:53.739537 containerd[1799]: time="2025-01-30T13:51:53.739329469Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:51:53.833587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e9fccfc629b2ad81e627c89866fa4fc32e57091975a19c3eda6a585e684efb6-rootfs.mount: Deactivated successfully. Jan 30 13:51:54.053188 systemd-timesyncd[1714]: Contacted time server [2604:a880:400:d0::83:2002]:123 (2.flatcar.pool.ntp.org). Jan 30 13:51:54.053364 systemd-timesyncd[1714]: Initial clock synchronization to Thu 2025-01-30 13:51:53.808438 UTC. 
Jan 30 13:51:54.945500 containerd[1799]: time="2025-01-30T13:51:54.945471310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:54.945791 containerd[1799]: time="2025-01-30T13:51:54.945711504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 13:51:54.945988 containerd[1799]: time="2025-01-30T13:51:54.945973874Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:54.947392 containerd[1799]: time="2025-01-30T13:51:54.947327259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:54.947655 containerd[1799]: time="2025-01-30T13:51:54.947640137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.50081007s" Jan 30 13:51:54.947699 containerd[1799]: time="2025-01-30T13:51:54.947658324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:51:54.948173 containerd[1799]: time="2025-01-30T13:51:54.948153695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:51:54.951169 containerd[1799]: time="2025-01-30T13:51:54.951153674Z" level=info msg="CreateContainer within sandbox \"2ee08cdae114e04314deb7ff32cd924a46d08ab8a64ea6b2aa893b332c9b940d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:51:54.957300 containerd[1799]: time="2025-01-30T13:51:54.957255696Z" level=info msg="CreateContainer within sandbox \"2ee08cdae114e04314deb7ff32cd924a46d08ab8a64ea6b2aa893b332c9b940d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"86ac572fb098e7115ac01d0de639ae15113a6edcb40a1f3e235c247ca4f7ce71\"" Jan 30 13:51:54.957544 containerd[1799]: time="2025-01-30T13:51:54.957489145Z" level=info msg="StartContainer for \"86ac572fb098e7115ac01d0de639ae15113a6edcb40a1f3e235c247ca4f7ce71\"" Jan 30 13:51:54.977587 systemd[1]: Started cri-containerd-86ac572fb098e7115ac01d0de639ae15113a6edcb40a1f3e235c247ca4f7ce71.scope - libcontainer container 86ac572fb098e7115ac01d0de639ae15113a6edcb40a1f3e235c247ca4f7ce71. 
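The "Pulled image ... in 1.50081007s" record for calico/typha reports a duration measured inside containerd; subtracting the timestamps of the PullImage request above (13:51:53.446814575) and this record (13:51:54.947640137) gives nearly the same figure, with a few tens of microseconds of skew because the duration is computed before the record is written. A quick sanity check of that reading (Python datetime keeps only microsecond precision, so the values below are rounded):

    from datetime import datetime, timezone

    # Log timestamps bracketing the calico/typha pull, rounded to microseconds.
    start = datetime(2025, 1, 30, 13, 51, 53, 446815, tzinfo=timezone.utc)  # PullImage request
    done  = datetime(2025, 1, 30, 13, 51, 54, 947640, tzinfo=timezone.utc)  # "Pulled image ..." record

    elapsed = (done - start).total_seconds()
    print(f"wall-clock between the two log records: {elapsed:.6f}s")  # ~1.500825s
    print("duration reported by containerd:        1.50081007s")      # measured internally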
Jan 30 13:51:55.005547 containerd[1799]: time="2025-01-30T13:51:55.005523690Z" level=info msg="StartContainer for \"86ac572fb098e7115ac01d0de639ae15113a6edcb40a1f3e235c247ca4f7ce71\" returns successfully" Jan 30 13:51:55.341204 kubelet[3061]: E0130 13:51:55.341145 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:51:55.401636 kubelet[3061]: I0130 13:51:55.401568 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68b9b9555c-qs9t9" podStartSLOduration=1.476175511 podStartE2EDuration="4.401553544s" podCreationTimestamp="2025-01-30 13:51:51 +0000 UTC" firstStartedPulling="2025-01-30 13:51:52.022710861 +0000 UTC m=+12.722914768" lastFinishedPulling="2025-01-30 13:51:54.948088894 +0000 UTC m=+15.648292801" observedRunningTime="2025-01-30 13:51:55.40139553 +0000 UTC m=+16.101599437" watchObservedRunningTime="2025-01-30 13:51:55.401553544 +0000 UTC m=+16.101757449" Jan 30 13:51:56.388220 kubelet[3061]: I0130 13:51:56.388201 3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:51:57.341235 kubelet[3061]: E0130 13:51:57.341212 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:51:57.384590 containerd[1799]: time="2025-01-30T13:51:57.384538582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.384798 containerd[1799]: time="2025-01-30T13:51:57.384763808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:51:57.385139 containerd[1799]: time="2025-01-30T13:51:57.385104733Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.386456 containerd[1799]: time="2025-01-30T13:51:57.386408527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.386733 containerd[1799]: time="2025-01-30T13:51:57.386695768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 2.438526948s" Jan 30 13:51:57.386733 containerd[1799]: time="2025-01-30T13:51:57.386708411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:51:57.387673 containerd[1799]: time="2025-01-30T13:51:57.387660543Z" level=info msg="CreateContainer within sandbox \"2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:51:57.393758 containerd[1799]: time="2025-01-30T13:51:57.393716407Z" level=info msg="CreateContainer within sandbox \"2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a\"" Jan 30 13:51:57.393930 containerd[1799]: time="2025-01-30T13:51:57.393887915Z" level=info msg="StartContainer for \"6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a\"" Jan 30 13:51:57.420520 systemd[1]: Started cri-containerd-6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a.scope - libcontainer container 6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a. Jan 30 13:51:57.435313 containerd[1799]: time="2025-01-30T13:51:57.435286731Z" level=info msg="StartContainer for \"6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a\" returns successfully" Jan 30 13:51:57.919776 systemd[1]: cri-containerd-6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a.scope: Deactivated successfully. Jan 30 13:51:57.930987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a-rootfs.mount: Deactivated successfully. Jan 30 13:51:57.940086 kubelet[3061]: I0130 13:51:57.940046 3061 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:51:57.961720 systemd[1]: Created slice kubepods-burstable-pod7c64afcb_0671_44d3_8136_9ee0bad3d72c.slice - libcontainer container kubepods-burstable-pod7c64afcb_0671_44d3_8136_9ee0bad3d72c.slice. Jan 30 13:51:57.966539 systemd[1]: Created slice kubepods-burstable-pod04991ed4_bcd5_4f9b_b027_ba79cc5149a0.slice - libcontainer container kubepods-burstable-pod04991ed4_bcd5_4f9b_b027_ba79cc5149a0.slice. Jan 30 13:51:57.972039 systemd[1]: Created slice kubepods-besteffort-pod3a1bfeac_92e9_4eac_a174_cabc6e4921c6.slice - libcontainer container kubepods-besteffort-pod3a1bfeac_92e9_4eac_a174_cabc6e4921c6.slice. Jan 30 13:51:57.977262 systemd[1]: Created slice kubepods-besteffort-pode79b48d4_f379_4135_a6bf_0a0ccaeb5c67.slice - libcontainer container kubepods-besteffort-pode79b48d4_f379_4135_a6bf_0a0ccaeb5c67.slice. Jan 30 13:51:57.982143 systemd[1]: Created slice kubepods-besteffort-pod520ef51f_94d3_44ca_8df4_36fb6501930e.slice - libcontainer container kubepods-besteffort-pod520ef51f_94d3_44ca_8df4_36fb6501930e.slice. 
Jan 30 13:51:58.067982 kubelet[3061]: I0130 13:51:58.067862 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a1bfeac-92e9-4eac-a174-cabc6e4921c6-tigera-ca-bundle\") pod \"calico-kube-controllers-5984859c66-hc7cz\" (UID: \"3a1bfeac-92e9-4eac-a174-cabc6e4921c6\") " pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:51:58.068307 kubelet[3061]: I0130 13:51:58.068000 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/520ef51f-94d3-44ca-8df4-36fb6501930e-calico-apiserver-certs\") pod \"calico-apiserver-68c748b76b-s2glf\" (UID: \"520ef51f-94d3-44ca-8df4-36fb6501930e\") " pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:51:58.068307 kubelet[3061]: I0130 13:51:58.068122 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e79b48d4-f379-4135-a6bf-0a0ccaeb5c67-calico-apiserver-certs\") pod \"calico-apiserver-68c748b76b-mcv7b\" (UID: \"e79b48d4-f379-4135-a6bf-0a0ccaeb5c67\") " pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:51:58.068307 kubelet[3061]: I0130 13:51:58.068220 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c64afcb-0671-44d3-8136-9ee0bad3d72c-config-volume\") pod \"coredns-6f6b679f8f-7vhrq\" (UID: \"7c64afcb-0671-44d3-8136-9ee0bad3d72c\") " pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:51:58.068307 kubelet[3061]: I0130 13:51:58.068296 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfr2p\" (UniqueName: \"kubernetes.io/projected/7c64afcb-0671-44d3-8136-9ee0bad3d72c-kube-api-access-lfr2p\") pod \"coredns-6f6b679f8f-7vhrq\" (UID: \"7c64afcb-0671-44d3-8136-9ee0bad3d72c\") " pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:51:58.068806 kubelet[3061]: I0130 13:51:58.068393 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfmm\" (UniqueName: \"kubernetes.io/projected/3a1bfeac-92e9-4eac-a174-cabc6e4921c6-kube-api-access-zwfmm\") pod \"calico-kube-controllers-5984859c66-hc7cz\" (UID: \"3a1bfeac-92e9-4eac-a174-cabc6e4921c6\") " pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:51:58.068806 kubelet[3061]: I0130 13:51:58.068490 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmvp\" (UniqueName: \"kubernetes.io/projected/04991ed4-bcd5-4f9b-b027-ba79cc5149a0-kube-api-access-vsmvp\") pod \"coredns-6f6b679f8f-c4k8v\" (UID: \"04991ed4-bcd5-4f9b-b027-ba79cc5149a0\") " pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:51:58.068806 kubelet[3061]: I0130 13:51:58.068557 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6jz\" (UniqueName: \"kubernetes.io/projected/520ef51f-94d3-44ca-8df4-36fb6501930e-kube-api-access-bq6jz\") pod \"calico-apiserver-68c748b76b-s2glf\" (UID: \"520ef51f-94d3-44ca-8df4-36fb6501930e\") " pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:51:58.068806 kubelet[3061]: I0130 13:51:58.068618 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04991ed4-bcd5-4f9b-b027-ba79cc5149a0-config-volume\") pod \"coredns-6f6b679f8f-c4k8v\" (UID: \"04991ed4-bcd5-4f9b-b027-ba79cc5149a0\") " pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:51:58.068806 kubelet[3061]: I0130 13:51:58.068666 3061 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhjr2\" (UniqueName: \"kubernetes.io/projected/e79b48d4-f379-4135-a6bf-0a0ccaeb5c67-kube-api-access-rhjr2\") pod \"calico-apiserver-68c748b76b-mcv7b\" (UID: \"e79b48d4-f379-4135-a6bf-0a0ccaeb5c67\") " pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:51:58.265873 containerd[1799]: time="2025-01-30T13:51:58.265757064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:58.270100 containerd[1799]: time="2025-01-30T13:51:58.269969694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:58.275441 containerd[1799]: time="2025-01-30T13:51:58.275282716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:0,}" Jan 30 13:51:58.280668 containerd[1799]: time="2025-01-30T13:51:58.280545799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:51:58.285940 containerd[1799]: time="2025-01-30T13:51:58.285833635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:51:58.626107 containerd[1799]: time="2025-01-30T13:51:58.625912137Z" level=info msg="shim disconnected" id=6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a namespace=k8s.io Jan 30 13:51:58.626107 containerd[1799]: time="2025-01-30T13:51:58.626014430Z" level=warning msg="cleaning up after shim disconnected" id=6b1c0e9598cbc874b88d17147342a0a37214ef02484ce50207f04ab56ffc427a namespace=k8s.io Jan 30 13:51:58.626107 containerd[1799]: time="2025-01-30T13:51:58.626025887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:51:58.658368 containerd[1799]: time="2025-01-30T13:51:58.658295714Z" level=error msg="Failed to destroy network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658530 containerd[1799]: time="2025-01-30T13:51:58.658373858Z" level=error msg="Failed to destroy network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658530 containerd[1799]: time="2025-01-30T13:51:58.658378684Z" level=error msg="Failed to destroy network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658603 containerd[1799]: time="2025-01-30T13:51:58.658314537Z" level=error msg="Failed to destroy network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658633 containerd[1799]: time="2025-01-30T13:51:58.658609102Z" level=error msg="encountered an error cleaning up failed sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658664 containerd[1799]: time="2025-01-30T13:51:58.658626680Z" level=error msg="encountered an error cleaning up failed sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658664 containerd[1799]: time="2025-01-30T13:51:58.658650425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658750 containerd[1799]: time="2025-01-30T13:51:58.658663292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658750 containerd[1799]: time="2025-01-30T13:51:58.658679741Z" level=error msg="encountered an error cleaning up failed sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658750 containerd[1799]: time="2025-01-30T13:51:58.658702259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658750 containerd[1799]: 
time="2025-01-30T13:51:58.658705241Z" level=error msg="encountered an error cleaning up failed sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658876 containerd[1799]: time="2025-01-30T13:51:58.658748636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658876 containerd[1799]: time="2025-01-30T13:51:58.658851104Z" level=error msg="Failed to destroy network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658958 kubelet[3061]: E0130 13:51:58.658870 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.658958 kubelet[3061]: E0130 13:51:58.658926 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:51:58.658958 kubelet[3061]: E0130 13:51:58.658938 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:51:58.658958 kubelet[3061]: E0130 13:51:58.658864 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.659049 kubelet[3061]: E0130 13:51:58.658864 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.659049 kubelet[3061]: E0130 13:51:58.658864 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.659049 kubelet[3061]: E0130 13:51:58.658968 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:51:58.659049 kubelet[3061]: E0130 13:51:58.658973 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:51:58.659119 containerd[1799]: time="2025-01-30T13:51:58.658989836Z" level=error msg="encountered an error cleaning up failed sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.659119 containerd[1799]: time="2025-01-30T13:51:58.659016935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.659155 kubelet[3061]: E0130 13:51:58.658984 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:51:58.659155 kubelet[3061]: E0130 13:51:58.658984 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:51:58.659155 kubelet[3061]: E0130 13:51:58.659004 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-c4k8v" podUID="04991ed4-bcd5-4f9b-b027-ba79cc5149a0" Jan 30 13:51:58.659223 kubelet[3061]: E0130 13:51:58.658970 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:51:58.659223 kubelet[3061]: E0130 13:51:58.659004 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" podUID="e79b48d4-f379-4135-a6bf-0a0ccaeb5c67" Jan 30 13:51:58.659275 kubelet[3061]: E0130 13:51:58.658967 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" podUID="520ef51f-94d3-44ca-8df4-36fb6501930e" Jan 30 13:51:58.659275 kubelet[3061]: E0130 13:51:58.659018 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 
30 13:51:58.659337 kubelet[3061]: E0130 13:51:58.659036 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" podUID="3a1bfeac-92e9-4eac-a174-cabc6e4921c6" Jan 30 13:51:58.659337 kubelet[3061]: E0130 13:51:58.659077 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.659337 kubelet[3061]: E0130 13:51:58.659100 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:51:58.659420 kubelet[3061]: E0130 13:51:58.659110 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:51:58.659420 kubelet[3061]: E0130 13:51:58.659128 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7vhrq" podUID="7c64afcb-0671-44d3-8136-9ee0bad3d72c" Jan 30 13:51:59.343917 systemd[1]: Created slice kubepods-besteffort-pod08ec3d9c_69d5_48e2_969e_46a8611fadde.slice - libcontainer container kubepods-besteffort-pod08ec3d9c_69d5_48e2_969e_46a8611fadde.slice. 
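Every sandbox failure in the block above has the same root cause, spelled out in the error text: the Calico CNI plugin stats /var/lib/calico/nodename and the file is not there yet, because the calico/node container the message points at is still coming up (its install-cni container only just ran). Until that file appears, RunPodSandbox fails for every pod that needs pod networking here, both coredns pods, calico-kube-controllers-5984859c66-hc7cz, both calico-apiserver-68c748b76b pods, and the csi-node-driver-gpjs7 pod below, and the kubelet keeps retrying. A minimal sketch of the check the error message describes (the function is illustrative; only the path and the error wording come from the log):

    import os

    NODENAME_FILE = "/var/lib/calico/nodename"

    def read_calico_nodename(path: str = NODENAME_FILE) -> str:
        # Mirrors the failing stat in the log: the CNI plugin cannot learn the node
        # name until calico/node is running and has mounted /var/lib/calico/.
        try:
            with open(path) as f:
                return f.read().strip()
        except FileNotFoundError:
            raise RuntimeError(
                f"stat {path}: no such file or directory: "
                "check that the calico/node container is running and has mounted /var/lib/calico/"
            )

    if __name__ == "__main__":
        try:
            print("node name:", read_calico_nodename())
        except RuntimeError as err:
            print("CNI would fail with:", err)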
Jan 30 13:51:59.345010 containerd[1799]: time="2025-01-30T13:51:59.344957869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:0,}" Jan 30 13:51:59.372595 containerd[1799]: time="2025-01-30T13:51:59.372568919Z" level=error msg="Failed to destroy network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.372760 containerd[1799]: time="2025-01-30T13:51:59.372746010Z" level=error msg="encountered an error cleaning up failed sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.372793 containerd[1799]: time="2025-01-30T13:51:59.372784174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.372931 kubelet[3061]: E0130 13:51:59.372910 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.373112 kubelet[3061]: E0130 13:51:59.372950 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:59.373112 kubelet[3061]: E0130 13:51:59.372963 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:59.373112 kubelet[3061]: E0130 13:51:59.372997 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:51:59.392686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f-shm.mount: Deactivated successfully. Jan 30 13:51:59.392767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93-shm.mount: Deactivated successfully. Jan 30 13:51:59.392831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b-shm.mount: Deactivated successfully. Jan 30 13:51:59.392887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f-shm.mount: Deactivated successfully. Jan 30 13:51:59.392942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe-shm.mount: Deactivated successfully. Jan 30 13:51:59.393876 kubelet[3061]: I0130 13:51:59.393862 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f" Jan 30 13:51:59.394313 containerd[1799]: time="2025-01-30T13:51:59.394292304Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:51:59.394450 kubelet[3061]: I0130 13:51:59.394440 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f" Jan 30 13:51:59.394547 containerd[1799]: time="2025-01-30T13:51:59.394525930Z" level=info msg="Ensure that sandbox 49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f in task-service has been cleanup successfully" Jan 30 13:51:59.394712 containerd[1799]: time="2025-01-30T13:51:59.394694331Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:51:59.394745 containerd[1799]: time="2025-01-30T13:51:59.394713382Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:51:59.394794 containerd[1799]: time="2025-01-30T13:51:59.394714974Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:51:59.394988 containerd[1799]: time="2025-01-30T13:51:59.394969992Z" level=info msg="Ensure that sandbox 1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f in task-service has been cleanup successfully" Jan 30 13:51:59.395058 containerd[1799]: time="2025-01-30T13:51:59.395024685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:1,}" Jan 30 13:51:59.395106 kubelet[3061]: I0130 13:51:59.395059 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93" Jan 30 13:51:59.395152 containerd[1799]: time="2025-01-30T13:51:59.395132951Z" level=info msg="TearDown network for sandbox 
\"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 30 13:51:59.395194 containerd[1799]: time="2025-01-30T13:51:59.395151714Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:51:59.395411 containerd[1799]: time="2025-01-30T13:51:59.395389593Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:51:59.395476 containerd[1799]: time="2025-01-30T13:51:59.395437284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:51:59.395599 containerd[1799]: time="2025-01-30T13:51:59.395564123Z" level=info msg="Ensure that sandbox 1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93 in task-service has been cleanup successfully" Jan 30 13:51:59.395720 containerd[1799]: time="2025-01-30T13:51:59.395693123Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:51:59.395720 containerd[1799]: time="2025-01-30T13:51:59.395718716Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:51:59.395798 kubelet[3061]: I0130 13:51:59.395790 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b" Jan 30 13:51:59.395905 containerd[1799]: time="2025-01-30T13:51:59.395893638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:51:59.396017 containerd[1799]: time="2025-01-30T13:51:59.396006411Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:51:59.396115 containerd[1799]: time="2025-01-30T13:51:59.396104854Z" level=info msg="Ensure that sandbox 319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b in task-service has been cleanup successfully" Jan 30 13:51:59.396200 kubelet[3061]: I0130 13:51:59.396190 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe" Jan 30 13:51:59.396238 containerd[1799]: time="2025-01-30T13:51:59.396190966Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:51:59.396238 containerd[1799]: time="2025-01-30T13:51:59.396199388Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:51:59.396284 systemd[1]: run-netns-cni\x2d6d45e6ca\x2d5066\x2d1483\x2d5640\x2def784ce72d3f.mount: Deactivated successfully. Jan 30 13:51:59.396342 systemd[1]: run-netns-cni\x2d5d7963a7\x2d8b09\x2d8d30\x2d9529\x2d0556be7607dd.mount: Deactivated successfully. 
Jan 30 13:51:59.396390 containerd[1799]: time="2025-01-30T13:51:59.396344795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:1,}" Jan 30 13:51:59.396444 containerd[1799]: time="2025-01-30T13:51:59.396434216Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:51:59.396540 containerd[1799]: time="2025-01-30T13:51:59.396530862Z" level=info msg="Ensure that sandbox df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe in task-service has been cleanup successfully" Jan 30 13:51:59.396651 containerd[1799]: time="2025-01-30T13:51:59.396638312Z" level=info msg="TearDown network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:51:59.396651 containerd[1799]: time="2025-01-30T13:51:59.396650352Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:51:59.396848 containerd[1799]: time="2025-01-30T13:51:59.396838154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:1,}" Jan 30 13:51:59.397340 kubelet[3061]: I0130 13:51:59.397329 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba" Jan 30 13:51:59.397448 containerd[1799]: time="2025-01-30T13:51:59.397437581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:51:59.397549 containerd[1799]: time="2025-01-30T13:51:59.397535619Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:51:59.397679 containerd[1799]: time="2025-01-30T13:51:59.397667260Z" level=info msg="Ensure that sandbox 19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba in task-service has been cleanup successfully" Jan 30 13:51:59.397775 containerd[1799]: time="2025-01-30T13:51:59.397763209Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:51:59.397794 containerd[1799]: time="2025-01-30T13:51:59.397775601Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:51:59.397979 containerd[1799]: time="2025-01-30T13:51:59.397969365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:1,}" Jan 30 13:51:59.398208 systemd[1]: run-netns-cni\x2d0af0a9e5\x2d79e9\x2dddd9\x2d9742\x2dc9a1d743da99.mount: Deactivated successfully. Jan 30 13:51:59.398269 systemd[1]: run-netns-cni\x2de250b5a7\x2d19c8\x2d8095\x2d15a2\x2d8eb6e7e0cf6b.mount: Deactivated successfully. Jan 30 13:51:59.398325 systemd[1]: run-netns-cni\x2d97133a27\x2dc11e\x2d9bf0\x2d6d8e\x2d73b47fcce22e.mount: Deactivated successfully. Jan 30 13:51:59.400294 systemd[1]: run-netns-cni\x2d8547bb4b\x2d4ac4\x2d2c06\x2d961b\x2d9f6d0926f9a4.mount: Deactivated successfully. 
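[Editor's note] Every sandbox create and delete attempt in the entries above fails for the same reason: the Calico CNI plugin cannot find /var/lib/calico/nodename, a host file that the calico/node container is expected to write once it is running and has /var/lib/calico/ mounted. The Go sketch below only illustrates that kind of check; it is not Calico's source, and the path and hint text are simply taken from the log messages.

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the host path named in the log entries; calico/node
// normally creates it when it starts with /var/lib/calico/ mounted.
const nodenameFile = "/var/lib/calico/nodename"

// loadNodename mimics the failing check: stat the file first, and if it is
// missing, return an error carrying the same operator hint seen in the log.
func loadNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := loadNodename()
	if err != nil {
		// On this node the file is absent, so this is the branch taken.
		fmt.Println("CNI add/delete would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}

On a healthy node the file typically just contains the Kubernetes node name; its absence here is why every Attempt below ends in CreatePodSandboxError.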
Jan 30 13:51:59.452615 containerd[1799]: time="2025-01-30T13:51:59.452532391Z" level=error msg="Failed to destroy network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.452615 containerd[1799]: time="2025-01-30T13:51:59.452558712Z" level=error msg="Failed to destroy network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.452777 containerd[1799]: time="2025-01-30T13:51:59.452618154Z" level=error msg="Failed to destroy network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.452908 containerd[1799]: time="2025-01-30T13:51:59.452873255Z" level=error msg="encountered an error cleaning up failed sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.452932 containerd[1799]: time="2025-01-30T13:51:59.452905563Z" level=error msg="encountered an error cleaning up failed sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.452961 containerd[1799]: time="2025-01-30T13:51:59.452946284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.452997 containerd[1799]: time="2025-01-30T13:51:59.452946326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453051 containerd[1799]: time="2025-01-30T13:51:59.452913812Z" level=error msg="encountered an error cleaning up failed sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453051 containerd[1799]: time="2025-01-30T13:51:59.453036104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453097 containerd[1799]: time="2025-01-30T13:51:59.452984633Z" level=error msg="Failed to destroy network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453151 kubelet[3061]: E0130 13:51:59.453121 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453216 kubelet[3061]: E0130 13:51:59.453121 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453244 containerd[1799]: time="2025-01-30T13:51:59.453233197Z" level=error msg="encountered an error cleaning up failed sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453269 containerd[1799]: time="2025-01-30T13:51:59.453257285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453338 kubelet[3061]: E0130 13:51:59.453121 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453391 kubelet[3061]: E0130 13:51:59.453365 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:51:59.453444 kubelet[3061]: E0130 13:51:59.453395 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.453444 kubelet[3061]: E0130 13:51:59.453404 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:51:59.453501 kubelet[3061]: E0130 13:51:59.453365 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:51:59.453501 kubelet[3061]: E0130 13:51:59.453436 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:51:59.453501 kubelet[3061]: E0130 13:51:59.453463 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:51:59.453501 kubelet[3061]: E0130 13:51:59.453470 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:51:59.453627 kubelet[3061]: E0130 13:51:59.453483 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" podUID="520ef51f-94d3-44ca-8df4-36fb6501930e" Jan 30 13:51:59.453627 kubelet[3061]: E0130 13:51:59.453504 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" podUID="3a1bfeac-92e9-4eac-a174-cabc6e4921c6" Jan 30 13:51:59.453719 kubelet[3061]: E0130 13:51:59.453510 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" podUID="e79b48d4-f379-4135-a6bf-0a0ccaeb5c67" Jan 30 13:51:59.453719 kubelet[3061]: E0130 13:51:59.453374 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:51:59.453719 kubelet[3061]: E0130 13:51:59.453543 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:51:59.453819 kubelet[3061]: E0130 13:51:59.453579 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-c4k8v" podUID="04991ed4-bcd5-4f9b-b027-ba79cc5149a0" Jan 30 13:51:59.455415 containerd[1799]: time="2025-01-30T13:51:59.455396301Z" level=error msg="Failed to destroy network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.455570 containerd[1799]: time="2025-01-30T13:51:59.455559426Z" level=error msg="encountered an error cleaning up failed sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.455595 containerd[1799]: time="2025-01-30T13:51:59.455587295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.455686 kubelet[3061]: E0130 13:51:59.455670 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.455723 kubelet[3061]: E0130 13:51:59.455702 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:51:59.455756 kubelet[3061]: E0130 13:51:59.455720 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:51:59.455843 kubelet[3061]: E0130 13:51:59.455750 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7vhrq" podUID="7c64afcb-0671-44d3-8136-9ee0bad3d72c" Jan 30 13:51:59.456272 containerd[1799]: time="2025-01-30T13:51:59.456258463Z" level=error msg="Failed to destroy network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.456425 containerd[1799]: time="2025-01-30T13:51:59.456410523Z" level=error msg="encountered an error cleaning up failed sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.456460 containerd[1799]: time="2025-01-30T13:51:59.456435289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.456537 kubelet[3061]: E0130 13:51:59.456523 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:59.456602 kubelet[3061]: E0130 13:51:59.456543 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:59.456602 kubelet[3061]: E0130 13:51:59.456554 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:51:59.456671 kubelet[3061]: E0130 13:51:59.456604 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:52:00.396788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106-shm.mount: Deactivated successfully. Jan 30 13:52:00.399303 kubelet[3061]: I0130 13:52:00.399259 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1" Jan 30 13:52:00.399655 containerd[1799]: time="2025-01-30T13:52:00.399599200Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:00.399786 containerd[1799]: time="2025-01-30T13:52:00.399759090Z" level=info msg="Ensure that sandbox fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1 in task-service has been cleanup successfully" Jan 30 13:52:00.399806 kubelet[3061]: I0130 13:52:00.399774 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4" Jan 30 13:52:00.399899 containerd[1799]: time="2025-01-30T13:52:00.399860515Z" level=info msg="TearDown network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" successfully" Jan 30 13:52:00.399899 containerd[1799]: time="2025-01-30T13:52:00.399870621Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" returns successfully" Jan 30 13:52:00.400029 containerd[1799]: time="2025-01-30T13:52:00.399995565Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:00.400051 containerd[1799]: time="2025-01-30T13:52:00.400035173Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:52:00.400051 containerd[1799]: time="2025-01-30T13:52:00.400041416Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:52:00.400081 containerd[1799]: time="2025-01-30T13:52:00.399997230Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:00.400159 containerd[1799]: time="2025-01-30T13:52:00.400149831Z" level=info msg="Ensure that sandbox aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4 in task-service has been cleanup successfully" Jan 30 13:52:00.400231 containerd[1799]: time="2025-01-30T13:52:00.400223835Z" level=info msg="TearDown network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" successfully" Jan 30 13:52:00.400253 containerd[1799]: time="2025-01-30T13:52:00.400230680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:2,}" Jan 30 
13:52:00.400306 containerd[1799]: time="2025-01-30T13:52:00.400232232Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" returns successfully" Jan 30 13:52:00.400353 kubelet[3061]: I0130 13:52:00.400346 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247" Jan 30 13:52:00.400436 containerd[1799]: time="2025-01-30T13:52:00.400427292Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:00.400475 containerd[1799]: time="2025-01-30T13:52:00.400463615Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:52:00.400475 containerd[1799]: time="2025-01-30T13:52:00.400470260Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:52:00.400523 containerd[1799]: time="2025-01-30T13:52:00.400514991Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:00.400648 containerd[1799]: time="2025-01-30T13:52:00.400637771Z" level=info msg="Ensure that sandbox 64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247 in task-service has been cleanup successfully" Jan 30 13:52:00.400668 containerd[1799]: time="2025-01-30T13:52:00.400646695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:2,}" Jan 30 13:52:00.400719 containerd[1799]: time="2025-01-30T13:52:00.400709641Z" level=info msg="TearDown network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" successfully" Jan 30 13:52:00.400719 containerd[1799]: time="2025-01-30T13:52:00.400717858Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" returns successfully" Jan 30 13:52:00.400878 containerd[1799]: time="2025-01-30T13:52:00.400868546Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:00.400917 containerd[1799]: time="2025-01-30T13:52:00.400910311Z" level=info msg="TearDown network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:52:00.400938 containerd[1799]: time="2025-01-30T13:52:00.400917154Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:52:00.400965 kubelet[3061]: I0130 13:52:00.400913 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02" Jan 30 13:52:00.401026 systemd[1]: run-netns-cni\x2dc6b96359\x2d2287\x2d14d7\x2d40fd\x2d9fa467e7d4d7.mount: Deactivated successfully. 
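[Editor's note] The run-netns-cni\x2d... and ...-shm.mount lines are systemd mount units being cleaned up for each failed sandbox; in unit names systemd encodes '/' as '-' and a literal '-' as '\x2d'. The helper below is purely hypothetical (it is not part of containerd, kubelet, or systemd) and exists only to make those unit names readable as paths:

package main

import (
	"fmt"
	"strings"
)

// unitToPath reverses the mount-unit naming convention visible in the log:
// "\x2d" stands for a literal '-', and the remaining '-' separators stand
// for '/'. A convenience for reading the log, not a full systemd-escape
// implementation (it ignores other \xNN escapes).
func unitToPath(unit string) string {
	unit = strings.TrimSuffix(unit, ".mount")
	const marker = "\x00" // temporary stand-in so real dashes survive the '/' substitution
	unit = strings.ReplaceAll(unit, `\x2d`, marker)
	unit = strings.ReplaceAll(unit, "-", "/")
	return "/" + strings.ReplaceAll(unit, marker, "-")
}

func main() {
	// One of the unit names deactivated above.
	fmt.Println(unitToPath(`run-netns-cni\x2dc6b96359\x2d2287\x2d14d7\x2d40fd\x2d9fa467e7d4d7.mount`))
	// Prints: /run/netns/cni-c6b96359-2287-14d7-40fd-9fa467e7d4d7
}

Each such unit corresponds to the network namespace (or the sandbox's shm directory) of a pod sandbox that failed to come up and was torn down again.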
Jan 30 13:52:00.401088 containerd[1799]: time="2025-01-30T13:52:00.401076927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:2,}" Jan 30 13:52:00.401159 containerd[1799]: time="2025-01-30T13:52:00.401148480Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:00.401246 containerd[1799]: time="2025-01-30T13:52:00.401235840Z" level=info msg="Ensure that sandbox b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02 in task-service has been cleanup successfully" Jan 30 13:52:00.401333 containerd[1799]: time="2025-01-30T13:52:00.401323851Z" level=info msg="TearDown network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" successfully" Jan 30 13:52:00.401358 containerd[1799]: time="2025-01-30T13:52:00.401333925Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" returns successfully" Jan 30 13:52:00.401471 containerd[1799]: time="2025-01-30T13:52:00.401459454Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:00.401497 kubelet[3061]: I0130 13:52:00.401482 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106" Jan 30 13:52:00.401548 containerd[1799]: time="2025-01-30T13:52:00.401516794Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:52:00.401568 containerd[1799]: time="2025-01-30T13:52:00.401550074Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:52:00.401732 containerd[1799]: time="2025-01-30T13:52:00.401718962Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:00.401790 containerd[1799]: time="2025-01-30T13:52:00.401780386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:2,}" Jan 30 13:52:00.401849 containerd[1799]: time="2025-01-30T13:52:00.401839393Z" level=info msg="Ensure that sandbox 518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106 in task-service has been cleanup successfully" Jan 30 13:52:00.401936 containerd[1799]: time="2025-01-30T13:52:00.401926485Z" level=info msg="TearDown network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" successfully" Jan 30 13:52:00.401972 containerd[1799]: time="2025-01-30T13:52:00.401935462Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" returns successfully" Jan 30 13:52:00.401996 kubelet[3061]: I0130 13:52:00.401959 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e" Jan 30 13:52:00.402106 containerd[1799]: time="2025-01-30T13:52:00.402095924Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:00.402148 containerd[1799]: time="2025-01-30T13:52:00.402136476Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:00.402217 
containerd[1799]: time="2025-01-30T13:52:00.402138625Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:52:00.402217 containerd[1799]: time="2025-01-30T13:52:00.402214278Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:52:00.402278 containerd[1799]: time="2025-01-30T13:52:00.402240152Z" level=info msg="Ensure that sandbox bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e in task-service has been cleanup successfully" Jan 30 13:52:00.402348 containerd[1799]: time="2025-01-30T13:52:00.402337726Z" level=info msg="TearDown network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" successfully" Jan 30 13:52:00.402348 containerd[1799]: time="2025-01-30T13:52:00.402347209Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" returns successfully" Jan 30 13:52:00.402406 containerd[1799]: time="2025-01-30T13:52:00.402381282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:2,}" Jan 30 13:52:00.402516 containerd[1799]: time="2025-01-30T13:52:00.402506503Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:00.402553 containerd[1799]: time="2025-01-30T13:52:00.402543941Z" level=info msg="TearDown network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 30 13:52:00.402553 containerd[1799]: time="2025-01-30T13:52:00.402552105Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:52:00.402722 containerd[1799]: time="2025-01-30T13:52:00.402712257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:2,}" Jan 30 13:52:00.402836 systemd[1]: run-netns-cni\x2d2e4e7b20\x2dbfb9\x2d5b0f\x2dc401\x2d01190319b272.mount: Deactivated successfully. Jan 30 13:52:00.402887 systemd[1]: run-netns-cni\x2defbe3a0a\x2df60e\x2dd18a\x2d7b4f\x2d1d189b13326a.mount: Deactivated successfully. Jan 30 13:52:00.402934 systemd[1]: run-netns-cni\x2de42f2599\x2d8919\x2db372\x2d3d43\x2dbc8f0adbf02c.mount: Deactivated successfully. Jan 30 13:52:00.405083 systemd[1]: run-netns-cni\x2d81ef3db2\x2d4407\x2d9026\x2dd874\x2d3d64f8667d29.mount: Deactivated successfully. Jan 30 13:52:00.405128 systemd[1]: run-netns-cni\x2d3d0caccb\x2de9c1\x2d20f6\x2db095\x2db17814581129.mount: Deactivated successfully. 
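[Editor's note] Across these entries kubelet and containerd repeat the same cycle for every pending pod: the previously failed sandbox is stopped and its network torn down, then RunPodSandbox is retried with the Attempt counter incremented (0, then 1, then 2), and each retry hits the same missing-nodename error. The loop below is only a schematic of that pattern, with stand-in functions; it is not kubelet's or containerd's actual code.

package main

import (
	"errors"
	"fmt"
)

// errNoNodename stands in for the CNI failure every attempt in the log hits;
// nothing succeeds until calico/node creates /var/lib/calico/nodename.
var errNoNodename = errors.New(
	`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

// runPodSandbox is a stand-in for the CRI RunPodSandbox call.
func runPodSandbox(pod string, attempt int) error {
	fmt.Printf("RunPodSandbox %s Attempt:%d\n", pod, attempt)
	return errNoNodename // fails the same way on every attempt in this log
}

// stopPodSandbox is a stand-in for the StopPodSandbox/TearDown cleanup that
// precedes each retry in the log.
func stopPodSandbox(pod string) {
	fmt.Printf("StopPodSandbox + TearDown for previous sandbox of %s\n", pod)
}

func main() {
	pod := "calico-system/csi-node-driver-gpjs7"
	for attempt := 0; attempt <= 2; attempt++ {
		if attempt > 0 {
			stopPodSandbox(pod) // clean up the failed sandbox before retrying
		}
		if err := runPodSandbox(pod, attempt); err != nil {
			fmt.Println("CreatePodSandboxError:", err)
			continue // the pod worker backs off and retries later
		}
		return
	}
}

Presumably the cycle only ends once the calico-node image being pulled above (ghcr.io/flatcar/calico/node:v3.29.1) is running and writes the nodename file.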
Jan 30 13:52:00.449963 containerd[1799]: time="2025-01-30T13:52:00.449908513Z" level=error msg="Failed to destroy network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.450366 containerd[1799]: time="2025-01-30T13:52:00.450333037Z" level=error msg="encountered an error cleaning up failed sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.450652 containerd[1799]: time="2025-01-30T13:52:00.450508724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.450908 kubelet[3061]: E0130 13:52:00.450865 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.450982 kubelet[3061]: E0130 13:52:00.450954 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:00.451016 kubelet[3061]: E0130 13:52:00.450987 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:00.451091 kubelet[3061]: E0130 13:52:00.451052 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" podUID="e79b48d4-f379-4135-a6bf-0a0ccaeb5c67" Jan 30 13:52:00.455183 containerd[1799]: time="2025-01-30T13:52:00.455143443Z" level=error msg="Failed to destroy network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455411 containerd[1799]: time="2025-01-30T13:52:00.455397736Z" level=error msg="encountered an error cleaning up failed sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455488 containerd[1799]: time="2025-01-30T13:52:00.455444239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455531 containerd[1799]: time="2025-01-30T13:52:00.455509477Z" level=error msg="Failed to destroy network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455578 containerd[1799]: time="2025-01-30T13:52:00.455557435Z" level=error msg="Failed to destroy network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455638 kubelet[3061]: E0130 13:52:00.455593 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455667 kubelet[3061]: E0130 13:52:00.455647 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:00.455694 containerd[1799]: time="2025-01-30T13:52:00.455638433Z" level=error msg="Failed to destroy network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455720 kubelet[3061]: E0130 13:52:00.455668 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:00.455746 kubelet[3061]: E0130 13:52:00.455719 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7vhrq" podUID="7c64afcb-0671-44d3-8136-9ee0bad3d72c" Jan 30 13:52:00.455794 containerd[1799]: time="2025-01-30T13:52:00.455729272Z" level=error msg="encountered an error cleaning up failed sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455794 containerd[1799]: time="2025-01-30T13:52:00.455752164Z" level=error msg="encountered an error cleaning up failed sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455843 containerd[1799]: time="2025-01-30T13:52:00.455786739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455843 containerd[1799]: time="2025-01-30T13:52:00.455796322Z" level=error msg="encountered an error cleaning up failed sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455843 containerd[1799]: time="2025-01-30T13:52:00.455819785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455843 containerd[1799]: time="2025-01-30T13:52:00.455756856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455986 kubelet[3061]: E0130 13:52:00.455868 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455986 kubelet[3061]: E0130 13:52:00.455892 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:00.455986 kubelet[3061]: E0130 13:52:00.455890 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.455986 kubelet[3061]: E0130 13:52:00.455903 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:00.456065 kubelet[3061]: E0130 13:52:00.455921 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:00.456065 kubelet[3061]: E0130 13:52:00.455925 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.456065 kubelet[3061]: E0130 13:52:00.455938 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:00.456065 kubelet[3061]: E0130 13:52:00.455946 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:00.456130 kubelet[3061]: E0130 13:52:00.455957 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:00.456130 kubelet[3061]: E0130 13:52:00.455964 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-c4k8v" podUID="04991ed4-bcd5-4f9b-b027-ba79cc5149a0" Jan 30 13:52:00.456130 kubelet[3061]: E0130 13:52:00.455975 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" podUID="520ef51f-94d3-44ca-8df4-36fb6501930e" Jan 30 13:52:00.456204 kubelet[3061]: E0130 13:52:00.455922 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" podUID="3a1bfeac-92e9-4eac-a174-cabc6e4921c6" Jan 30 13:52:00.456292 containerd[1799]: time="2025-01-30T13:52:00.456279975Z" level=error msg="Failed to destroy network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.456425 containerd[1799]: time="2025-01-30T13:52:00.456414639Z" level=error msg="encountered an error cleaning up failed sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.456445 containerd[1799]: time="2025-01-30T13:52:00.456436444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.456504 kubelet[3061]: E0130 13:52:00.456492 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.456545 kubelet[3061]: E0130 13:52:00.456510 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:00.456545 kubelet[3061]: E0130 13:52:00.456524 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:00.456593 kubelet[3061]: E0130 13:52:00.456547 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:52:01.396628 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d-shm.mount: Deactivated successfully. Jan 30 13:52:01.404196 kubelet[3061]: I0130 13:52:01.404182 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe" Jan 30 13:52:01.404489 containerd[1799]: time="2025-01-30T13:52:01.404469514Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" Jan 30 13:52:01.404719 containerd[1799]: time="2025-01-30T13:52:01.404659293Z" level=info msg="Ensure that sandbox 6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe in task-service has been cleanup successfully" Jan 30 13:52:01.404784 containerd[1799]: time="2025-01-30T13:52:01.404770883Z" level=info msg="TearDown network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" successfully" Jan 30 13:52:01.404810 containerd[1799]: time="2025-01-30T13:52:01.404782531Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" returns successfully" Jan 30 13:52:01.404863 kubelet[3061]: I0130 13:52:01.404853 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab" Jan 30 13:52:01.404918 containerd[1799]: time="2025-01-30T13:52:01.404901318Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:01.405002 containerd[1799]: time="2025-01-30T13:52:01.404963681Z" level=info msg="TearDown network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" successfully" Jan 30 13:52:01.405030 containerd[1799]: time="2025-01-30T13:52:01.405001771Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" returns successfully" Jan 30 13:52:01.405071 containerd[1799]: time="2025-01-30T13:52:01.405044677Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" Jan 30 13:52:01.405177 containerd[1799]: time="2025-01-30T13:52:01.405167515Z" level=info msg="Ensure that sandbox 77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab in task-service has been cleanup successfully" Jan 30 13:52:01.405338 containerd[1799]: time="2025-01-30T13:52:01.405315100Z" level=info msg="TearDown network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" successfully" Jan 30 13:52:01.405338 containerd[1799]: time="2025-01-30T13:52:01.405333221Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" returns successfully" Jan 30 13:52:01.405421 containerd[1799]: 
time="2025-01-30T13:52:01.405413153Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:01.405470 containerd[1799]: time="2025-01-30T13:52:01.405460367Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:52:01.405500 containerd[1799]: time="2025-01-30T13:52:01.405468727Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:52:01.405500 containerd[1799]: time="2025-01-30T13:52:01.405474527Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:01.405571 containerd[1799]: time="2025-01-30T13:52:01.405531846Z" level=info msg="TearDown network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" successfully" Jan 30 13:52:01.405600 containerd[1799]: time="2025-01-30T13:52:01.405571013Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" returns successfully" Jan 30 13:52:01.405715 containerd[1799]: time="2025-01-30T13:52:01.405699100Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:01.405823 containerd[1799]: time="2025-01-30T13:52:01.405751678Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:52:01.405823 containerd[1799]: time="2025-01-30T13:52:01.405794680Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:52:01.405890 containerd[1799]: time="2025-01-30T13:52:01.405841868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:3,}" Jan 30 13:52:01.406310 containerd[1799]: time="2025-01-30T13:52:01.406040978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:3,}" Jan 30 13:52:01.406438 kubelet[3061]: I0130 13:52:01.406285 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c" Jan 30 13:52:01.406372 systemd[1]: run-netns-cni\x2dbaa70122\x2dcfc6\x2d2681\x2d771c\x2dd1520b552a4a.mount: Deactivated successfully. 
Jan 30 13:52:01.406996 containerd[1799]: time="2025-01-30T13:52:01.406984245Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" Jan 30 13:52:01.407115 containerd[1799]: time="2025-01-30T13:52:01.407105528Z" level=info msg="Ensure that sandbox f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c in task-service has been cleanup successfully" Jan 30 13:52:01.407222 containerd[1799]: time="2025-01-30T13:52:01.407211974Z" level=info msg="TearDown network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" successfully" Jan 30 13:52:01.407222 containerd[1799]: time="2025-01-30T13:52:01.407220651Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" returns successfully" Jan 30 13:52:01.407341 containerd[1799]: time="2025-01-30T13:52:01.407331516Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:01.407374 containerd[1799]: time="2025-01-30T13:52:01.407368249Z" level=info msg="TearDown network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" successfully" Jan 30 13:52:01.407393 containerd[1799]: time="2025-01-30T13:52:01.407374580Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" returns successfully" Jan 30 13:52:01.407466 kubelet[3061]: I0130 13:52:01.407456 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d" Jan 30 13:52:01.407686 containerd[1799]: time="2025-01-30T13:52:01.407520100Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:01.407686 containerd[1799]: time="2025-01-30T13:52:01.407572975Z" level=info msg="TearDown network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 30 13:52:01.407686 containerd[1799]: time="2025-01-30T13:52:01.407582842Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:52:01.407745 containerd[1799]: time="2025-01-30T13:52:01.407736004Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" Jan 30 13:52:01.407823 containerd[1799]: time="2025-01-30T13:52:01.407812852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:3,}" Jan 30 13:52:01.407840 containerd[1799]: time="2025-01-30T13:52:01.407824327Z" level=info msg="Ensure that sandbox 89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d in task-service has been cleanup successfully" Jan 30 13:52:01.407916 containerd[1799]: time="2025-01-30T13:52:01.407900645Z" level=info msg="TearDown network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" successfully" Jan 30 13:52:01.407933 containerd[1799]: time="2025-01-30T13:52:01.407920912Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" returns successfully" Jan 30 13:52:01.408055 containerd[1799]: time="2025-01-30T13:52:01.408036651Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:01.408125 containerd[1799]: 
time="2025-01-30T13:52:01.408091404Z" level=info msg="TearDown network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" successfully" Jan 30 13:52:01.408156 containerd[1799]: time="2025-01-30T13:52:01.408126937Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" returns successfully" Jan 30 13:52:01.408186 kubelet[3061]: I0130 13:52:01.408148 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2" Jan 30 13:52:01.408294 containerd[1799]: time="2025-01-30T13:52:01.408281890Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:01.408391 containerd[1799]: time="2025-01-30T13:52:01.408349859Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:52:01.408425 containerd[1799]: time="2025-01-30T13:52:01.408417574Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:52:01.408446 containerd[1799]: time="2025-01-30T13:52:01.408381480Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" Jan 30 13:52:01.408482 systemd[1]: run-netns-cni\x2d08954036\x2dfbf7\x2dd252\x2d744c\x2d624cb671e591.mount: Deactivated successfully. Jan 30 13:52:01.408540 systemd[1]: run-netns-cni\x2d0cf82125\x2d94c2\x2dc53f\x2da9ed\x2d606defee6114.mount: Deactivated successfully. Jan 30 13:52:01.408578 containerd[1799]: time="2025-01-30T13:52:01.408565636Z" level=info msg="Ensure that sandbox 5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2 in task-service has been cleanup successfully" Jan 30 13:52:01.408616 containerd[1799]: time="2025-01-30T13:52:01.408570458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:3,}" Jan 30 13:52:01.408678 containerd[1799]: time="2025-01-30T13:52:01.408666178Z" level=info msg="TearDown network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" successfully" Jan 30 13:52:01.408705 containerd[1799]: time="2025-01-30T13:52:01.408679161Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" returns successfully" Jan 30 13:52:01.408792 kubelet[3061]: I0130 13:52:01.408778 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b" Jan 30 13:52:01.408813 containerd[1799]: time="2025-01-30T13:52:01.408790107Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:01.408852 containerd[1799]: time="2025-01-30T13:52:01.408842771Z" level=info msg="TearDown network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" successfully" Jan 30 13:52:01.408872 containerd[1799]: time="2025-01-30T13:52:01.408852831Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" returns successfully" Jan 30 13:52:01.408989 containerd[1799]: time="2025-01-30T13:52:01.408978164Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:01.409042 
containerd[1799]: time="2025-01-30T13:52:01.409032916Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:52:01.409061 containerd[1799]: time="2025-01-30T13:52:01.409044356Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:52:01.409079 containerd[1799]: time="2025-01-30T13:52:01.409036911Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" Jan 30 13:52:01.409192 containerd[1799]: time="2025-01-30T13:52:01.409182371Z" level=info msg="Ensure that sandbox 0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b in task-service has been cleanup successfully" Jan 30 13:52:01.409234 containerd[1799]: time="2025-01-30T13:52:01.409223958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:3,}" Jan 30 13:52:01.409291 containerd[1799]: time="2025-01-30T13:52:01.409280602Z" level=info msg="TearDown network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" successfully" Jan 30 13:52:01.409312 containerd[1799]: time="2025-01-30T13:52:01.409291898Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" returns successfully" Jan 30 13:52:01.409420 containerd[1799]: time="2025-01-30T13:52:01.409410176Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:01.409459 containerd[1799]: time="2025-01-30T13:52:01.409451429Z" level=info msg="TearDown network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" successfully" Jan 30 13:52:01.409477 containerd[1799]: time="2025-01-30T13:52:01.409458804Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" returns successfully" Jan 30 13:52:01.409570 containerd[1799]: time="2025-01-30T13:52:01.409562262Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:01.409613 containerd[1799]: time="2025-01-30T13:52:01.409604357Z" level=info msg="TearDown network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:52:01.409637 containerd[1799]: time="2025-01-30T13:52:01.409614110Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:52:01.409796 containerd[1799]: time="2025-01-30T13:52:01.409784021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:3,}" Jan 30 13:52:01.410516 systemd[1]: run-netns-cni\x2df4f2240f\x2dc207\x2dfb2f\x2db462\x2d54bb827e8677.mount: Deactivated successfully. Jan 30 13:52:01.410566 systemd[1]: run-netns-cni\x2d4c2d95e1\x2de14e\x2d8f3b\x2d5fb2\x2dce42e3981131.mount: Deactivated successfully. Jan 30 13:52:01.410602 systemd[1]: run-netns-cni\x2df36189c7\x2de03a\x2d31e7\x2de6a2\x2d2c8c3b953dfc.mount: Deactivated successfully. 
Jan 30 13:52:01.461888 containerd[1799]: time="2025-01-30T13:52:01.461813518Z" level=error msg="Failed to destroy network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.462082 containerd[1799]: time="2025-01-30T13:52:01.461813514Z" level=error msg="Failed to destroy network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.462251 containerd[1799]: time="2025-01-30T13:52:01.462232737Z" level=error msg="encountered an error cleaning up failed sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.462301 containerd[1799]: time="2025-01-30T13:52:01.462243773Z" level=error msg="encountered an error cleaning up failed sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.462301 containerd[1799]: time="2025-01-30T13:52:01.462277689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.462301 containerd[1799]: time="2025-01-30T13:52:01.462282210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.462451 kubelet[3061]: E0130 13:52:01.462427 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.462497 kubelet[3061]: E0130 13:52:01.462479 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:01.462523 kubelet[3061]: E0130 13:52:01.462496 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:01.462541 kubelet[3061]: E0130 13:52:01.462427 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.462559 kubelet[3061]: E0130 13:52:01.462537 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" podUID="520ef51f-94d3-44ca-8df4-36fb6501930e" Jan 30 13:52:01.462559 kubelet[3061]: E0130 13:52:01.462549 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:01.462616 kubelet[3061]: E0130 13:52:01.462566 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:01.462616 kubelet[3061]: E0130 13:52:01.462591 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-c4k8v" podUID="04991ed4-bcd5-4f9b-b027-ba79cc5149a0" Jan 30 13:52:01.463390 containerd[1799]: time="2025-01-30T13:52:01.463373628Z" level=error msg="Failed to destroy network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.463536 containerd[1799]: time="2025-01-30T13:52:01.463520844Z" level=error msg="encountered an error cleaning up failed sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.463578 containerd[1799]: time="2025-01-30T13:52:01.463550287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.463635 kubelet[3061]: E0130 13:52:01.463620 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.463659 kubelet[3061]: E0130 13:52:01.463647 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:01.463680 kubelet[3061]: E0130 13:52:01.463659 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:01.463702 kubelet[3061]: E0130 13:52:01.463680 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:52:01.464984 containerd[1799]: time="2025-01-30T13:52:01.464967357Z" level=error msg="Failed to destroy network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.465118 containerd[1799]: time="2025-01-30T13:52:01.465107734Z" level=error msg="encountered an error cleaning up failed sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.465150 containerd[1799]: time="2025-01-30T13:52:01.465136986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.465234 kubelet[3061]: E0130 13:52:01.465219 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.465266 kubelet[3061]: E0130 13:52:01.465241 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:01.465266 kubelet[3061]: E0130 13:52:01.465252 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:01.465301 kubelet[3061]: E0130 13:52:01.465272 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" podUID="3a1bfeac-92e9-4eac-a174-cabc6e4921c6" Jan 30 13:52:01.465751 containerd[1799]: time="2025-01-30T13:52:01.465710309Z" level=error msg="Failed to destroy network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.465894 containerd[1799]: time="2025-01-30T13:52:01.465880873Z" level=error msg="encountered an error cleaning up failed sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.465919 containerd[1799]: time="2025-01-30T13:52:01.465906562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.465990 kubelet[3061]: E0130 13:52:01.465977 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.466014 kubelet[3061]: E0130 13:52:01.465999 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:01.466014 kubelet[3061]: E0130 13:52:01.466011 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:01.466057 kubelet[3061]: E0130 13:52:01.466030 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" podUID="e79b48d4-f379-4135-a6bf-0a0ccaeb5c67" Jan 30 13:52:01.466780 containerd[1799]: time="2025-01-30T13:52:01.466764664Z" level=error msg="Failed to destroy network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.466926 containerd[1799]: time="2025-01-30T13:52:01.466891307Z" level=error msg="encountered an error cleaning up failed sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.466926 containerd[1799]: time="2025-01-30T13:52:01.466912136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.467003 kubelet[3061]: E0130 13:52:01.466992 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.467029 kubelet[3061]: E0130 13:52:01.467011 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:01.467029 kubelet[3061]: E0130 13:52:01.467022 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:01.467073 kubelet[3061]: E0130 13:52:01.467038 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7vhrq" podUID="7c64afcb-0671-44d3-8136-9ee0bad3d72c" Jan 30 13:52:02.028448 update_engine[1786]: I20250130 13:52:02.028371 1786 update_attempter.cc:509] Updating boot flags... Jan 30 13:52:02.058360 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (5070) Jan 30 13:52:02.087332 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (5071) Jan 30 13:52:02.410986 kubelet[3061]: I0130 13:52:02.410917 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b" Jan 30 13:52:02.411324 containerd[1799]: time="2025-01-30T13:52:02.411293210Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\"" Jan 30 13:52:02.411594 containerd[1799]: time="2025-01-30T13:52:02.411477048Z" level=info msg="Ensure that sandbox 73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b in task-service has been cleanup successfully" Jan 30 13:52:02.411655 containerd[1799]: time="2025-01-30T13:52:02.411635727Z" level=info msg="TearDown network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" successfully" Jan 30 13:52:02.411655 containerd[1799]: time="2025-01-30T13:52:02.411649431Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" returns successfully" Jan 30 13:52:02.411811 containerd[1799]: time="2025-01-30T13:52:02.411794076Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" Jan 30 13:52:02.411867 containerd[1799]: time="2025-01-30T13:52:02.411854154Z" level=info msg="TearDown network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" successfully" Jan 30 13:52:02.411900 containerd[1799]: time="2025-01-30T13:52:02.411865351Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" returns successfully" Jan 30 13:52:02.412069 containerd[1799]: time="2025-01-30T13:52:02.412047517Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:02.412164 containerd[1799]: time="2025-01-30T13:52:02.412130070Z" level=info msg="TearDown network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" successfully" Jan 30 13:52:02.412198 containerd[1799]: time="2025-01-30T13:52:02.412164528Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" returns successfully" Jan 30 13:52:02.412226 kubelet[3061]: I0130 13:52:02.412178 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645" Jan 30 13:52:02.412385 containerd[1799]: time="2025-01-30T13:52:02.412371426Z" level=info 
msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:02.412441 containerd[1799]: time="2025-01-30T13:52:02.412430120Z" level=info msg="TearDown network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:52:02.412468 containerd[1799]: time="2025-01-30T13:52:02.412440877Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:52:02.412596 containerd[1799]: time="2025-01-30T13:52:02.412575447Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\"" Jan 30 13:52:02.412808 containerd[1799]: time="2025-01-30T13:52:02.412786841Z" level=info msg="Ensure that sandbox 531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645 in task-service has been cleanup successfully" Jan 30 13:52:02.412875 containerd[1799]: time="2025-01-30T13:52:02.412803308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:4,}" Jan 30 13:52:02.412981 containerd[1799]: time="2025-01-30T13:52:02.412964992Z" level=info msg="TearDown network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" successfully" Jan 30 13:52:02.413021 containerd[1799]: time="2025-01-30T13:52:02.412982159Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" returns successfully" Jan 30 13:52:02.413197 containerd[1799]: time="2025-01-30T13:52:02.413180347Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" Jan 30 13:52:02.413268 containerd[1799]: time="2025-01-30T13:52:02.413261251Z" level=info msg="TearDown network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" successfully" Jan 30 13:52:02.413291 containerd[1799]: time="2025-01-30T13:52:02.413271110Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" returns successfully" Jan 30 13:52:02.413407 containerd[1799]: time="2025-01-30T13:52:02.413394918Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:02.413444 kubelet[3061]: I0130 13:52:02.413434 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11" Jan 30 13:52:02.413483 containerd[1799]: time="2025-01-30T13:52:02.413451885Z" level=info msg="TearDown network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" successfully" Jan 30 13:52:02.413513 containerd[1799]: time="2025-01-30T13:52:02.413484291Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" returns successfully" Jan 30 13:52:02.413689 containerd[1799]: time="2025-01-30T13:52:02.413677375Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:02.413740 containerd[1799]: time="2025-01-30T13:52:02.413678821Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\"" Jan 30 13:52:02.413740 containerd[1799]: time="2025-01-30T13:52:02.413719068Z" level=info msg="TearDown network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 
30 13:52:02.413740 containerd[1799]: time="2025-01-30T13:52:02.413740282Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:52:02.413806 containerd[1799]: time="2025-01-30T13:52:02.413793642Z" level=info msg="Ensure that sandbox d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11 in task-service has been cleanup successfully" Jan 30 13:52:02.413797 systemd[1]: run-netns-cni\x2dd25f0b4e\x2d4d6d\x2dfc8f\x2de011\x2d8aa3e191efac.mount: Deactivated successfully. Jan 30 13:52:02.413970 containerd[1799]: time="2025-01-30T13:52:02.413882259Z" level=info msg="TearDown network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" successfully" Jan 30 13:52:02.413970 containerd[1799]: time="2025-01-30T13:52:02.413890057Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" returns successfully" Jan 30 13:52:02.413970 containerd[1799]: time="2025-01-30T13:52:02.413936256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:4,}" Jan 30 13:52:02.414047 containerd[1799]: time="2025-01-30T13:52:02.414036093Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" Jan 30 13:52:02.414084 containerd[1799]: time="2025-01-30T13:52:02.414076546Z" level=info msg="TearDown network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" successfully" Jan 30 13:52:02.414084 containerd[1799]: time="2025-01-30T13:52:02.414083230Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" returns successfully" Jan 30 13:52:02.414207 containerd[1799]: time="2025-01-30T13:52:02.414196013Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:02.414249 containerd[1799]: time="2025-01-30T13:52:02.414242156Z" level=info msg="TearDown network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" successfully" Jan 30 13:52:02.414274 containerd[1799]: time="2025-01-30T13:52:02.414249193Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" returns successfully" Jan 30 13:52:02.414356 containerd[1799]: time="2025-01-30T13:52:02.414346223Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:02.414403 containerd[1799]: time="2025-01-30T13:52:02.414394257Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:52:02.414434 containerd[1799]: time="2025-01-30T13:52:02.414402431Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:52:02.414455 kubelet[3061]: I0130 13:52:02.414400 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357" Jan 30 13:52:02.414615 containerd[1799]: time="2025-01-30T13:52:02.414600920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:4,}" Jan 30 13:52:02.414653 containerd[1799]: 
time="2025-01-30T13:52:02.414625557Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\"" Jan 30 13:52:02.414728 containerd[1799]: time="2025-01-30T13:52:02.414718014Z" level=info msg="Ensure that sandbox 317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357 in task-service has been cleanup successfully" Jan 30 13:52:02.414807 containerd[1799]: time="2025-01-30T13:52:02.414799359Z" level=info msg="TearDown network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" successfully" Jan 30 13:52:02.414807 containerd[1799]: time="2025-01-30T13:52:02.414807145Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" returns successfully" Jan 30 13:52:02.414891 containerd[1799]: time="2025-01-30T13:52:02.414882582Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" Jan 30 13:52:02.414937 containerd[1799]: time="2025-01-30T13:52:02.414916989Z" level=info msg="TearDown network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" successfully" Jan 30 13:52:02.414957 containerd[1799]: time="2025-01-30T13:52:02.414937313Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" returns successfully" Jan 30 13:52:02.415035 containerd[1799]: time="2025-01-30T13:52:02.415024061Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:02.415083 containerd[1799]: time="2025-01-30T13:52:02.415073636Z" level=info msg="TearDown network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" successfully" Jan 30 13:52:02.415102 containerd[1799]: time="2025-01-30T13:52:02.415083213Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" returns successfully" Jan 30 13:52:02.415176 containerd[1799]: time="2025-01-30T13:52:02.415167429Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:02.415205 kubelet[3061]: I0130 13:52:02.415198 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255" Jan 30 13:52:02.415231 containerd[1799]: time="2025-01-30T13:52:02.415210405Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:52:02.415231 containerd[1799]: time="2025-01-30T13:52:02.415217564Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:52:02.415406 containerd[1799]: time="2025-01-30T13:52:02.415394319Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\"" Jan 30 13:52:02.415441 containerd[1799]: time="2025-01-30T13:52:02.415429398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:4,}" Jan 30 13:52:02.415496 containerd[1799]: time="2025-01-30T13:52:02.415488070Z" level=info msg="Ensure that sandbox dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255 in task-service has been cleanup successfully" Jan 30 13:52:02.415571 containerd[1799]: time="2025-01-30T13:52:02.415563683Z" level=info 
msg="TearDown network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" successfully" Jan 30 13:52:02.415599 containerd[1799]: time="2025-01-30T13:52:02.415571441Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" returns successfully" Jan 30 13:52:02.415680 containerd[1799]: time="2025-01-30T13:52:02.415666628Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" Jan 30 13:52:02.415729 containerd[1799]: time="2025-01-30T13:52:02.415720782Z" level=info msg="TearDown network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" successfully" Jan 30 13:52:02.415758 containerd[1799]: time="2025-01-30T13:52:02.415736707Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" returns successfully" Jan 30 13:52:02.415883 containerd[1799]: time="2025-01-30T13:52:02.415865763Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:02.415927 containerd[1799]: time="2025-01-30T13:52:02.415919216Z" level=info msg="TearDown network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" successfully" Jan 30 13:52:02.415927 containerd[1799]: time="2025-01-30T13:52:02.415926674Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" returns successfully" Jan 30 13:52:02.416050 containerd[1799]: time="2025-01-30T13:52:02.416039660Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:02.416089 containerd[1799]: time="2025-01-30T13:52:02.416081229Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:52:02.416116 containerd[1799]: time="2025-01-30T13:52:02.416088397Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:52:02.416138 kubelet[3061]: I0130 13:52:02.416086 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415" Jan 30 13:52:02.416222 systemd[1]: run-netns-cni\x2d5ca8081e\x2d0265\x2d4f80\x2d6c7b\x2d00f123157a99.mount: Deactivated successfully. Jan 30 13:52:02.416285 containerd[1799]: time="2025-01-30T13:52:02.416275794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:4,}" Jan 30 13:52:02.416290 systemd[1]: run-netns-cni\x2df12c44f7\x2d7217\x2d300c\x2db0cb\x2dfc185f6e9c18.mount: Deactivated successfully. 
Jan 30 13:52:02.416368 containerd[1799]: time="2025-01-30T13:52:02.416309661Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\"" Jan 30 13:52:02.416441 containerd[1799]: time="2025-01-30T13:52:02.416427797Z" level=info msg="Ensure that sandbox 269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415 in task-service has been cleanup successfully" Jan 30 13:52:02.416539 containerd[1799]: time="2025-01-30T13:52:02.416527087Z" level=info msg="TearDown network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" successfully" Jan 30 13:52:02.416578 containerd[1799]: time="2025-01-30T13:52:02.416538718Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" returns successfully" Jan 30 13:52:02.416679 containerd[1799]: time="2025-01-30T13:52:02.416665939Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" Jan 30 13:52:02.416731 containerd[1799]: time="2025-01-30T13:52:02.416722149Z" level=info msg="TearDown network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" successfully" Jan 30 13:52:02.416751 containerd[1799]: time="2025-01-30T13:52:02.416732396Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" returns successfully" Jan 30 13:52:02.416868 containerd[1799]: time="2025-01-30T13:52:02.416856696Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:02.416910 containerd[1799]: time="2025-01-30T13:52:02.416898048Z" level=info msg="TearDown network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" successfully" Jan 30 13:52:02.416910 containerd[1799]: time="2025-01-30T13:52:02.416907068Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" returns successfully" Jan 30 13:52:02.417036 containerd[1799]: time="2025-01-30T13:52:02.417026066Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:02.417083 containerd[1799]: time="2025-01-30T13:52:02.417073986Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:52:02.417101 containerd[1799]: time="2025-01-30T13:52:02.417083958Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:52:02.417247 containerd[1799]: time="2025-01-30T13:52:02.417237984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:4,}" Jan 30 13:52:02.418515 systemd[1]: run-netns-cni\x2d126d4b3f\x2d4471\x2dc63f\x2d305a\x2df02d76b8530e.mount: Deactivated successfully. Jan 30 13:52:02.418566 systemd[1]: run-netns-cni\x2da3552fbd\x2db0f9\x2dd02f\x2d628a\x2ddd8877d32eda.mount: Deactivated successfully. Jan 30 13:52:02.418603 systemd[1]: run-netns-cni\x2d8238a64c\x2da9e1\x2dab4a\x2d3988\x2d76386291639b.mount: Deactivated successfully. 
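[Editor's note] The 13:52:02 entries above show the kubelet tearing down every previously failed sandbox (attempt 4 for each pod) and immediately asking containerd to recreate it; the same pattern repeats below at 13:52:03 as attempt 5. A small illustrative Python sketch for grouping such journal lines by pod and attempt is shown here to make the repetition easier to scan; the regular expression is an assumption tailored to the containerd message format visible in this log, not part of any tool that appears in it.

```python
# Illustrative only: summarise repeated RunPodSandbox requests from journal
# lines like the ones above. The regex is an assumption based on the
# containerd message format shown in this log, not an official parser.
import re
import sys
from collections import Counter

RUN_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<name>[^,]+),'
    r'Uid:(?P<uid>[^,]+),Namespace:(?P<ns>[^,]+),Attempt:(?P<attempt>\d+),\}'
)

def summarise(lines):
    """Count RunPodSandbox requests per (namespace/pod, attempt number)."""
    counts = Counter()
    for line in lines:
        m = RUN_RE.search(line)
        if m:
            counts[(f"{m['ns']}/{m['name']}", int(m['attempt']))] += 1
    return counts

if __name__ == "__main__":
    for (pod, attempt), n in sorted(summarise(sys.stdin).items()):
        print(f"{pod}: attempt {attempt} requested {n}x")
```

For example, piping `journalctl -u containerd` output through this script would collapse the wall of entries above into one line per pod and attempt.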
Jan 30 13:52:02.464824 containerd[1799]: time="2025-01-30T13:52:02.464786801Z" level=error msg="Failed to destroy network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.465181 containerd[1799]: time="2025-01-30T13:52:02.465140965Z" level=error msg="encountered an error cleaning up failed sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.465266 containerd[1799]: time="2025-01-30T13:52:02.465248989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.465642 kubelet[3061]: E0130 13:52:02.465576 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.465720 kubelet[3061]: E0130 13:52:02.465697 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:02.465776 kubelet[3061]: E0130 13:52:02.465727 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:02.465864 kubelet[3061]: E0130 13:52:02.465810 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7vhrq" 
podUID="7c64afcb-0671-44d3-8136-9ee0bad3d72c" Jan 30 13:52:02.469887 containerd[1799]: time="2025-01-30T13:52:02.469849387Z" level=error msg="Failed to destroy network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.469887 containerd[1799]: time="2025-01-30T13:52:02.469876388Z" level=error msg="Failed to destroy network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470139 containerd[1799]: time="2025-01-30T13:52:02.470121139Z" level=error msg="encountered an error cleaning up failed sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470188 containerd[1799]: time="2025-01-30T13:52:02.470136786Z" level=error msg="encountered an error cleaning up failed sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470188 containerd[1799]: time="2025-01-30T13:52:02.470168606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470250 containerd[1799]: time="2025-01-30T13:52:02.470171196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470351 kubelet[3061]: E0130 13:52:02.470329 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470387 kubelet[3061]: E0130 13:52:02.470369 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:02.470406 kubelet[3061]: E0130 13:52:02.470383 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:02.470424 kubelet[3061]: E0130 13:52:02.470329 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470424 kubelet[3061]: E0130 13:52:02.470412 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" podUID="3a1bfeac-92e9-4eac-a174-cabc6e4921c6" Jan 30 13:52:02.470470 kubelet[3061]: E0130 13:52:02.470426 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:02.470470 kubelet[3061]: E0130 13:52:02.470439 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:02.470470 kubelet[3061]: E0130 13:52:02.470456 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" podUID="e79b48d4-f379-4135-a6bf-0a0ccaeb5c67" Jan 30 13:52:02.470575 containerd[1799]: time="2025-01-30T13:52:02.470515351Z" level=error msg="Failed to destroy network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470608 containerd[1799]: time="2025-01-30T13:52:02.470579854Z" level=error msg="Failed to destroy network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470717 containerd[1799]: time="2025-01-30T13:52:02.470701887Z" level=error msg="encountered an error cleaning up failed sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470751 containerd[1799]: time="2025-01-30T13:52:02.470732288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470796 containerd[1799]: time="2025-01-30T13:52:02.470779707Z" level=error msg="encountered an error cleaning up failed sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470821 kubelet[3061]: E0130 13:52:02.470810 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470841 kubelet[3061]: E0130 13:52:02.470828 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:02.470865 
containerd[1799]: time="2025-01-30T13:52:02.470816913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470884 kubelet[3061]: E0130 13:52:02.470839 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:02.470884 kubelet[3061]: E0130 13:52:02.470859 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" podUID="520ef51f-94d3-44ca-8df4-36fb6501930e" Jan 30 13:52:02.470929 kubelet[3061]: E0130 13:52:02.470879 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.470929 kubelet[3061]: E0130 13:52:02.470895 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:02.470929 kubelet[3061]: E0130 13:52:02.470903 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:02.471007 kubelet[3061]: E0130 13:52:02.470921 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:52:02.474178 containerd[1799]: time="2025-01-30T13:52:02.474132558Z" level=error msg="Failed to destroy network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.474450 containerd[1799]: time="2025-01-30T13:52:02.474406058Z" level=error msg="encountered an error cleaning up failed sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.474450 containerd[1799]: time="2025-01-30T13:52:02.474436327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.474636 kubelet[3061]: E0130 13:52:02.474584 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.474680 kubelet[3061]: E0130 13:52:02.474634 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:02.474680 kubelet[3061]: E0130 13:52:02.474648 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:02.474750 kubelet[3061]: E0130 13:52:02.474685 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-c4k8v" podUID="04991ed4-bcd5-4f9b-b027-ba79cc5149a0" Jan 30 13:52:03.394123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09-shm.mount: Deactivated successfully. Jan 30 13:52:03.418004 kubelet[3061]: I0130 13:52:03.417987 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c" Jan 30 13:52:03.418338 containerd[1799]: time="2025-01-30T13:52:03.418314901Z" level=info msg="StopPodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\"" Jan 30 13:52:03.418487 containerd[1799]: time="2025-01-30T13:52:03.418476238Z" level=info msg="Ensure that sandbox 9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c in task-service has been cleanup successfully" Jan 30 13:52:03.418616 containerd[1799]: time="2025-01-30T13:52:03.418602929Z" level=info msg="TearDown network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" successfully" Jan 30 13:52:03.418616 containerd[1799]: time="2025-01-30T13:52:03.418613699Z" level=info msg="StopPodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" returns successfully" Jan 30 13:52:03.418780 containerd[1799]: time="2025-01-30T13:52:03.418768052Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\"" Jan 30 13:52:03.418835 containerd[1799]: time="2025-01-30T13:52:03.418824672Z" level=info msg="TearDown network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" successfully" Jan 30 13:52:03.418869 containerd[1799]: time="2025-01-30T13:52:03.418835128Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" returns successfully" Jan 30 13:52:03.418905 kubelet[3061]: I0130 13:52:03.418845 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386" Jan 30 13:52:03.418978 containerd[1799]: time="2025-01-30T13:52:03.418965781Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" Jan 30 13:52:03.419034 containerd[1799]: time="2025-01-30T13:52:03.419023290Z" level=info msg="TearDown network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" successfully" Jan 30 13:52:03.419068 containerd[1799]: time="2025-01-30T13:52:03.419034680Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" returns successfully" Jan 30 13:52:03.419128 containerd[1799]: time="2025-01-30T13:52:03.419115294Z" level=info msg="StopPodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\"" Jan 30 13:52:03.419181 containerd[1799]: time="2025-01-30T13:52:03.419171570Z" level=info 
msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:03.419228 containerd[1799]: time="2025-01-30T13:52:03.419220009Z" level=info msg="TearDown network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" successfully" Jan 30 13:52:03.419252 containerd[1799]: time="2025-01-30T13:52:03.419229264Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" returns successfully" Jan 30 13:52:03.419252 containerd[1799]: time="2025-01-30T13:52:03.419237688Z" level=info msg="Ensure that sandbox ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386 in task-service has been cleanup successfully" Jan 30 13:52:03.419339 containerd[1799]: time="2025-01-30T13:52:03.419330359Z" level=info msg="TearDown network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" successfully" Jan 30 13:52:03.419366 containerd[1799]: time="2025-01-30T13:52:03.419338630Z" level=info msg="StopPodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" returns successfully" Jan 30 13:52:03.419366 containerd[1799]: time="2025-01-30T13:52:03.419349039Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:03.419426 containerd[1799]: time="2025-01-30T13:52:03.419412414Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:52:03.419426 containerd[1799]: time="2025-01-30T13:52:03.419422016Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:52:03.419466 containerd[1799]: time="2025-01-30T13:52:03.419449307Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\"" Jan 30 13:52:03.419500 containerd[1799]: time="2025-01-30T13:52:03.419489151Z" level=info msg="TearDown network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" successfully" Jan 30 13:52:03.419500 containerd[1799]: time="2025-01-30T13:52:03.419498614Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" returns successfully" Jan 30 13:52:03.419621 containerd[1799]: time="2025-01-30T13:52:03.419612055Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" Jan 30 13:52:03.419664 containerd[1799]: time="2025-01-30T13:52:03.419654791Z" level=info msg="TearDown network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" successfully" Jan 30 13:52:03.419697 containerd[1799]: time="2025-01-30T13:52:03.419665112Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" returns successfully" Jan 30 13:52:03.419793 kubelet[3061]: I0130 13:52:03.419784 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09" Jan 30 13:52:03.419831 containerd[1799]: time="2025-01-30T13:52:03.419802185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:5,}" Jan 30 13:52:03.419886 containerd[1799]: time="2025-01-30T13:52:03.419802298Z" level=info msg="StopPodSandbox for 
\"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:03.420683 containerd[1799]: time="2025-01-30T13:52:03.420080253Z" level=info msg="TearDown network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" successfully" Jan 30 13:52:03.420683 containerd[1799]: time="2025-01-30T13:52:03.420133454Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" returns successfully" Jan 30 13:52:03.420683 containerd[1799]: time="2025-01-30T13:52:03.420169325Z" level=info msg="StopPodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\"" Jan 30 13:52:03.420683 containerd[1799]: time="2025-01-30T13:52:03.420426803Z" level=info msg="Ensure that sandbox 091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09 in task-service has been cleanup successfully" Jan 30 13:52:03.420683 containerd[1799]: time="2025-01-30T13:52:03.420609614Z" level=info msg="TearDown network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" successfully" Jan 30 13:52:03.420683 containerd[1799]: time="2025-01-30T13:52:03.420622325Z" level=info msg="StopPodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" returns successfully" Jan 30 13:52:03.420419 systemd[1]: run-netns-cni\x2dbcb58f5a\x2dccec\x2d7071\x2d514f\x2db751d41df75b.mount: Deactivated successfully. Jan 30 13:52:03.421988 containerd[1799]: time="2025-01-30T13:52:03.421967583Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:03.422043 containerd[1799]: time="2025-01-30T13:52:03.421994431Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\"" Jan 30 13:52:03.422078 containerd[1799]: time="2025-01-30T13:52:03.422035643Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:52:03.422078 containerd[1799]: time="2025-01-30T13:52:03.422070961Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:52:03.422132 containerd[1799]: time="2025-01-30T13:52:03.422052055Z" level=info msg="TearDown network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" successfully" Jan 30 13:52:03.422132 containerd[1799]: time="2025-01-30T13:52:03.422103552Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" returns successfully" Jan 30 13:52:03.422229 containerd[1799]: time="2025-01-30T13:52:03.422218017Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" Jan 30 13:52:03.422291 containerd[1799]: time="2025-01-30T13:52:03.422262235Z" level=info msg="TearDown network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" successfully" Jan 30 13:52:03.422335 containerd[1799]: time="2025-01-30T13:52:03.422291744Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" returns successfully" Jan 30 13:52:03.422599 containerd[1799]: time="2025-01-30T13:52:03.422585923Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:03.422679 containerd[1799]: time="2025-01-30T13:52:03.422644351Z" level=info msg="TearDown network for sandbox 
\"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" successfully" Jan 30 13:52:03.423015 containerd[1799]: time="2025-01-30T13:52:03.422727154Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" returns successfully" Jan 30 13:52:03.423015 containerd[1799]: time="2025-01-30T13:52:03.422653979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:5,}" Jan 30 13:52:03.423241 containerd[1799]: time="2025-01-30T13:52:03.423143216Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:03.423241 containerd[1799]: time="2025-01-30T13:52:03.423201069Z" level=info msg="TearDown network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:52:03.423241 containerd[1799]: time="2025-01-30T13:52:03.423209056Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:52:03.423221 systemd[1]: run-netns-cni\x2d36e125cf\x2d09aa\x2d4c37\x2dd583\x2de65b0e596f95.mount: Deactivated successfully. Jan 30 13:52:03.423315 systemd[1]: run-netns-cni\x2dd814d76f\x2d0f1f\x2d4f52\x2dd8c3\x2d0e55c1c34fe8.mount: Deactivated successfully. Jan 30 13:52:03.423867 containerd[1799]: time="2025-01-30T13:52:03.423852422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:5,}" Jan 30 13:52:03.424465 kubelet[3061]: I0130 13:52:03.424452 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794" Jan 30 13:52:03.424725 containerd[1799]: time="2025-01-30T13:52:03.424714216Z" level=info msg="StopPodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\"" Jan 30 13:52:03.424846 containerd[1799]: time="2025-01-30T13:52:03.424834785Z" level=info msg="Ensure that sandbox 6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794 in task-service has been cleanup successfully" Jan 30 13:52:03.424976 containerd[1799]: time="2025-01-30T13:52:03.424964918Z" level=info msg="TearDown network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" successfully" Jan 30 13:52:03.425010 containerd[1799]: time="2025-01-30T13:52:03.424976717Z" level=info msg="StopPodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" returns successfully" Jan 30 13:52:03.425099 containerd[1799]: time="2025-01-30T13:52:03.425088345Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\"" Jan 30 13:52:03.425154 containerd[1799]: time="2025-01-30T13:52:03.425142489Z" level=info msg="TearDown network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" successfully" Jan 30 13:52:03.425182 containerd[1799]: time="2025-01-30T13:52:03.425154038Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" returns successfully" Jan 30 13:52:03.425265 containerd[1799]: time="2025-01-30T13:52:03.425254526Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" Jan 30 13:52:03.425312 containerd[1799]: time="2025-01-30T13:52:03.425303109Z" 
level=info msg="TearDown network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425312357Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" returns successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425456343Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425513744Z" level=info msg="TearDown network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425523510Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" returns successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425617720Z" level=info msg="StopPodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\"" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425628692Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425672673Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425680365Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425718493Z" level=info msg="Ensure that sandbox 3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70 in task-service has been cleanup successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425807305Z" level=info msg="TearDown network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425815293Z" level=info msg="StopPodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" returns successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425868331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:5,}" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425930056Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\"" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425965933Z" level=info msg="TearDown network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.425972472Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" returns successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426045148Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426080121Z" level=info msg="TearDown network for sandbox 
\"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426088914Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" returns successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426218145Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426251588Z" level=info msg="TearDown network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426257024Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" returns successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426352228Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426401769Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:52:03.426426 containerd[1799]: time="2025-01-30T13:52:03.426411244Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:52:03.426880 kubelet[3061]: I0130 13:52:03.425350 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70" Jan 30 13:52:03.426880 kubelet[3061]: I0130 13:52:03.426258 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e" Jan 30 13:52:03.426721 systemd[1]: run-netns-cni\x2de36c6362\x2d8ce3\x2d1b4b\x2d154a\x2d45cd52cb0c0c.mount: Deactivated successfully. 
Jan 30 13:52:03.426964 containerd[1799]: time="2025-01-30T13:52:03.426497599Z" level=info msg="StopPodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\"" Jan 30 13:52:03.426964 containerd[1799]: time="2025-01-30T13:52:03.426593663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:5,}" Jan 30 13:52:03.426964 containerd[1799]: time="2025-01-30T13:52:03.426620457Z" level=info msg="Ensure that sandbox 8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e in task-service has been cleanup successfully" Jan 30 13:52:03.426964 containerd[1799]: time="2025-01-30T13:52:03.426722108Z" level=info msg="TearDown network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" successfully" Jan 30 13:52:03.426964 containerd[1799]: time="2025-01-30T13:52:03.426733216Z" level=info msg="StopPodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" returns successfully" Jan 30 13:52:03.426964 containerd[1799]: time="2025-01-30T13:52:03.426870332Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\"" Jan 30 13:52:03.426964 containerd[1799]: time="2025-01-30T13:52:03.426921485Z" level=info msg="TearDown network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" successfully" Jan 30 13:52:03.426964 containerd[1799]: time="2025-01-30T13:52:03.426930292Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" returns successfully" Jan 30 13:52:03.427094 containerd[1799]: time="2025-01-30T13:52:03.427043233Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" Jan 30 13:52:03.427128 containerd[1799]: time="2025-01-30T13:52:03.427093977Z" level=info msg="TearDown network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" successfully" Jan 30 13:52:03.427147 containerd[1799]: time="2025-01-30T13:52:03.427128371Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" returns successfully" Jan 30 13:52:03.427233 containerd[1799]: time="2025-01-30T13:52:03.427223285Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:03.427283 containerd[1799]: time="2025-01-30T13:52:03.427275076Z" level=info msg="TearDown network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" successfully" Jan 30 13:52:03.427324 containerd[1799]: time="2025-01-30T13:52:03.427283072Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" returns successfully" Jan 30 13:52:03.427423 containerd[1799]: time="2025-01-30T13:52:03.427412973Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:03.427466 containerd[1799]: time="2025-01-30T13:52:03.427456896Z" level=info msg="TearDown network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 30 13:52:03.427496 containerd[1799]: time="2025-01-30T13:52:03.427465950Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:52:03.427636 containerd[1799]: time="2025-01-30T13:52:03.427626159Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:5,}" Jan 30 13:52:03.428678 systemd[1]: run-netns-cni\x2d4d71de7f\x2d3176\x2de60e\x2d93f2\x2dac26ec175633.mount: Deactivated successfully. Jan 30 13:52:03.428731 systemd[1]: run-netns-cni\x2da23a733d\x2daaf5\x2d6c92\x2d4650\x2de7f3fae2bd81.mount: Deactivated successfully. Jan 30 13:52:03.458203 containerd[1799]: time="2025-01-30T13:52:03.458103964Z" level=error msg="Failed to destroy network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.458590 containerd[1799]: time="2025-01-30T13:52:03.458564884Z" level=error msg="encountered an error cleaning up failed sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.458678 containerd[1799]: time="2025-01-30T13:52:03.458660107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.459096 kubelet[3061]: E0130 13:52:03.459036 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.459189 kubelet[3061]: E0130 13:52:03.459170 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:03.459215 kubelet[3061]: E0130 13:52:03.459202 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:03.459322 kubelet[3061]: E0130 13:52:03.459281 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" podUID="e79b48d4-f379-4135-a6bf-0a0ccaeb5c67" Jan 30 13:52:03.493564 containerd[1799]: time="2025-01-30T13:52:03.493518907Z" level=error msg="Failed to destroy network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.493702 containerd[1799]: time="2025-01-30T13:52:03.493519975Z" level=error msg="Failed to destroy network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.493802 containerd[1799]: time="2025-01-30T13:52:03.493782482Z" level=error msg="encountered an error cleaning up failed sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.493845 containerd[1799]: time="2025-01-30T13:52:03.493835408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.493869 containerd[1799]: time="2025-01-30T13:52:03.493794808Z" level=error msg="encountered an error cleaning up failed sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.493899 containerd[1799]: time="2025-01-30T13:52:03.493887663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.493969 containerd[1799]: time="2025-01-30T13:52:03.493933064Z" level=error msg="Failed to destroy network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494071 kubelet[3061]: E0130 13:52:03.494039 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494122 kubelet[3061]: E0130 13:52:03.494108 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:03.494157 kubelet[3061]: E0130 13:52:03.494039 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494194 kubelet[3061]: E0130 13:52:03.494153 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:03.494194 kubelet[3061]: E0130 13:52:03.494170 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:03.494249 containerd[1799]: time="2025-01-30T13:52:03.494147825Z" level=error msg="encountered an error cleaning up failed sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494249 containerd[1799]: time="2025-01-30T13:52:03.494181407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 
13:52:03.494294 kubelet[3061]: E0130 13:52:03.494129 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:03.494294 kubelet[3061]: E0130 13:52:03.494198 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" podUID="3a1bfeac-92e9-4eac-a174-cabc6e4921c6" Jan 30 13:52:03.494294 kubelet[3061]: E0130 13:52:03.494220 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:52:03.494395 kubelet[3061]: E0130 13:52:03.494258 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494395 kubelet[3061]: E0130 13:52:03.494287 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:03.494395 kubelet[3061]: E0130 13:52:03.494304 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" 
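Every CNI add and delete in the entries above fails for the same reason: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, a file that only exists once the calico/node container is running and has bind-mounted /var/lib/calico from the host, and at this point in the boot that container has not started yet. The following is a minimal Go sketch of that dependency, not Calico's actual source, using only the standard library; the wrapped error takes the same shape as the message repeated throughout this log.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // nodename mimics the lookup the CNI plugin needs before it can do any
    // network setup or teardown for a pod sandbox.
    func nodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            // err already reads "stat /var/lib/calico/nodename: no such file or directory".
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node name:", name)
    }
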
Jan 30 13:52:03.494486 kubelet[3061]: E0130 13:52:03.494339 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7vhrq" podUID="7c64afcb-0671-44d3-8136-9ee0bad3d72c" Jan 30 13:52:03.494536 containerd[1799]: time="2025-01-30T13:52:03.494459785Z" level=error msg="Failed to destroy network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494536 containerd[1799]: time="2025-01-30T13:52:03.494509305Z" level=error msg="Failed to destroy network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494643 containerd[1799]: time="2025-01-30T13:52:03.494627376Z" level=error msg="encountered an error cleaning up failed sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494674 containerd[1799]: time="2025-01-30T13:52:03.494654702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494711 containerd[1799]: time="2025-01-30T13:52:03.494657317Z" level=error msg="encountered an error cleaning up failed sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494741 containerd[1799]: time="2025-01-30T13:52:03.494706912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 13:52:03.494778 kubelet[3061]: E0130 13:52:03.494766 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494803 kubelet[3061]: E0130 13:52:03.494787 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:03.494803 kubelet[3061]: E0130 13:52:03.494797 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:03.494842 kubelet[3061]: E0130 13:52:03.494812 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" podUID="520ef51f-94d3-44ca-8df4-36fb6501930e" Jan 30 13:52:03.494842 kubelet[3061]: E0130 13:52:03.494765 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.494842 kubelet[3061]: E0130 13:52:03.494831 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:03.494911 kubelet[3061]: E0130 13:52:03.494840 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:03.494911 kubelet[3061]: E0130 13:52:03.494853 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-c4k8v" podUID="04991ed4-bcd5-4f9b-b027-ba79cc5149a0" Jan 30 13:52:04.428293 kubelet[3061]: I0130 13:52:04.428247 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21" Jan 30 13:52:04.428568 containerd[1799]: time="2025-01-30T13:52:04.428550074Z" level=info msg="StopPodSandbox for \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\"" Jan 30 13:52:04.428701 containerd[1799]: time="2025-01-30T13:52:04.428688630Z" level=info msg="Ensure that sandbox 782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21 in task-service has been cleanup successfully" Jan 30 13:52:04.428824 containerd[1799]: time="2025-01-30T13:52:04.428814530Z" level=info msg="TearDown network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\" successfully" Jan 30 13:52:04.428854 containerd[1799]: time="2025-01-30T13:52:04.428824464Z" level=info msg="StopPodSandbox for \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\" returns successfully" Jan 30 13:52:04.428940 containerd[1799]: time="2025-01-30T13:52:04.428931055Z" level=info msg="StopPodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\"" Jan 30 13:52:04.428993 containerd[1799]: time="2025-01-30T13:52:04.428965760Z" level=info msg="TearDown network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" successfully" Jan 30 13:52:04.429010 containerd[1799]: time="2025-01-30T13:52:04.428993431Z" level=info msg="StopPodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" returns successfully" Jan 30 13:52:04.429109 containerd[1799]: time="2025-01-30T13:52:04.429099322Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\"" Jan 30 13:52:04.429177 containerd[1799]: time="2025-01-30T13:52:04.429150007Z" level=info msg="TearDown network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" successfully" Jan 30 13:52:04.429196 containerd[1799]: time="2025-01-30T13:52:04.429178925Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" returns successfully" Jan 30 13:52:04.429270 kubelet[3061]: I0130 13:52:04.429260 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80" Jan 30 13:52:04.429358 containerd[1799]: time="2025-01-30T13:52:04.429344740Z" level=info msg="StopPodSandbox for 
\"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" Jan 30 13:52:04.429429 containerd[1799]: time="2025-01-30T13:52:04.429403147Z" level=info msg="TearDown network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" successfully" Jan 30 13:52:04.429448 containerd[1799]: time="2025-01-30T13:52:04.429430672Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" returns successfully" Jan 30 13:52:04.429476 containerd[1799]: time="2025-01-30T13:52:04.429455356Z" level=info msg="StopPodSandbox for \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\"" Jan 30 13:52:04.429561 containerd[1799]: time="2025-01-30T13:52:04.429551836Z" level=info msg="Ensure that sandbox feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80 in task-service has been cleanup successfully" Jan 30 13:52:04.429601 containerd[1799]: time="2025-01-30T13:52:04.429557610Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:04.429622 containerd[1799]: time="2025-01-30T13:52:04.429615093Z" level=info msg="TearDown network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" successfully" Jan 30 13:52:04.429720 containerd[1799]: time="2025-01-30T13:52:04.429624181Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" returns successfully" Jan 30 13:52:04.429720 containerd[1799]: time="2025-01-30T13:52:04.429624898Z" level=info msg="TearDown network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\" successfully" Jan 30 13:52:04.429720 containerd[1799]: time="2025-01-30T13:52:04.429675046Z" level=info msg="StopPodSandbox for \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\" returns successfully" Jan 30 13:52:04.429841 containerd[1799]: time="2025-01-30T13:52:04.429832139Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:04.429874 containerd[1799]: time="2025-01-30T13:52:04.429835668Z" level=info msg="StopPodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\"" Jan 30 13:52:04.429874 containerd[1799]: time="2025-01-30T13:52:04.429869421Z" level=info msg="TearDown network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 30 13:52:04.429911 containerd[1799]: time="2025-01-30T13:52:04.429875853Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:52:04.429911 containerd[1799]: time="2025-01-30T13:52:04.429887929Z" level=info msg="TearDown network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" successfully" Jan 30 13:52:04.429948 containerd[1799]: time="2025-01-30T13:52:04.429915533Z" level=info msg="StopPodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" returns successfully" Jan 30 13:52:04.430091 containerd[1799]: time="2025-01-30T13:52:04.430074353Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\"" Jan 30 13:52:04.430148 containerd[1799]: time="2025-01-30T13:52:04.430138672Z" level=info msg="TearDown network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" successfully" Jan 30 13:52:04.430171 containerd[1799]: 
time="2025-01-30T13:52:04.430077166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:6,}" Jan 30 13:52:04.430228 containerd[1799]: time="2025-01-30T13:52:04.430149618Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" returns successfully" Jan 30 13:52:04.430372 containerd[1799]: time="2025-01-30T13:52:04.430329319Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" Jan 30 13:52:04.430401 containerd[1799]: time="2025-01-30T13:52:04.430378123Z" level=info msg="TearDown network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" successfully" Jan 30 13:52:04.430401 containerd[1799]: time="2025-01-30T13:52:04.430388954Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" returns successfully" Jan 30 13:52:04.430445 kubelet[3061]: I0130 13:52:04.430407 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672" Jan 30 13:52:04.430465 systemd[1]: run-netns-cni\x2d187c63c1\x2d2bd6\x2dba01\x2d036e\x2db219783aca8c.mount: Deactivated successfully. Jan 30 13:52:04.430594 containerd[1799]: time="2025-01-30T13:52:04.430509867Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:04.430594 containerd[1799]: time="2025-01-30T13:52:04.430555530Z" level=info msg="TearDown network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" successfully" Jan 30 13:52:04.430594 containerd[1799]: time="2025-01-30T13:52:04.430564498Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" returns successfully" Jan 30 13:52:04.430704 containerd[1799]: time="2025-01-30T13:52:04.430691880Z" level=info msg="StopPodSandbox for \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\"" Jan 30 13:52:04.430744 containerd[1799]: time="2025-01-30T13:52:04.430700381Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:04.430787 containerd[1799]: time="2025-01-30T13:52:04.430757954Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:52:04.430820 containerd[1799]: time="2025-01-30T13:52:04.430786982Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:52:04.430843 containerd[1799]: time="2025-01-30T13:52:04.430833463Z" level=info msg="Ensure that sandbox f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672 in task-service has been cleanup successfully" Jan 30 13:52:04.430948 containerd[1799]: time="2025-01-30T13:52:04.430937791Z" level=info msg="TearDown network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\" successfully" Jan 30 13:52:04.430978 containerd[1799]: time="2025-01-30T13:52:04.430948373Z" level=info msg="StopPodSandbox for \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\" returns successfully" Jan 30 13:52:04.430978 containerd[1799]: time="2025-01-30T13:52:04.430969056Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:6,}" Jan 30 13:52:04.431063 containerd[1799]: time="2025-01-30T13:52:04.431052788Z" level=info msg="StopPodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\"" Jan 30 13:52:04.431111 containerd[1799]: time="2025-01-30T13:52:04.431101419Z" level=info msg="TearDown network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" successfully" Jan 30 13:52:04.431144 containerd[1799]: time="2025-01-30T13:52:04.431110998Z" level=info msg="StopPodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" returns successfully" Jan 30 13:52:04.431217 containerd[1799]: time="2025-01-30T13:52:04.431206841Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\"" Jan 30 13:52:04.431285 containerd[1799]: time="2025-01-30T13:52:04.431253468Z" level=info msg="TearDown network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" successfully" Jan 30 13:52:04.431285 containerd[1799]: time="2025-01-30T13:52:04.431283926Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" returns successfully" Jan 30 13:52:04.431460 containerd[1799]: time="2025-01-30T13:52:04.431448975Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" Jan 30 13:52:04.431512 containerd[1799]: time="2025-01-30T13:52:04.431502880Z" level=info msg="TearDown network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" successfully" Jan 30 13:52:04.431546 containerd[1799]: time="2025-01-30T13:52:04.431511510Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" returns successfully" Jan 30 13:52:04.431629 containerd[1799]: time="2025-01-30T13:52:04.431621086Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:04.431661 containerd[1799]: time="2025-01-30T13:52:04.431654826Z" level=info msg="TearDown network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" successfully" Jan 30 13:52:04.431683 containerd[1799]: time="2025-01-30T13:52:04.431660802Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" returns successfully" Jan 30 13:52:04.431706 kubelet[3061]: I0130 13:52:04.431657 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c" Jan 30 13:52:04.431782 containerd[1799]: time="2025-01-30T13:52:04.431772501Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:04.431821 containerd[1799]: time="2025-01-30T13:52:04.431813743Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:52:04.431821 containerd[1799]: time="2025-01-30T13:52:04.431820468Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:52:04.431877 containerd[1799]: time="2025-01-30T13:52:04.431847611Z" level=info msg="StopPodSandbox for \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\"" Jan 30 13:52:04.431961 
containerd[1799]: time="2025-01-30T13:52:04.431951729Z" level=info msg="Ensure that sandbox 0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c in task-service has been cleanup successfully" Jan 30 13:52:04.432008 containerd[1799]: time="2025-01-30T13:52:04.431997635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:6,}" Jan 30 13:52:04.432061 containerd[1799]: time="2025-01-30T13:52:04.432051275Z" level=info msg="TearDown network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\" successfully" Jan 30 13:52:04.432096 containerd[1799]: time="2025-01-30T13:52:04.432060833Z" level=info msg="StopPodSandbox for \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\" returns successfully" Jan 30 13:52:04.432168 containerd[1799]: time="2025-01-30T13:52:04.432160082Z" level=info msg="StopPodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\"" Jan 30 13:52:04.432201 containerd[1799]: time="2025-01-30T13:52:04.432194598Z" level=info msg="TearDown network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" successfully" Jan 30 13:52:04.432224 containerd[1799]: time="2025-01-30T13:52:04.432201000Z" level=info msg="StopPodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" returns successfully" Jan 30 13:52:04.432350 containerd[1799]: time="2025-01-30T13:52:04.432338432Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\"" Jan 30 13:52:04.432397 containerd[1799]: time="2025-01-30T13:52:04.432388602Z" level=info msg="TearDown network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" successfully" Jan 30 13:52:04.432423 containerd[1799]: time="2025-01-30T13:52:04.432398289Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" returns successfully" Jan 30 13:52:04.432522 containerd[1799]: time="2025-01-30T13:52:04.432512121Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" Jan 30 13:52:04.432556 containerd[1799]: time="2025-01-30T13:52:04.432549977Z" level=info msg="TearDown network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" successfully" Jan 30 13:52:04.432579 containerd[1799]: time="2025-01-30T13:52:04.432555836Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" returns successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.432673891Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.432718015Z" level=info msg="TearDown network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.432739450Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" returns successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.432860720Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.432915571Z" level=info msg="TearDown network 
for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.432925013Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433029522Z" level=info msg="StopPodSandbox for \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\"" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433133197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:6,}" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433158858Z" level=info msg="Ensure that sandbox 41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4 in task-service has been cleanup successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433255457Z" level=info msg="TearDown network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\" successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433266177Z" level=info msg="StopPodSandbox for \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\" returns successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433364158Z" level=info msg="StopPodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\"" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433401200Z" level=info msg="TearDown network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433407057Z" level=info msg="StopPodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" returns successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433481935Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\"" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433535528Z" level=info msg="TearDown network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433545681Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" returns successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433659305Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433694518Z" level=info msg="TearDown network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" successfully" Jan 30 13:52:04.433694 containerd[1799]: time="2025-01-30T13:52:04.433700466Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" returns successfully" Jan 30 13:52:04.434070 kubelet[3061]: I0130 13:52:04.432794 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4" Jan 30 13:52:04.434070 kubelet[3061]: I0130 13:52:04.433799 3061 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314" Jan 30 13:52:04.432631 systemd[1]: run-netns-cni\x2da10710b4\x2ddeaa\x2dd30f\x2d503f\x2d7bbf5d0ab076.mount: Deactivated successfully. Jan 30 13:52:04.434147 containerd[1799]: time="2025-01-30T13:52:04.433803517Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:04.434147 containerd[1799]: time="2025-01-30T13:52:04.433843175Z" level=info msg="TearDown network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" successfully" Jan 30 13:52:04.434147 containerd[1799]: time="2025-01-30T13:52:04.433850378Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" returns successfully" Jan 30 13:52:04.434147 containerd[1799]: time="2025-01-30T13:52:04.433940144Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:04.434147 containerd[1799]: time="2025-01-30T13:52:04.433979028Z" level=info msg="StopPodSandbox for \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\"" Jan 30 13:52:04.434147 containerd[1799]: time="2025-01-30T13:52:04.433985562Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:52:04.434147 containerd[1799]: time="2025-01-30T13:52:04.433995856Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:52:04.434147 containerd[1799]: time="2025-01-30T13:52:04.434109442Z" level=info msg="Ensure that sandbox cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314 in task-service has been cleanup successfully" Jan 30 13:52:04.432698 systemd[1]: run-netns-cni\x2d0b657ab7\x2dcaf5\x2d19e3\x2d8919\x2d30fd0ae208ff.mount: Deactivated successfully. Jan 30 13:52:04.434348 containerd[1799]: time="2025-01-30T13:52:04.434145655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:6,}" Jan 30 13:52:04.434348 containerd[1799]: time="2025-01-30T13:52:04.434338274Z" level=info msg="TearDown network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\" successfully" Jan 30 13:52:04.434401 containerd[1799]: time="2025-01-30T13:52:04.434351667Z" level=info msg="StopPodSandbox for \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\" returns successfully" Jan 30 13:52:04.434555 containerd[1799]: time="2025-01-30T13:52:04.434538242Z" level=info msg="StopPodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\"" Jan 30 13:52:04.435306 containerd[1799]: time="2025-01-30T13:52:04.434635047Z" level=info msg="TearDown network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" successfully" Jan 30 13:52:04.435306 containerd[1799]: time="2025-01-30T13:52:04.434653777Z" level=info msg="StopPodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" returns successfully" Jan 30 13:52:04.434880 systemd[1]: run-netns-cni\x2d41e1f444\x2ddaf6\x2d27d5\x2daa86\x2d0869e0fdc787.mount: Deactivated successfully. Jan 30 13:52:04.434968 systemd[1]: run-netns-cni\x2da5a37bfc\x2defba\x2dc774\x2d69f9\x2dd6c51769bc27.mount: Deactivated successfully. 
Jan 30 13:52:04.435487 containerd[1799]: time="2025-01-30T13:52:04.435476249Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\"" Jan 30 13:52:04.435546 containerd[1799]: time="2025-01-30T13:52:04.435523028Z" level=info msg="TearDown network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" successfully" Jan 30 13:52:04.435575 containerd[1799]: time="2025-01-30T13:52:04.435545582Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" returns successfully" Jan 30 13:52:04.435668 containerd[1799]: time="2025-01-30T13:52:04.435657336Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" Jan 30 13:52:04.435716 containerd[1799]: time="2025-01-30T13:52:04.435698067Z" level=info msg="TearDown network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" successfully" Jan 30 13:52:04.435737 containerd[1799]: time="2025-01-30T13:52:04.435716086Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" returns successfully" Jan 30 13:52:04.435839 containerd[1799]: time="2025-01-30T13:52:04.435829041Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:04.435873 containerd[1799]: time="2025-01-30T13:52:04.435863956Z" level=info msg="TearDown network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" successfully" Jan 30 13:52:04.435873 containerd[1799]: time="2025-01-30T13:52:04.435869338Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" returns successfully" Jan 30 13:52:04.435972 containerd[1799]: time="2025-01-30T13:52:04.435960956Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:04.436006 containerd[1799]: time="2025-01-30T13:52:04.435997776Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:52:04.436030 containerd[1799]: time="2025-01-30T13:52:04.436005607Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:52:04.436197 containerd[1799]: time="2025-01-30T13:52:04.436187717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:6,}" Jan 30 13:52:04.437059 systemd[1]: run-netns-cni\x2d527e0fd3\x2d6d2b\x2dfd79\x2d7f70\x2deb343c7f0c2c.mount: Deactivated successfully. 
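What repeats through this window is the kubelet's sandbox retry cycle: every sandbox that previously failed for a pod is stopped and its network torn down, the corresponding netns mount is released, and RunPodSandbox is re-issued with the Attempt counter bumped (5 to 6 here), only to fail with the same CNI error until calico/node comes up. The sketch below uses invented helper names and a local struct mirroring the &PodSandboxMetadata{...} lines printed by containerd; it illustrates the cycle and is not kubelet or containerd code.

    package main

    import (
        "errors"
        "fmt"
    )

    // podSandboxMetadata mirrors the fields containerd prints above.
    type podSandboxMetadata struct {
        Name, Uid, Namespace string
        Attempt              uint32
    }

    var errCNI = errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

    // runPodSandbox stands in for the CRI call; it fails until the nodename file exists.
    func runPodSandbox(meta podSandboxMetadata) (string, error) {
        return "", errCNI
    }

    func stopPodSandbox(id string) { fmt.Printf("StopPodSandbox %q\n", id) }

    func main() {
        meta := podSandboxMetadata{
            Name:      "coredns-6f6b679f8f-7vhrq",
            Uid:       "7c64afcb-0671-44d3-8136-9ee0bad3d72c",
            Namespace: "kube-system",
            Attempt:   5,
        }
        failed := []string{"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c"}

        for retry := 0; retry < 2; retry++ {
            for _, id := range failed {
                stopPodSandbox(id) // tear down every earlier failed sandbox first
            }
            meta.Attempt++
            id, err := runPodSandbox(meta)
            if err != nil {
                fmt.Printf("RunPodSandbox attempt %d failed: %v\n", meta.Attempt, err)
                continue
            }
            fmt.Println("sandbox started:", id)
            break
        }
    }
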
Jan 30 13:52:04.484848 containerd[1799]: time="2025-01-30T13:52:04.484767005Z" level=error msg="Failed to destroy network for sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.485153 containerd[1799]: time="2025-01-30T13:52:04.485135198Z" level=error msg="encountered an error cleaning up failed sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.485258 containerd[1799]: time="2025-01-30T13:52:04.485228076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.485836 kubelet[3061]: E0130 13:52:04.485742 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.485970 kubelet[3061]: E0130 13:52:04.485952 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:04.485995 kubelet[3061]: E0130 13:52:04.485983 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" Jan 30 13:52:04.486070 kubelet[3061]: E0130 13:52:04.486045 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-s2glf_calico-apiserver(520ef51f-94d3-44ca-8df4-36fb6501930e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" podUID="520ef51f-94d3-44ca-8df4-36fb6501930e" Jan 30 13:52:04.493979 containerd[1799]: time="2025-01-30T13:52:04.493945023Z" level=error msg="Failed to destroy network for sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494157 containerd[1799]: time="2025-01-30T13:52:04.494133112Z" level=error msg="Failed to destroy network for sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494206 containerd[1799]: time="2025-01-30T13:52:04.494194131Z" level=error msg="encountered an error cleaning up failed sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494243 containerd[1799]: time="2025-01-30T13:52:04.494233634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494363 containerd[1799]: time="2025-01-30T13:52:04.494345171Z" level=error msg="encountered an error cleaning up failed sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494402 containerd[1799]: time="2025-01-30T13:52:04.494386503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494471 kubelet[3061]: E0130 13:52:04.494410 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494504 kubelet[3061]: E0130 13:52:04.494484 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:04.494504 kubelet[3061]: E0130 13:52:04.494495 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494553 containerd[1799]: time="2025-01-30T13:52:04.494493614Z" level=error msg="Failed to destroy network for sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494583 kubelet[3061]: E0130 13:52:04.494503 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-c4k8v" Jan 30 13:52:04.494583 kubelet[3061]: E0130 13:52:04.494515 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:04.494583 kubelet[3061]: E0130 13:52:04.494527 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" Jan 30 13:52:04.494646 kubelet[3061]: E0130 13:52:04.494536 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-c4k8v_kube-system(04991ed4-bcd5-4f9b-b027-ba79cc5149a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-c4k8v" podUID="04991ed4-bcd5-4f9b-b027-ba79cc5149a0" Jan 30 13:52:04.494646 kubelet[3061]: E0130 13:52:04.494548 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c748b76b-mcv7b_calico-apiserver(e79b48d4-f379-4135-a6bf-0a0ccaeb5c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" podUID="e79b48d4-f379-4135-a6bf-0a0ccaeb5c67" Jan 30 13:52:04.494718 containerd[1799]: time="2025-01-30T13:52:04.494622231Z" level=error msg="Failed to destroy network for sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494718 containerd[1799]: time="2025-01-30T13:52:04.494643643Z" level=error msg="encountered an error cleaning up failed sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494718 containerd[1799]: time="2025-01-30T13:52:04.494670238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494783 kubelet[3061]: E0130 13:52:04.494755 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494808 kubelet[3061]: E0130 13:52:04.494782 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:04.494808 kubelet[3061]: E0130 13:52:04.494795 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" Jan 30 13:52:04.494855 containerd[1799]: time="2025-01-30T13:52:04.494786709Z" level=error msg="encountered an error cleaning up failed sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494855 containerd[1799]: time="2025-01-30T13:52:04.494813316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494944 kubelet[3061]: E0130 13:52:04.494813 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5984859c66-hc7cz_calico-system(3a1bfeac-92e9-4eac-a174-cabc6e4921c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" podUID="3a1bfeac-92e9-4eac-a174-cabc6e4921c6" Jan 30 13:52:04.494944 kubelet[3061]: E0130 13:52:04.494871 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.494944 kubelet[3061]: E0130 13:52:04.494888 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:04.495040 containerd[1799]: time="2025-01-30T13:52:04.494920257Z" level=error msg="Failed to destroy network for sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.495068 kubelet[3061]: E0130 13:52:04.494897 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vhrq" Jan 30 13:52:04.495068 kubelet[3061]: E0130 13:52:04.494914 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vhrq_kube-system(7c64afcb-0671-44d3-8136-9ee0bad3d72c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7vhrq" podUID="7c64afcb-0671-44d3-8136-9ee0bad3d72c" Jan 30 13:52:04.495138 containerd[1799]: time="2025-01-30T13:52:04.495064770Z" level=error msg="encountered an error cleaning up failed sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.495138 containerd[1799]: time="2025-01-30T13:52:04.495086484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.495190 kubelet[3061]: E0130 13:52:04.495144 3061 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:04.495190 kubelet[3061]: E0130 13:52:04.495162 3061 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:04.495190 kubelet[3061]: E0130 13:52:04.495174 3061 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gpjs7" Jan 30 13:52:04.495269 kubelet[3061]: E0130 13:52:04.495191 3061 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gpjs7_calico-system(08ec3d9c-69d5-48e2-969e-46a8611fadde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gpjs7" podUID="08ec3d9c-69d5-48e2-969e-46a8611fadde" Jan 30 13:52:04.635152 containerd[1799]: time="2025-01-30T13:52:04.635128680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:04.635349 containerd[1799]: time="2025-01-30T13:52:04.635334452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:52:04.635664 containerd[1799]: time="2025-01-30T13:52:04.635650753Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:04.636575 containerd[1799]: time="2025-01-30T13:52:04.636562899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:04.636965 containerd[1799]: time="2025-01-30T13:52:04.636952601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.239497889s" Jan 30 13:52:04.637003 containerd[1799]: time="2025-01-30T13:52:04.636967174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:52:04.640269 containerd[1799]: time="2025-01-30T13:52:04.640256535Z" level=info msg="CreateContainer within sandbox \"2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:52:04.648176 containerd[1799]: time="2025-01-30T13:52:04.648138402Z" level=info msg="CreateContainer within sandbox \"2707cbfd6ea0c487e9745846805c123eaab26afbc52bbc78940c10e52a740ced\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a63f147db1db0caa7d6ee9902dfdc7b5c9ade5a8028be98040d0621ca14f216\"" Jan 30 13:52:04.648364 containerd[1799]: time="2025-01-30T13:52:04.648352315Z" level=info msg="StartContainer for \"4a63f147db1db0caa7d6ee9902dfdc7b5c9ade5a8028be98040d0621ca14f216\"" Jan 30 13:52:04.678495 systemd[1]: Started cri-containerd-4a63f147db1db0caa7d6ee9902dfdc7b5c9ade5a8028be98040d0621ca14f216.scope - libcontainer container 4a63f147db1db0caa7d6ee9902dfdc7b5c9ade5a8028be98040d0621ca14f216. Jan 30 13:52:04.698974 containerd[1799]: time="2025-01-30T13:52:04.698943214Z" level=info msg="StartContainer for \"4a63f147db1db0caa7d6ee9902dfdc7b5c9ade5a8028be98040d0621ca14f216\" returns successfully" Jan 30 13:52:04.762914 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 30 13:52:04.762968 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 13:52:05.398970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350944623.mount: Deactivated successfully. Jan 30 13:52:05.443119 kubelet[3061]: I0130 13:52:05.443072 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9" Jan 30 13:52:05.444331 containerd[1799]: time="2025-01-30T13:52:05.444229735Z" level=info msg="StopPodSandbox for \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\"" Jan 30 13:52:05.444988 containerd[1799]: time="2025-01-30T13:52:05.444917786Z" level=info msg="Ensure that sandbox e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9 in task-service has been cleanup successfully" Jan 30 13:52:05.445372 containerd[1799]: time="2025-01-30T13:52:05.445307192Z" level=info msg="TearDown network for sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\" successfully" Jan 30 13:52:05.445478 containerd[1799]: time="2025-01-30T13:52:05.445370956Z" level=info msg="StopPodSandbox for \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\" returns successfully" Jan 30 13:52:05.446084 containerd[1799]: time="2025-01-30T13:52:05.445993049Z" level=info msg="StopPodSandbox for \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\"" Jan 30 13:52:05.446399 containerd[1799]: time="2025-01-30T13:52:05.446238133Z" level=info msg="TearDown network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\" successfully" Jan 30 13:52:05.446565 containerd[1799]: time="2025-01-30T13:52:05.446398167Z" level=info msg="StopPodSandbox for \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\" returns successfully" Jan 30 13:52:05.447137 containerd[1799]: time="2025-01-30T13:52:05.447067716Z" level=info msg="StopPodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\"" Jan 30 13:52:05.447374 containerd[1799]: time="2025-01-30T13:52:05.447308252Z" level=info msg="TearDown network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" successfully" Jan 30 13:52:05.447539 containerd[1799]: time="2025-01-30T13:52:05.447379933Z" level=info msg="StopPodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" returns successfully" Jan 30 13:52:05.448094 containerd[1799]: time="2025-01-30T13:52:05.448028843Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\"" Jan 30 13:52:05.448375 containerd[1799]: time="2025-01-30T13:52:05.448283085Z" level=info msg="TearDown network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" successfully" Jan 30 13:52:05.448545 containerd[1799]: time="2025-01-30T13:52:05.448374252Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" returns successfully" Jan 30 13:52:05.449131 containerd[1799]: time="2025-01-30T13:52:05.449065759Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" Jan 30 13:52:05.449362 containerd[1799]: time="2025-01-30T13:52:05.449260068Z" level=info msg="TearDown network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" successfully" Jan 30 13:52:05.449362 containerd[1799]: 
time="2025-01-30T13:52:05.449291616Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" returns successfully" Jan 30 13:52:05.449758 kubelet[3061]: I0130 13:52:05.449709 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180" Jan 30 13:52:05.449894 containerd[1799]: time="2025-01-30T13:52:05.449795174Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:05.450144 containerd[1799]: time="2025-01-30T13:52:05.450088163Z" level=info msg="TearDown network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" successfully" Jan 30 13:52:05.450255 containerd[1799]: time="2025-01-30T13:52:05.450148869Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" returns successfully" Jan 30 13:52:05.450982 containerd[1799]: time="2025-01-30T13:52:05.450915439Z" level=info msg="StopPodSandbox for \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\"" Jan 30 13:52:05.451146 containerd[1799]: time="2025-01-30T13:52:05.450934596Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:05.451382 containerd[1799]: time="2025-01-30T13:52:05.451301942Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:52:05.451382 containerd[1799]: time="2025-01-30T13:52:05.451370618Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:52:05.451638 containerd[1799]: time="2025-01-30T13:52:05.451436739Z" level=info msg="Ensure that sandbox 93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180 in task-service has been cleanup successfully" Jan 30 13:52:05.451926 containerd[1799]: time="2025-01-30T13:52:05.451869182Z" level=info msg="TearDown network for sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\" successfully" Jan 30 13:52:05.452068 containerd[1799]: time="2025-01-30T13:52:05.451916710Z" level=info msg="StopPodSandbox for \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\" returns successfully" Jan 30 13:52:05.452494 containerd[1799]: time="2025-01-30T13:52:05.452380514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:7,}" Jan 30 13:52:05.452650 containerd[1799]: time="2025-01-30T13:52:05.452498145Z" level=info msg="StopPodSandbox for \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\"" Jan 30 13:52:05.452780 containerd[1799]: time="2025-01-30T13:52:05.452700105Z" level=info msg="TearDown network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\" successfully" Jan 30 13:52:05.452780 containerd[1799]: time="2025-01-30T13:52:05.452736043Z" level=info msg="StopPodSandbox for \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\" returns successfully" Jan 30 13:52:05.453436 containerd[1799]: time="2025-01-30T13:52:05.453413550Z" level=info msg="StopPodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\"" Jan 30 13:52:05.453527 containerd[1799]: time="2025-01-30T13:52:05.453501325Z" level=info msg="TearDown network for sandbox 
\"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" successfully" Jan 30 13:52:05.453564 containerd[1799]: time="2025-01-30T13:52:05.453527267Z" level=info msg="StopPodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" returns successfully" Jan 30 13:52:05.453626 systemd[1]: run-netns-cni\x2d58d769a3\x2dfefb\x2d3cb2\x2d8896\x2db9843cce0586.mount: Deactivated successfully. Jan 30 13:52:05.453783 containerd[1799]: time="2025-01-30T13:52:05.453737770Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\"" Jan 30 13:52:05.453812 containerd[1799]: time="2025-01-30T13:52:05.453796594Z" level=info msg="TearDown network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" successfully" Jan 30 13:52:05.453812 containerd[1799]: time="2025-01-30T13:52:05.453808149Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" returns successfully" Jan 30 13:52:05.453927 containerd[1799]: time="2025-01-30T13:52:05.453912001Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" Jan 30 13:52:05.453978 containerd[1799]: time="2025-01-30T13:52:05.453968136Z" level=info msg="TearDown network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" successfully" Jan 30 13:52:05.454008 containerd[1799]: time="2025-01-30T13:52:05.453979544Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" returns successfully" Jan 30 13:52:05.454064 kubelet[3061]: I0130 13:52:05.454054 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86" Jan 30 13:52:05.454115 containerd[1799]: time="2025-01-30T13:52:05.454104163Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:05.454164 containerd[1799]: time="2025-01-30T13:52:05.454155025Z" level=info msg="TearDown network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" successfully" Jan 30 13:52:05.454194 containerd[1799]: time="2025-01-30T13:52:05.454163991Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" returns successfully" Jan 30 13:52:05.454316 containerd[1799]: time="2025-01-30T13:52:05.454304400Z" level=info msg="StopPodSandbox for \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\"" Jan 30 13:52:05.454358 containerd[1799]: time="2025-01-30T13:52:05.454346988Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:05.454409 containerd[1799]: time="2025-01-30T13:52:05.454399401Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:52:05.454432 containerd[1799]: time="2025-01-30T13:52:05.454409616Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:52:05.454432 containerd[1799]: time="2025-01-30T13:52:05.454416408Z" level=info msg="Ensure that sandbox 21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86 in task-service has been cleanup successfully" Jan 30 13:52:05.454527 containerd[1799]: time="2025-01-30T13:52:05.454516373Z" level=info msg="TearDown network for 
sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\" successfully" Jan 30 13:52:05.454544 containerd[1799]: time="2025-01-30T13:52:05.454528338Z" level=info msg="StopPodSandbox for \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\" returns successfully" Jan 30 13:52:05.454594 containerd[1799]: time="2025-01-30T13:52:05.454582291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:7,}" Jan 30 13:52:05.454618 containerd[1799]: time="2025-01-30T13:52:05.454610327Z" level=info msg="StopPodSandbox for \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\"" Jan 30 13:52:05.454661 containerd[1799]: time="2025-01-30T13:52:05.454653805Z" level=info msg="TearDown network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\" successfully" Jan 30 13:52:05.454680 containerd[1799]: time="2025-01-30T13:52:05.454661331Z" level=info msg="StopPodSandbox for \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\" returns successfully" Jan 30 13:52:05.454761 containerd[1799]: time="2025-01-30T13:52:05.454752499Z" level=info msg="StopPodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\"" Jan 30 13:52:05.454813 containerd[1799]: time="2025-01-30T13:52:05.454803551Z" level=info msg="TearDown network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" successfully" Jan 30 13:52:05.454829 containerd[1799]: time="2025-01-30T13:52:05.454814384Z" level=info msg="StopPodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" returns successfully" Jan 30 13:52:05.454912 containerd[1799]: time="2025-01-30T13:52:05.454903550Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\"" Jan 30 13:52:05.454951 containerd[1799]: time="2025-01-30T13:52:05.454944203Z" level=info msg="TearDown network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" successfully" Jan 30 13:52:05.454969 containerd[1799]: time="2025-01-30T13:52:05.454951319Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" returns successfully" Jan 30 13:52:05.455057 containerd[1799]: time="2025-01-30T13:52:05.455049248Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" Jan 30 13:52:05.455093 containerd[1799]: time="2025-01-30T13:52:05.455086547Z" level=info msg="TearDown network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" successfully" Jan 30 13:52:05.455110 containerd[1799]: time="2025-01-30T13:52:05.455093346Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" returns successfully" Jan 30 13:52:05.455209 containerd[1799]: time="2025-01-30T13:52:05.455196734Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:05.455260 containerd[1799]: time="2025-01-30T13:52:05.455251033Z" level=info msg="TearDown network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" successfully" Jan 30 13:52:05.455284 containerd[1799]: time="2025-01-30T13:52:05.455261511Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" returns successfully" Jan 30 
13:52:05.455308 kubelet[3061]: I0130 13:52:05.455292 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2" Jan 30 13:52:05.455436 containerd[1799]: time="2025-01-30T13:52:05.455425377Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:05.455504 containerd[1799]: time="2025-01-30T13:52:05.455489729Z" level=info msg="TearDown network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:52:05.455552 containerd[1799]: time="2025-01-30T13:52:05.455500828Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:52:05.455831 systemd[1]: run-netns-cni\x2dbed00ae4\x2d2c20\x2d0eeb\x2de1ab\x2d61fe847e5296.mount: Deactivated successfully. Jan 30 13:52:05.455926 systemd[1]: run-netns-cni\x2db5a07e1c\x2d9fe1\x2d6238\x2da0af\x2d1f2edf99a89e.mount: Deactivated successfully. Jan 30 13:52:05.456848 containerd[1799]: time="2025-01-30T13:52:05.455849439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:7,}" Jan 30 13:52:05.456848 containerd[1799]: time="2025-01-30T13:52:05.455903939Z" level=info msg="StopPodSandbox for \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\"" Jan 30 13:52:05.456848 containerd[1799]: time="2025-01-30T13:52:05.456062591Z" level=info msg="Ensure that sandbox a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2 in task-service has been cleanup successfully" Jan 30 13:52:05.456848 containerd[1799]: time="2025-01-30T13:52:05.456227600Z" level=info msg="TearDown network for sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\" successfully" Jan 30 13:52:05.456848 containerd[1799]: time="2025-01-30T13:52:05.456238211Z" level=info msg="StopPodSandbox for \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\" returns successfully" Jan 30 13:52:05.456951 containerd[1799]: time="2025-01-30T13:52:05.456912864Z" level=info msg="StopPodSandbox for \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\"" Jan 30 13:52:05.456975 containerd[1799]: time="2025-01-30T13:52:05.456953525Z" level=info msg="TearDown network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\" successfully" Jan 30 13:52:05.456975 containerd[1799]: time="2025-01-30T13:52:05.456959787Z" level=info msg="StopPodSandbox for \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\" returns successfully" Jan 30 13:52:05.457136 containerd[1799]: time="2025-01-30T13:52:05.457121746Z" level=info msg="StopPodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\"" Jan 30 13:52:05.457182 containerd[1799]: time="2025-01-30T13:52:05.457173274Z" level=info msg="TearDown network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" successfully" Jan 30 13:52:05.457226 containerd[1799]: time="2025-01-30T13:52:05.457182226Z" level=info msg="StopPodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" returns successfully" Jan 30 13:52:05.457483 containerd[1799]: time="2025-01-30T13:52:05.457469190Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\"" Jan 30 13:52:05.457528 containerd[1799]: 
time="2025-01-30T13:52:05.457518593Z" level=info msg="TearDown network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" successfully" Jan 30 13:52:05.457557 containerd[1799]: time="2025-01-30T13:52:05.457527875Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" returns successfully" Jan 30 13:52:05.457646 containerd[1799]: time="2025-01-30T13:52:05.457636169Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" Jan 30 13:52:05.457681 containerd[1799]: time="2025-01-30T13:52:05.457674249Z" level=info msg="TearDown network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" successfully" Jan 30 13:52:05.457720 containerd[1799]: time="2025-01-30T13:52:05.457681439Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" returns successfully" Jan 30 13:52:05.457783 containerd[1799]: time="2025-01-30T13:52:05.457774160Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:05.457817 containerd[1799]: time="2025-01-30T13:52:05.457810391Z" level=info msg="TearDown network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" successfully" Jan 30 13:52:05.457848 containerd[1799]: time="2025-01-30T13:52:05.457819808Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" returns successfully" Jan 30 13:52:05.457914 containerd[1799]: time="2025-01-30T13:52:05.457905434Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:05.457943 kubelet[3061]: I0130 13:52:05.457918 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5" Jan 30 13:52:05.457983 containerd[1799]: time="2025-01-30T13:52:05.457940818Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:52:05.457983 containerd[1799]: time="2025-01-30T13:52:05.457950235Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:52:05.458135 containerd[1799]: time="2025-01-30T13:52:05.458124126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:7,}" Jan 30 13:52:05.458181 containerd[1799]: time="2025-01-30T13:52:05.458170243Z" level=info msg="StopPodSandbox for \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\"" Jan 30 13:52:05.458259 containerd[1799]: time="2025-01-30T13:52:05.458249980Z" level=info msg="Ensure that sandbox 696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5 in task-service has been cleanup successfully" Jan 30 13:52:05.458341 containerd[1799]: time="2025-01-30T13:52:05.458332141Z" level=info msg="TearDown network for sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\" successfully" Jan 30 13:52:05.458341 containerd[1799]: time="2025-01-30T13:52:05.458340264Z" level=info msg="StopPodSandbox for \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\" returns successfully" Jan 30 13:52:05.458442 containerd[1799]: time="2025-01-30T13:52:05.458430968Z" level=info msg="StopPodSandbox for 
\"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\"" Jan 30 13:52:05.458489 containerd[1799]: time="2025-01-30T13:52:05.458478749Z" level=info msg="TearDown network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\" successfully" Jan 30 13:52:05.458522 containerd[1799]: time="2025-01-30T13:52:05.458489408Z" level=info msg="StopPodSandbox for \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\" returns successfully" Jan 30 13:52:05.458635 containerd[1799]: time="2025-01-30T13:52:05.458622890Z" level=info msg="StopPodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\"" Jan 30 13:52:05.458640 systemd[1]: run-netns-cni\x2d7d6b9dc4\x2dd26a\x2d79e7\x2daf1b\x2d7b62566bea0e.mount: Deactivated successfully. Jan 30 13:52:05.458716 containerd[1799]: time="2025-01-30T13:52:05.458669476Z" level=info msg="TearDown network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" successfully" Jan 30 13:52:05.458716 containerd[1799]: time="2025-01-30T13:52:05.458679232Z" level=info msg="StopPodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" returns successfully" Jan 30 13:52:05.458798 containerd[1799]: time="2025-01-30T13:52:05.458788698Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\"" Jan 30 13:52:05.458839 containerd[1799]: time="2025-01-30T13:52:05.458830573Z" level=info msg="TearDown network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" successfully" Jan 30 13:52:05.458869 containerd[1799]: time="2025-01-30T13:52:05.458838710Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" returns successfully" Jan 30 13:52:05.458983 containerd[1799]: time="2025-01-30T13:52:05.458970266Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" Jan 30 13:52:05.459026 containerd[1799]: time="2025-01-30T13:52:05.459017189Z" level=info msg="TearDown network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" successfully" Jan 30 13:52:05.459054 containerd[1799]: time="2025-01-30T13:52:05.459026322Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" returns successfully" Jan 30 13:52:05.459186 containerd[1799]: time="2025-01-30T13:52:05.459170458Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:05.459238 containerd[1799]: time="2025-01-30T13:52:05.459229780Z" level=info msg="TearDown network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" successfully" Jan 30 13:52:05.459269 containerd[1799]: time="2025-01-30T13:52:05.459237690Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" returns successfully" Jan 30 13:52:05.459350 containerd[1799]: time="2025-01-30T13:52:05.459339990Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:05.459384 containerd[1799]: time="2025-01-30T13:52:05.459374503Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:52:05.459384 containerd[1799]: time="2025-01-30T13:52:05.459381279Z" level=info msg="StopPodSandbox for 
\"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:52:05.459544 containerd[1799]: time="2025-01-30T13:52:05.459531906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:7,}" Jan 30 13:52:05.459951 kubelet[3061]: I0130 13:52:05.459936 3061 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112" Jan 30 13:52:05.460180 containerd[1799]: time="2025-01-30T13:52:05.460167889Z" level=info msg="StopPodSandbox for \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\"" Jan 30 13:52:05.460283 containerd[1799]: time="2025-01-30T13:52:05.460273752Z" level=info msg="Ensure that sandbox 7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112 in task-service has been cleanup successfully" Jan 30 13:52:05.460371 containerd[1799]: time="2025-01-30T13:52:05.460359893Z" level=info msg="TearDown network for sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\" successfully" Jan 30 13:52:05.460371 containerd[1799]: time="2025-01-30T13:52:05.460368984Z" level=info msg="StopPodSandbox for \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\" returns successfully" Jan 30 13:52:05.460520 containerd[1799]: time="2025-01-30T13:52:05.460506207Z" level=info msg="StopPodSandbox for \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\"" Jan 30 13:52:05.460569 containerd[1799]: time="2025-01-30T13:52:05.460560224Z" level=info msg="TearDown network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\" successfully" Jan 30 13:52:05.460594 containerd[1799]: time="2025-01-30T13:52:05.460570666Z" level=info msg="StopPodSandbox for \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\" returns successfully" Jan 30 13:52:05.460695 containerd[1799]: time="2025-01-30T13:52:05.460684521Z" level=info msg="StopPodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\"" Jan 30 13:52:05.460754 containerd[1799]: time="2025-01-30T13:52:05.460744251Z" level=info msg="TearDown network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" successfully" Jan 30 13:52:05.460791 containerd[1799]: time="2025-01-30T13:52:05.460752662Z" level=info msg="StopPodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" returns successfully" Jan 30 13:52:05.460881 containerd[1799]: time="2025-01-30T13:52:05.460868882Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\"" Jan 30 13:52:05.460931 containerd[1799]: time="2025-01-30T13:52:05.460921970Z" level=info msg="TearDown network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" successfully" Jan 30 13:52:05.460953 containerd[1799]: time="2025-01-30T13:52:05.460932124Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" returns successfully" Jan 30 13:52:05.460951 systemd[1]: run-netns-cni\x2d850cb8a9\x2d951a\x2d4db1\x2dcdfa\x2db15e0672a5b0.mount: Deactivated successfully. 
Jan 30 13:52:05.461033 containerd[1799]: time="2025-01-30T13:52:05.461024465Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" Jan 30 13:52:05.461070 containerd[1799]: time="2025-01-30T13:52:05.461063957Z" level=info msg="TearDown network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" successfully" Jan 30 13:52:05.461089 containerd[1799]: time="2025-01-30T13:52:05.461070048Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" returns successfully" Jan 30 13:52:05.461194 containerd[1799]: time="2025-01-30T13:52:05.461182598Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:05.461229 containerd[1799]: time="2025-01-30T13:52:05.461222720Z" level=info msg="TearDown network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" successfully" Jan 30 13:52:05.461251 containerd[1799]: time="2025-01-30T13:52:05.461229306Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" returns successfully" Jan 30 13:52:05.461378 containerd[1799]: time="2025-01-30T13:52:05.461366011Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:05.461433 containerd[1799]: time="2025-01-30T13:52:05.461423198Z" level=info msg="TearDown network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 30 13:52:05.461452 containerd[1799]: time="2025-01-30T13:52:05.461434689Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:52:05.461650 containerd[1799]: time="2025-01-30T13:52:05.461639841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:7,}" Jan 30 13:52:05.468837 kubelet[3061]: I0130 13:52:05.468785 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gn94v" podStartSLOduration=1.853253201 podStartE2EDuration="14.468768009s" podCreationTimestamp="2025-01-30 13:51:51 +0000 UTC" firstStartedPulling="2025-01-30 13:51:52.021798395 +0000 UTC m=+12.722002306" lastFinishedPulling="2025-01-30 13:52:04.637313206 +0000 UTC m=+25.337517114" observedRunningTime="2025-01-30 13:52:05.468431111 +0000 UTC m=+26.168635022" watchObservedRunningTime="2025-01-30 13:52:05.468768009 +0000 UTC m=+26.168971913" Jan 30 13:52:05.569657 systemd-networkd[1712]: cali441b1c358b3: Link UP Jan 30 13:52:05.570258 systemd-networkd[1712]: cali441b1c358b3: Gained carrier Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.479 [INFO][6062] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.488 [INFO][6062] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0 calico-apiserver-68c748b76b- calico-apiserver e79b48d4-f379-4135-a6bf-0a0ccaeb5c67 646 0 2025-01-30 13:51:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68c748b76b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-f55746354a calico-apiserver-68c748b76b-mcv7b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali441b1c358b3 [] []}} ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-mcv7b" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.488 [INFO][6062] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-mcv7b" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.508 [INFO][6187] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" HandleID="k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.515 [INFO][6187] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" HandleID="k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f52f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-f55746354a", "pod":"calico-apiserver-68c748b76b-mcv7b", "timestamp":"2025-01-30 13:52:05.508683001 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f55746354a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.515 [INFO][6187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.515 [INFO][6187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.515 [INFO][6187] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f55746354a' Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.516 [INFO][6187] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.518 [INFO][6187] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.520 [INFO][6187] ipam/ipam.go 489: Trying affinity for 192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.521 [INFO][6187] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.522 [INFO][6187] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.522 [INFO][6187] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.523 [INFO][6187] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410 Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.541 [INFO][6187] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.547 [INFO][6187] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.65/26] block=192.168.62.64/26 handle="k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.547 [INFO][6187] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.65/26] handle="k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.586778 containerd[1799]: 2025-01-30 13:52:05.547 [INFO][6187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
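The ipam/ipam.go entries above show Calico's allocation pattern for this node: take the host-wide IPAM lock, confirm the host's affinity for the 192.168.62.64/26 block, claim the next free address (192.168.62.65 here), then release the lock. The following is a small Go sketch of the "assign from block" step only — a simplified illustration, not Calico's IPAM code; the CIDR comes from the log, the in-use set is made up.

```go
package main

import (
	"fmt"
	"net"
)

// nextFreeIP walks a block CIDR in order and returns the first address not
// already in use, mimicking the "Attempting to assign 1 addresses from block"
// step in the log. Real Calico IPAM also records handles and affinities in the
// datastore; this keeps only the core idea.
func nextFreeIP(block string, inUse map[string]bool) (net.IP, error) {
	ip, ipnet, err := net.ParseCIDR(block)
	if err != nil {
		return nil, err
	}
	for cur := ip.Mask(ipnet.Mask); ipnet.Contains(cur); cur = inc(cur) {
		if !inUse[cur.String()] {
			return cur, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", block)
}

// inc returns the next IPv4 address after ip.
func inc(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	// Block taken from the log; pretend the block base .64 is already reserved.
	used := map[string]bool{"192.168.62.64": true}
	ip, err := nextFreeIP("192.168.62.64/26", used)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.62.65, matching the first address claimed above
}
```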
Jan 30 13:52:05.588188 containerd[1799]: 2025-01-30 13:52:05.547 [INFO][6187] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.65/26] IPv6=[] ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" HandleID="k8s-pod-network.e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" Jan 30 13:52:05.588188 containerd[1799]: 2025-01-30 13:52:05.554 [INFO][6062] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-mcv7b" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0", GenerateName:"calico-apiserver-68c748b76b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e79b48d4-f379-4135-a6bf-0a0ccaeb5c67", ResourceVersion:"646", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c748b76b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"", Pod:"calico-apiserver-68c748b76b-mcv7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali441b1c358b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.588188 containerd[1799]: 2025-01-30 13:52:05.554 [INFO][6062] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.65/32] ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-mcv7b" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" Jan 30 13:52:05.588188 containerd[1799]: 2025-01-30 13:52:05.554 [INFO][6062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali441b1c358b3 ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-mcv7b" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" Jan 30 13:52:05.588188 containerd[1799]: 2025-01-30 13:52:05.570 [INFO][6062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-mcv7b" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" Jan 30 13:52:05.588663 containerd[1799]: 2025-01-30 13:52:05.570 [INFO][6062] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-mcv7b" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0", GenerateName:"calico-apiserver-68c748b76b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e79b48d4-f379-4135-a6bf-0a0ccaeb5c67", ResourceVersion:"646", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c748b76b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410", Pod:"calico-apiserver-68c748b76b-mcv7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali441b1c358b3", MAC:"4a:d8:00:1b:23:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.588663 containerd[1799]: 2025-01-30 13:52:05.584 [INFO][6062] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-mcv7b" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0" Jan 30 13:52:05.601456 containerd[1799]: time="2025-01-30T13:52:05.601416065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:05.601456 containerd[1799]: time="2025-01-30T13:52:05.601445047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:05.601456 containerd[1799]: time="2025-01-30T13:52:05.601451767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.601575 containerd[1799]: time="2025-01-30T13:52:05.601534586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.622438 systemd[1]: Started cri-containerd-e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410.scope - libcontainer container e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410. 
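The WorkloadEndpoint names in these entries (for example ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0) are built from node name, orchestrator ("k8s"), pod name and interface, with literal dashes doubled so that single dashes can serve as field separators. A short Go sketch of that naming convention as it appears here — an inference from the log output, not code lifted from libcalico-go:

```go
package main

import (
	"fmt"
	"strings"
)

// workloadEndpointName reproduces the pattern visible in the log: each field
// has its own dashes doubled ("-" -> "--"), then the fields are joined with
// single dashes, so the separators stay unambiguous.
func workloadEndpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), "k8s", esc(pod), esc(iface)}, "-")
}

func main() {
	// Node, pod and interface names taken from the log entries above.
	fmt.Println(workloadEndpointName(
		"ci-4186.1.0-a-f55746354a",
		"calico-apiserver-68c748b76b-mcv7b",
		"eth0",
	))
	// Output: ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--mcv7b-eth0
}
```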
Jan 30 13:52:05.632370 systemd-networkd[1712]: cali55c2db0343f: Link UP Jan 30 13:52:05.632489 systemd-networkd[1712]: cali55c2db0343f: Gained carrier Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.481 [INFO][6074] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.488 [INFO][6074] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0 calico-kube-controllers-5984859c66- calico-system 3a1bfeac-92e9-4eac-a174-cabc6e4921c6 649 0 2025-01-30 13:51:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5984859c66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.1.0-a-f55746354a calico-kube-controllers-5984859c66-hc7cz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali55c2db0343f [] []}} ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Namespace="calico-system" Pod="calico-kube-controllers-5984859c66-hc7cz" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.488 [INFO][6074] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Namespace="calico-system" Pod="calico-kube-controllers-5984859c66-hc7cz" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.508 [INFO][6189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" HandleID="k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.515 [INFO][6189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" HandleID="k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-f55746354a", "pod":"calico-kube-controllers-5984859c66-hc7cz", "timestamp":"2025-01-30 13:52:05.508377708 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f55746354a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.515 [INFO][6189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.547 [INFO][6189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.547 [INFO][6189] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f55746354a' Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.617 [INFO][6189] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.620 [INFO][6189] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.623 [INFO][6189] ipam/ipam.go 489: Trying affinity for 192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.624 [INFO][6189] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.625 [INFO][6189] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.625 [INFO][6189] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.626 [INFO][6189] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.628 [INFO][6189] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.630 [INFO][6189] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.66/26] block=192.168.62.64/26 handle="k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.630 [INFO][6189] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.66/26] handle="k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.637798 containerd[1799]: 2025-01-30 13:52:05.630 [INFO][6189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
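As each sandbox is wired in, systemd-networkd reports the new host-side veth (cali441b1c358b3, cali55c2db0343f) as Link UP / Gained carrier. The following is a small diagnostic sketch in Go, standard library only, that lists such cali* interfaces and whether they are up — an illustration for inspecting a node like this one, not part of any of the components logging above.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Calico names its host-side veth ends cali<hash>, as seen in the log.
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		state := "down"
		if ifc.Flags&net.FlagUp != 0 {
			state = "up"
		}
		fmt.Printf("%-16s %-4s MTU=%d MAC=%s\n", ifc.Name, state, ifc.MTU, ifc.HardwareAddr)
	}
}
```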
Jan 30 13:52:05.638190 containerd[1799]: 2025-01-30 13:52:05.630 [INFO][6189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.66/26] IPv6=[] ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" HandleID="k8s-pod-network.9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" Jan 30 13:52:05.638190 containerd[1799]: 2025-01-30 13:52:05.631 [INFO][6074] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Namespace="calico-system" Pod="calico-kube-controllers-5984859c66-hc7cz" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0", GenerateName:"calico-kube-controllers-5984859c66-", Namespace:"calico-system", SelfLink:"", UID:"3a1bfeac-92e9-4eac-a174-cabc6e4921c6", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5984859c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"", Pod:"calico-kube-controllers-5984859c66-hc7cz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55c2db0343f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.638190 containerd[1799]: 2025-01-30 13:52:05.631 [INFO][6074] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.66/32] ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Namespace="calico-system" Pod="calico-kube-controllers-5984859c66-hc7cz" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" Jan 30 13:52:05.638190 containerd[1799]: 2025-01-30 13:52:05.631 [INFO][6074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55c2db0343f ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Namespace="calico-system" Pod="calico-kube-controllers-5984859c66-hc7cz" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" Jan 30 13:52:05.638190 containerd[1799]: 2025-01-30 13:52:05.632 [INFO][6074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Namespace="calico-system" Pod="calico-kube-controllers-5984859c66-hc7cz" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" Jan 30 13:52:05.638298 
containerd[1799]: 2025-01-30 13:52:05.632 [INFO][6074] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Namespace="calico-system" Pod="calico-kube-controllers-5984859c66-hc7cz" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0", GenerateName:"calico-kube-controllers-5984859c66-", Namespace:"calico-system", SelfLink:"", UID:"3a1bfeac-92e9-4eac-a174-cabc6e4921c6", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5984859c66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb", Pod:"calico-kube-controllers-5984859c66-hc7cz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55c2db0343f", MAC:"c6:06:9c:cc:a8:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.638298 containerd[1799]: 2025-01-30 13:52:05.636 [INFO][6074] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb" Namespace="calico-system" Pod="calico-kube-controllers-5984859c66-hc7cz" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--kube--controllers--5984859c66--hc7cz-eth0" Jan 30 13:52:05.647188 containerd[1799]: time="2025-01-30T13:52:05.647151152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-mcv7b,Uid:e79b48d4-f379-4135-a6bf-0a0ccaeb5c67,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410\"" Jan 30 13:52:05.647692 containerd[1799]: time="2025-01-30T13:52:05.647651467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:05.648000 containerd[1799]: time="2025-01-30T13:52:05.647989516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:52:05.648025 containerd[1799]: time="2025-01-30T13:52:05.647959032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:05.648025 containerd[1799]: time="2025-01-30T13:52:05.647973783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.648083 containerd[1799]: time="2025-01-30T13:52:05.648021077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.665602 systemd[1]: Started cri-containerd-9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb.scope - libcontainer container 9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb. Jan 30 13:52:05.687392 containerd[1799]: time="2025-01-30T13:52:05.687304961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5984859c66-hc7cz,Uid:3a1bfeac-92e9-4eac-a174-cabc6e4921c6,Namespace:calico-system,Attempt:7,} returns sandbox id \"9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb\"" Jan 30 13:52:05.741994 systemd-networkd[1712]: caliad4503c6bfc: Link UP Jan 30 13:52:05.742224 systemd-networkd[1712]: caliad4503c6bfc: Gained carrier Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.487 [INFO][6113] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.493 [INFO][6113] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0 coredns-6f6b679f8f- kube-system 04991ed4-bcd5-4f9b-b027-ba79cc5149a0 648 0 2025-01-30 13:51:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-f55746354a coredns-6f6b679f8f-c4k8v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad4503c6bfc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Namespace="kube-system" Pod="coredns-6f6b679f8f-c4k8v" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.493 [INFO][6113] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Namespace="kube-system" Pod="coredns-6f6b679f8f-c4k8v" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.512 [INFO][6205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" HandleID="k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Workload="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.517 [INFO][6205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" HandleID="k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Workload="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000299970), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-f55746354a", "pod":"coredns-6f6b679f8f-c4k8v", "timestamp":"2025-01-30 13:52:05.512959 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f55746354a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.517 [INFO][6205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.630 [INFO][6205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.630 [INFO][6205] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f55746354a' Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.718 [INFO][6205] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.721 [INFO][6205] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.725 [INFO][6205] ipam/ipam.go 489: Trying affinity for 192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.726 [INFO][6205] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.729 [INFO][6205] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.729 [INFO][6205] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.731 [INFO][6205] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11 Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.734 [INFO][6205] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.739 [INFO][6205] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.67/26] block=192.168.62.64/26 handle="k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.739 [INFO][6205] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.67/26] handle="k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.739 [INFO][6205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
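The IPAM trace above (acquire the host-wide lock, confirm the block affinity for 192.168.62.64/26, claim one address, write the block back, release the lock) is Calico's block-based allocation at work. The Go sketch below is a deliberately simplified illustration of the "assign one address from the block" step, not Calico's implementation: the block type and claimFromBlock name are hypothetical, the real block also tracks affinities, attributes and handles in the datastore, and the pre-filled .64/.65 entries merely stand in for addresses assumed to have been taken earlier in the boot.

package main

import (
	"fmt"
	"net"
)

// block models a /26 IPAM block such as 192.168.62.64/26 from the trace above,
// keeping only an allocation map keyed by IP (value = handle ID).
type block struct {
	cidr      *net.IPNet
	allocated map[string]string
}

// claimFromBlock assigns the lowest free address in the block to the given
// handle, mirroring the "Attempting to assign 1 addresses from block" step.
func (b *block) claimFromBlock(handle string) (net.IP, error) {
	base := b.cidr.IP.Mask(b.cidr.Mask)
	ones, bits := b.cidr.Mask.Size()
	for i := 0; i < 1<<(bits-ones); i++ {
		candidate := make(net.IP, len(base))
		copy(candidate, base)
		candidate[len(candidate)-1] += byte(i)
		if _, taken := b.allocated[candidate.String()]; !taken {
			b.allocated[candidate.String()] = handle
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.62.64/26")
	b := &block{cidr: cidr, allocated: map[string]string{
		// .64 and .65 stand in for addresses assumed used earlier in the boot;
		// .66 went to the kube-controllers pod in the preceding trace.
		"192.168.62.64": "earlier", "192.168.62.65": "earlier",
		"192.168.62.66": "k8s-pod-network.9b3942fa (shortened)",
	}}
	ip, _ := b.claimFromBlock("k8s-pod-network.45668acc (shortened)")
	fmt.Println(ip) // 192.168.62.67, matching the address claimed in the log above
}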
Jan 30 13:52:05.751538 containerd[1799]: 2025-01-30 13:52:05.739 [INFO][6205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.67/26] IPv6=[] ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" HandleID="k8s-pod-network.45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Workload="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" Jan 30 13:52:05.752687 containerd[1799]: 2025-01-30 13:52:05.740 [INFO][6113] cni-plugin/k8s.go 386: Populated endpoint ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Namespace="kube-system" Pod="coredns-6f6b679f8f-c4k8v" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"04991ed4-bcd5-4f9b-b027-ba79cc5149a0", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"", Pod:"coredns-6f6b679f8f-c4k8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad4503c6bfc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.752687 containerd[1799]: 2025-01-30 13:52:05.740 [INFO][6113] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.67/32] ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Namespace="kube-system" Pod="coredns-6f6b679f8f-c4k8v" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" Jan 30 13:52:05.752687 containerd[1799]: 2025-01-30 13:52:05.740 [INFO][6113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad4503c6bfc ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Namespace="kube-system" Pod="coredns-6f6b679f8f-c4k8v" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" Jan 30 13:52:05.752687 containerd[1799]: 2025-01-30 13:52:05.742 [INFO][6113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Namespace="kube-system" Pod="coredns-6f6b679f8f-c4k8v" 
WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" Jan 30 13:52:05.752965 containerd[1799]: 2025-01-30 13:52:05.742 [INFO][6113] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Namespace="kube-system" Pod="coredns-6f6b679f8f-c4k8v" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"04991ed4-bcd5-4f9b-b027-ba79cc5149a0", ResourceVersion:"648", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11", Pod:"coredns-6f6b679f8f-c4k8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad4503c6bfc", MAC:"22:a5:f4:e7:40:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.752965 containerd[1799]: 2025-01-30 13:52:05.749 [INFO][6113] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11" Namespace="kube-system" Pod="coredns-6f6b679f8f-c4k8v" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--c4k8v-eth0" Jan 30 13:52:05.764549 containerd[1799]: time="2025-01-30T13:52:05.764479498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:05.764549 containerd[1799]: time="2025-01-30T13:52:05.764508868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:05.764549 containerd[1799]: time="2025-01-30T13:52:05.764518544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.764839 containerd[1799]: time="2025-01-30T13:52:05.764792327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.789608 systemd[1]: Started cri-containerd-45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11.scope - libcontainer container 45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11. Jan 30 13:52:05.819857 containerd[1799]: time="2025-01-30T13:52:05.819829305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-c4k8v,Uid:04991ed4-bcd5-4f9b-b027-ba79cc5149a0,Namespace:kube-system,Attempt:7,} returns sandbox id \"45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11\"" Jan 30 13:52:05.821389 containerd[1799]: time="2025-01-30T13:52:05.821349898Z" level=info msg="CreateContainer within sandbox \"45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:52:05.826933 containerd[1799]: time="2025-01-30T13:52:05.826887526Z" level=info msg="CreateContainer within sandbox \"45668acc8910ee6844484db2eb01e44b1409bc4f26d509f47c776200d5826c11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"284316aa65a09222b9bb86b02d3da7ad5919ba20c3045b4f68d28b544e44293c\"" Jan 30 13:52:05.827124 containerd[1799]: time="2025-01-30T13:52:05.827111742Z" level=info msg="StartContainer for \"284316aa65a09222b9bb86b02d3da7ad5919ba20c3045b4f68d28b544e44293c\"" Jan 30 13:52:05.836218 systemd-networkd[1712]: cali71b9fe8faf8: Link UP Jan 30 13:52:05.836338 systemd-networkd[1712]: cali71b9fe8faf8: Gained carrier Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.486 [INFO][6090] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.493 [INFO][6090] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0 coredns-6f6b679f8f- kube-system 7c64afcb-0671-44d3-8136-9ee0bad3d72c 644 0 2025-01-30 13:51:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-f55746354a coredns-6f6b679f8f-7vhrq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71b9fe8faf8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vhrq" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.493 [INFO][6090] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vhrq" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.514 [INFO][6200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" HandleID="k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Workload="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.518 [INFO][6200] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" HandleID="k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Workload="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375c20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-f55746354a", "pod":"coredns-6f6b679f8f-7vhrq", "timestamp":"2025-01-30 13:52:05.514541311 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f55746354a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.518 [INFO][6200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.739 [INFO][6200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.739 [INFO][6200] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f55746354a' Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.818 [INFO][6200] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.821 [INFO][6200] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.825 [INFO][6200] ipam/ipam.go 489: Trying affinity for 192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.826 [INFO][6200] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.827 [INFO][6200] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.827 [INFO][6200] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.828 [INFO][6200] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7 Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.830 [INFO][6200] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.834 [INFO][6200] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.68/26] block=192.168.62.64/26 handle="k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.834 [INFO][6200] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.68/26] handle="k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.834 [INFO][6200] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:05.841725 containerd[1799]: 2025-01-30 13:52:05.834 [INFO][6200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.68/26] IPv6=[] ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" HandleID="k8s-pod-network.4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Workload="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" Jan 30 13:52:05.842125 containerd[1799]: 2025-01-30 13:52:05.835 [INFO][6090] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vhrq" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7c64afcb-0671-44d3-8136-9ee0bad3d72c", ResourceVersion:"644", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"", Pod:"coredns-6f6b679f8f-7vhrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71b9fe8faf8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.842125 containerd[1799]: 2025-01-30 13:52:05.835 [INFO][6090] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.68/32] ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vhrq" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" Jan 30 13:52:05.842125 containerd[1799]: 2025-01-30 13:52:05.835 [INFO][6090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71b9fe8faf8 ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vhrq" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" Jan 30 13:52:05.842125 containerd[1799]: 2025-01-30 13:52:05.836 [INFO][6090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-7vhrq" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" Jan 30 13:52:05.842219 containerd[1799]: 2025-01-30 13:52:05.836 [INFO][6090] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vhrq" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7c64afcb-0671-44d3-8136-9ee0bad3d72c", ResourceVersion:"644", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7", Pod:"coredns-6f6b679f8f-7vhrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71b9fe8faf8", MAC:"82:47:90:79:c3:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.842219 containerd[1799]: 2025-01-30 13:52:05.841 [INFO][6090] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vhrq" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-coredns--6f6b679f8f--7vhrq-eth0" Jan 30 13:52:05.847550 systemd[1]: Started cri-containerd-284316aa65a09222b9bb86b02d3da7ad5919ba20c3045b4f68d28b544e44293c.scope - libcontainer container 284316aa65a09222b9bb86b02d3da7ad5919ba20c3045b4f68d28b544e44293c. Jan 30 13:52:05.851812 containerd[1799]: time="2025-01-30T13:52:05.851729636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:05.851812 containerd[1799]: time="2025-01-30T13:52:05.851756314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:05.851812 containerd[1799]: time="2025-01-30T13:52:05.851765990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.851812 containerd[1799]: time="2025-01-30T13:52:05.851809208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.871717 systemd[1]: Started cri-containerd-4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7.scope - libcontainer container 4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7. Jan 30 13:52:05.872841 containerd[1799]: time="2025-01-30T13:52:05.872814943Z" level=info msg="StartContainer for \"284316aa65a09222b9bb86b02d3da7ad5919ba20c3045b4f68d28b544e44293c\" returns successfully" Jan 30 13:52:05.895062 containerd[1799]: time="2025-01-30T13:52:05.895012251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vhrq,Uid:7c64afcb-0671-44d3-8136-9ee0bad3d72c,Namespace:kube-system,Attempt:7,} returns sandbox id \"4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7\"" Jan 30 13:52:05.896937 containerd[1799]: time="2025-01-30T13:52:05.896899145Z" level=info msg="CreateContainer within sandbox \"4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:52:05.908010 containerd[1799]: time="2025-01-30T13:52:05.907937668Z" level=info msg="CreateContainer within sandbox \"4665162a62195b18c9671b94d244d76c5169831a6c5b734ba869f3831a69fdd7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed038fd80fe46d842e38e07800df3ab63aa5516cc4ebef775c57cd67d8707b4b\"" Jan 30 13:52:05.908577 containerd[1799]: time="2025-01-30T13:52:05.908544782Z" level=info msg="StartContainer for \"ed038fd80fe46d842e38e07800df3ab63aa5516cc4ebef775c57cd67d8707b4b\"" Jan 30 13:52:05.945199 systemd-networkd[1712]: califbae370f337: Link UP Jan 30 13:52:05.945300 systemd-networkd[1712]: califbae370f337: Gained carrier Jan 30 13:52:05.948474 systemd[1]: Started cri-containerd-ed038fd80fe46d842e38e07800df3ab63aa5516cc4ebef775c57cd67d8707b4b.scope - libcontainer container ed038fd80fe46d842e38e07800df3ab63aa5516cc4ebef775c57cd67d8707b4b. 
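Once the CNI work above finishes, each pod follows the same CRI sequence visible in these entries: RunPodSandbox returns a sandbox ID, CreateContainer runs inside that sandbox and returns a container ID, StartContainer launches it, and systemd reports a transient cri-containerd-<id>.scope unit for the new libcontainer container. The sketch below only models that ordering; the criRuntime interface and fakeCRI type are made-up stand-ins for illustration, not containerd's client API.

package main

import "fmt"

// criRuntime captures the three CRI calls visible in the trace above.
type criRuntime interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeCRI is a toy implementation that just hands out sequential IDs.
type fakeCRI struct{ n int }

func (f *fakeCRI) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeCRI) CreateContainer(sandboxID, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeCRI) StartContainer(containerID string) error {
	// The container lands in a transient scope unit named
	// cri-containerd-<containerID>.scope, which is why systemd logs
	// "Started cri-containerd-....scope" right after each start above.
	fmt.Printf("systemd: Started cri-containerd-%s.scope\n", containerID)
	return nil
}

func startPod(r criRuntime, pod, container string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return err
	}
	cid, err := r.CreateContainer(sb, container)
	if err != nil {
		return err
	}
	return r.StartContainer(cid)
}

func main() {
	_ = startPod(&fakeCRI{}, "coredns-6f6b679f8f-c4k8v", "coredns")
}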
Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.486 [INFO][6101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.492 [INFO][6101] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0 csi-node-driver- calico-system 08ec3d9c-69d5-48e2-969e-46a8611fadde 579 0 2025-01-30 13:51:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.1.0-a-f55746354a csi-node-driver-gpjs7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califbae370f337 [] []}} ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Namespace="calico-system" Pod="csi-node-driver-gpjs7" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.492 [INFO][6101] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Namespace="calico-system" Pod="csi-node-driver-gpjs7" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.514 [INFO][6201] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" HandleID="k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Workload="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.518 [INFO][6201] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" HandleID="k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Workload="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-f55746354a", "pod":"csi-node-driver-gpjs7", "timestamp":"2025-01-30 13:52:05.514530398 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f55746354a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.518 [INFO][6201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.834 [INFO][6201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.834 [INFO][6201] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f55746354a' Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.921 [INFO][6201] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.928 [INFO][6201] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.934 [INFO][6201] ipam/ipam.go 489: Trying affinity for 192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.935 [INFO][6201] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.937 [INFO][6201] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.937 [INFO][6201] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.937 [INFO][6201] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.940 [INFO][6201] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.943 [INFO][6201] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.69/26] block=192.168.62.64/26 handle="k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.943 [INFO][6201] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.69/26] handle="k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:05.951150 containerd[1799]: 2025-01-30 13:52:05.943 [INFO][6201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
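The staggered timestamps in these IPAM traces come from the host-wide IPAM lock: the concurrent CNI ADD requests all log "About to acquire host-wide IPAM lock" within a few milliseconds of each other, but each assignment completes only after the previous plugin instance releases the lock, which is why the addresses come out one at a time (.66, .67, .68, .69, ...). Below is a minimal sketch of that serialization; a process-local sync.Mutex stands in for Calico's cross-process lock, and the hostIPAM type and fixed starting address are assumptions for illustration.

package main

import (
	"fmt"
	"sync"
)

// hostIPAM serializes assignment the way the "host-wide IPAM lock" messages
// above do: many requests arrive at once, one at a time may touch the block.
type hostIPAM struct {
	mu       sync.Mutex
	nextFree int
}

func (h *hostIPAM) assign(pod string) string {
	h.mu.Lock()         // "About to acquire ..." then "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.62.%d/26", h.nextFree)
	h.nextFree++
	return ip
}

func main() {
	ipam := &hostIPAM{nextFree: 66}
	pods := []string{
		"calico-kube-controllers-5984859c66-hc7cz",
		"coredns-6f6b679f8f-c4k8v",
		"coredns-6f6b679f8f-7vhrq",
		"csi-node-driver-gpjs7",
		"calico-apiserver-68c748b76b-s2glf",
	}
	var wg sync.WaitGroup
	for _, p := range pods {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			// Which pod gets which address depends on lock acquisition order,
			// just as it does in the log above.
			fmt.Println(p, "->", ipam.assign(p))
		}(p)
	}
	wg.Wait()
}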
Jan 30 13:52:05.951561 containerd[1799]: 2025-01-30 13:52:05.943 [INFO][6201] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.69/26] IPv6=[] ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" HandleID="k8s-pod-network.addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Workload="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" Jan 30 13:52:05.951561 containerd[1799]: 2025-01-30 13:52:05.944 [INFO][6101] cni-plugin/k8s.go 386: Populated endpoint ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Namespace="calico-system" Pod="csi-node-driver-gpjs7" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08ec3d9c-69d5-48e2-969e-46a8611fadde", ResourceVersion:"579", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"", Pod:"csi-node-driver-gpjs7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbae370f337", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.951561 containerd[1799]: 2025-01-30 13:52:05.944 [INFO][6101] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.69/32] ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Namespace="calico-system" Pod="csi-node-driver-gpjs7" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" Jan 30 13:52:05.951561 containerd[1799]: 2025-01-30 13:52:05.944 [INFO][6101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbae370f337 ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Namespace="calico-system" Pod="csi-node-driver-gpjs7" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" Jan 30 13:52:05.951561 containerd[1799]: 2025-01-30 13:52:05.945 [INFO][6101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Namespace="calico-system" Pod="csi-node-driver-gpjs7" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" Jan 30 13:52:05.951561 containerd[1799]: 2025-01-30 13:52:05.945 [INFO][6101] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Namespace="calico-system" Pod="csi-node-driver-gpjs7" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08ec3d9c-69d5-48e2-969e-46a8611fadde", ResourceVersion:"579", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c", Pod:"csi-node-driver-gpjs7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbae370f337", MAC:"36:a0:af:94:f7:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:05.951707 containerd[1799]: 2025-01-30 13:52:05.950 [INFO][6101] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c" Namespace="calico-system" Pod="csi-node-driver-gpjs7" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-csi--node--driver--gpjs7-eth0" Jan 30 13:52:05.961051 containerd[1799]: time="2025-01-30T13:52:05.961023274Z" level=info msg="StartContainer for \"ed038fd80fe46d842e38e07800df3ab63aa5516cc4ebef775c57cd67d8707b4b\" returns successfully" Jan 30 13:52:05.961659 containerd[1799]: time="2025-01-30T13:52:05.961367217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:05.961699 containerd[1799]: time="2025-01-30T13:52:05.961647971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:05.961699 containerd[1799]: time="2025-01-30T13:52:05.961661402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.961747 containerd[1799]: time="2025-01-30T13:52:05.961717698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:05.988509 systemd[1]: Started cri-containerd-addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c.scope - libcontainer container addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c. 
Jan 30 13:52:06.009746 containerd[1799]: time="2025-01-30T13:52:06.009725991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gpjs7,Uid:08ec3d9c-69d5-48e2-969e-46a8611fadde,Namespace:calico-system,Attempt:7,} returns sandbox id \"addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c\"" Jan 30 13:52:06.042973 systemd-networkd[1712]: cali083431b74e2: Link UP Jan 30 13:52:06.043082 systemd-networkd[1712]: cali083431b74e2: Gained carrier Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:05.488 [INFO][6116] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:05.494 [INFO][6116] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0 calico-apiserver-68c748b76b- calico-apiserver 520ef51f-94d3-44ca-8df4-36fb6501930e 647 0 2025-01-30 13:51:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68c748b76b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-f55746354a calico-apiserver-68c748b76b-s2glf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali083431b74e2 [] []}} ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-s2glf" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:05.494 [INFO][6116] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-s2glf" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:05.514 [INFO][6225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" HandleID="k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:05.518 [INFO][6225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" HandleID="k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000506b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-f55746354a", "pod":"calico-apiserver-68c748b76b-s2glf", "timestamp":"2025-01-30 13:52:05.514549123 +0000 UTC"}, Hostname:"ci-4186.1.0-a-f55746354a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:05.518 [INFO][6225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:05.943 [INFO][6225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:05.943 [INFO][6225] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-f55746354a' Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.019 [INFO][6225] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.027 [INFO][6225] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.032 [INFO][6225] ipam/ipam.go 489: Trying affinity for 192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.033 [INFO][6225] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.034 [INFO][6225] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.034 [INFO][6225] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.035 [INFO][6225] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9 Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.037 [INFO][6225] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.041 [INFO][6225] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.70/26] block=192.168.62.64/26 handle="k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.041 [INFO][6225] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.70/26] handle="k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" host="ci-4186.1.0-a-f55746354a" Jan 30 13:52:06.048745 containerd[1799]: 2025-01-30 13:52:06.041 [INFO][6225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:06.049224 containerd[1799]: 2025-01-30 13:52:06.041 [INFO][6225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.70/26] IPv6=[] ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" HandleID="k8s-pod-network.ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Workload="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" Jan 30 13:52:06.049224 containerd[1799]: 2025-01-30 13:52:06.042 [INFO][6116] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-s2glf" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0", GenerateName:"calico-apiserver-68c748b76b-", Namespace:"calico-apiserver", SelfLink:"", UID:"520ef51f-94d3-44ca-8df4-36fb6501930e", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c748b76b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"", Pod:"calico-apiserver-68c748b76b-s2glf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali083431b74e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:06.049224 containerd[1799]: 2025-01-30 13:52:06.042 [INFO][6116] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.70/32] ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-s2glf" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" Jan 30 13:52:06.049224 containerd[1799]: 2025-01-30 13:52:06.042 [INFO][6116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali083431b74e2 ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-s2glf" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" Jan 30 13:52:06.049224 containerd[1799]: 2025-01-30 13:52:06.043 [INFO][6116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-s2glf" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" Jan 30 13:52:06.049355 containerd[1799]: 2025-01-30 13:52:06.043 [INFO][6116] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-s2glf" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0", GenerateName:"calico-apiserver-68c748b76b-", Namespace:"calico-apiserver", SelfLink:"", UID:"520ef51f-94d3-44ca-8df4-36fb6501930e", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c748b76b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-f55746354a", ContainerID:"ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9", Pod:"calico-apiserver-68c748b76b-s2glf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali083431b74e2", MAC:"2e:95:a1:db:f8:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:06.049355 containerd[1799]: 2025-01-30 13:52:06.047 [INFO][6116] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9" Namespace="calico-apiserver" Pod="calico-apiserver-68c748b76b-s2glf" WorkloadEndpoint="ci--4186.1.0--a--f55746354a-k8s-calico--apiserver--68c748b76b--s2glf-eth0" Jan 30 13:52:06.059044 containerd[1799]: time="2025-01-30T13:52:06.058957517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:06.059191 containerd[1799]: time="2025-01-30T13:52:06.059178088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:06.059211 containerd[1799]: time="2025-01-30T13:52:06.059189231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:06.059241 containerd[1799]: time="2025-01-30T13:52:06.059231977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:06.087613 systemd[1]: Started cri-containerd-ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9.scope - libcontainer container ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9. 
Jan 30 13:52:06.120817 containerd[1799]: time="2025-01-30T13:52:06.120790518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c748b76b-s2glf,Uid:520ef51f-94d3-44ca-8df4-36fb6501930e,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9\"" Jan 30 13:52:06.421182 systemd[1]: run-netns-cni\x2db46bc84b\x2da387\x2d34ce\x2d0d18\x2db4d404c36d9d.mount: Deactivated successfully. Jan 30 13:52:06.470771 kubelet[3061]: I0130 13:52:06.470751 3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:06.473051 kubelet[3061]: I0130 13:52:06.473006 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7vhrq" podStartSLOduration=21.472993389 podStartE2EDuration="21.472993389s" podCreationTimestamp="2025-01-30 13:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:06.472740899 +0000 UTC m=+27.172944816" watchObservedRunningTime="2025-01-30 13:52:06.472993389 +0000 UTC m=+27.173197297" Jan 30 13:52:06.479530 kubelet[3061]: I0130 13:52:06.479484 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-c4k8v" podStartSLOduration=21.479468864 podStartE2EDuration="21.479468864s" podCreationTimestamp="2025-01-30 13:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:06.47914786 +0000 UTC m=+27.179351779" watchObservedRunningTime="2025-01-30 13:52:06.479468864 +0000 UTC m=+27.179672774" Jan 30 13:52:06.936451 systemd-networkd[1712]: cali55c2db0343f: Gained IPv6LL Jan 30 13:52:07.064515 systemd-networkd[1712]: cali441b1c358b3: Gained IPv6LL Jan 30 13:52:07.128624 systemd-networkd[1712]: cali71b9fe8faf8: Gained IPv6LL Jan 30 13:52:07.576389 systemd-networkd[1712]: caliad4503c6bfc: Gained IPv6LL Jan 30 13:52:07.832427 systemd-networkd[1712]: califbae370f337: Gained IPv6LL Jan 30 13:52:07.859826 containerd[1799]: time="2025-01-30T13:52:07.859802675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:07.860057 containerd[1799]: time="2025-01-30T13:52:07.860011148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:52:07.860422 containerd[1799]: time="2025-01-30T13:52:07.860406892Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:07.861413 containerd[1799]: time="2025-01-30T13:52:07.861401356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:07.861862 containerd[1799]: time="2025-01-30T13:52:07.861847808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.213842131s" Jan 30 
13:52:07.861939 containerd[1799]: time="2025-01-30T13:52:07.861866268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:52:07.862354 containerd[1799]: time="2025-01-30T13:52:07.862344078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:52:07.862809 containerd[1799]: time="2025-01-30T13:52:07.862794355Z" level=info msg="CreateContainer within sandbox \"e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:52:07.866807 containerd[1799]: time="2025-01-30T13:52:07.866763112Z" level=info msg="CreateContainer within sandbox \"e8158f7b81cbb227c704a881d07eac1c0dff3a07ce7b7ac48a2d5166c936b410\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"665ea7c1b7622f92c310a7693a24bf3ea9cb5b07fe4c921982254bc91ea42caa\"" Jan 30 13:52:07.867004 containerd[1799]: time="2025-01-30T13:52:07.866965729Z" level=info msg="StartContainer for \"665ea7c1b7622f92c310a7693a24bf3ea9cb5b07fe4c921982254bc91ea42caa\"" Jan 30 13:52:07.899840 systemd[1]: Started cri-containerd-665ea7c1b7622f92c310a7693a24bf3ea9cb5b07fe4c921982254bc91ea42caa.scope - libcontainer container 665ea7c1b7622f92c310a7693a24bf3ea9cb5b07fe4c921982254bc91ea42caa. Jan 30 13:52:07.986548 containerd[1799]: time="2025-01-30T13:52:07.986514783Z" level=info msg="StartContainer for \"665ea7c1b7622f92c310a7693a24bf3ea9cb5b07fe4c921982254bc91ea42caa\" returns successfully" Jan 30 13:52:08.026458 systemd-networkd[1712]: cali083431b74e2: Gained IPv6LL Jan 30 13:52:08.486017 kubelet[3061]: I0130 13:52:08.485934 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68c748b76b-mcv7b" podStartSLOduration=15.271548273 podStartE2EDuration="17.485910334s" podCreationTimestamp="2025-01-30 13:51:51 +0000 UTC" firstStartedPulling="2025-01-30 13:52:05.647887156 +0000 UTC m=+26.348091064" lastFinishedPulling="2025-01-30 13:52:07.862249217 +0000 UTC m=+28.562453125" observedRunningTime="2025-01-30 13:52:08.485819649 +0000 UTC m=+29.186023571" watchObservedRunningTime="2025-01-30 13:52:08.485910334 +0000 UTC m=+29.186114256" Jan 30 13:52:09.479496 kubelet[3061]: I0130 13:52:09.479439 3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:10.225278 containerd[1799]: time="2025-01-30T13:52:10.225253638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:10.225579 containerd[1799]: time="2025-01-30T13:52:10.225559912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:52:10.225952 containerd[1799]: time="2025-01-30T13:52:10.225908828Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:10.226856 containerd[1799]: time="2025-01-30T13:52:10.226816096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:10.227260 containerd[1799]: time="2025-01-30T13:52:10.227223161Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.364863659s" Jan 30 13:52:10.227260 containerd[1799]: time="2025-01-30T13:52:10.227238174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:52:10.227782 containerd[1799]: time="2025-01-30T13:52:10.227773117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:52:10.230873 containerd[1799]: time="2025-01-30T13:52:10.230797725Z" level=info msg="CreateContainer within sandbox \"9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:52:10.234926 containerd[1799]: time="2025-01-30T13:52:10.234909353Z" level=info msg="CreateContainer within sandbox \"9b3942fa8b8af8a5e7539dbca9419cea18d746c38d48f741ab8915611d3f9cdb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"014332d2bc34ffafdc71c1abcd48d26929264d8796931de450bdf0e3675ffd4e\"" Jan 30 13:52:10.235101 containerd[1799]: time="2025-01-30T13:52:10.235089632Z" level=info msg="StartContainer for \"014332d2bc34ffafdc71c1abcd48d26929264d8796931de450bdf0e3675ffd4e\"" Jan 30 13:52:10.263609 systemd[1]: Started cri-containerd-014332d2bc34ffafdc71c1abcd48d26929264d8796931de450bdf0e3675ffd4e.scope - libcontainer container 014332d2bc34ffafdc71c1abcd48d26929264d8796931de450bdf0e3675ffd4e. 
Jan 30 13:52:10.290205 containerd[1799]: time="2025-01-30T13:52:10.290181057Z" level=info msg="StartContainer for \"014332d2bc34ffafdc71c1abcd48d26929264d8796931de450bdf0e3675ffd4e\" returns successfully" Jan 30 13:52:10.494153 kubelet[3061]: I0130 13:52:10.494094 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5984859c66-hc7cz" podStartSLOduration=14.954206425 podStartE2EDuration="19.494073339s" podCreationTimestamp="2025-01-30 13:51:51 +0000 UTC" firstStartedPulling="2025-01-30 13:52:05.687852112 +0000 UTC m=+26.388056019" lastFinishedPulling="2025-01-30 13:52:10.227719026 +0000 UTC m=+30.927922933" observedRunningTime="2025-01-30 13:52:10.493688029 +0000 UTC m=+31.193891958" watchObservedRunningTime="2025-01-30 13:52:10.494073339 +0000 UTC m=+31.194277260" Jan 30 13:52:11.664784 containerd[1799]: time="2025-01-30T13:52:11.664730439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:11.665001 containerd[1799]: time="2025-01-30T13:52:11.664936255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:52:11.665356 containerd[1799]: time="2025-01-30T13:52:11.665340502Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:11.666315 containerd[1799]: time="2025-01-30T13:52:11.666272774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:11.666717 containerd[1799]: time="2025-01-30T13:52:11.666675115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.4388877s" Jan 30 13:52:11.666717 containerd[1799]: time="2025-01-30T13:52:11.666692295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:52:11.667203 containerd[1799]: time="2025-01-30T13:52:11.667160164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:52:11.667788 containerd[1799]: time="2025-01-30T13:52:11.667775160Z" level=info msg="CreateContainer within sandbox \"addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:52:11.672976 containerd[1799]: time="2025-01-30T13:52:11.672958328Z" level=info msg="CreateContainer within sandbox \"addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c08e24b539f2b56afc8c793d332a1961a40be70dd5dda3ac341826ff9bdb183f\"" Jan 30 13:52:11.673201 containerd[1799]: time="2025-01-30T13:52:11.673158941Z" level=info msg="StartContainer for \"c08e24b539f2b56afc8c793d332a1961a40be70dd5dda3ac341826ff9bdb183f\"" Jan 30 13:52:11.708500 systemd[1]: Started cri-containerd-c08e24b539f2b56afc8c793d332a1961a40be70dd5dda3ac341826ff9bdb183f.scope - libcontainer container 
c08e24b539f2b56afc8c793d332a1961a40be70dd5dda3ac341826ff9bdb183f. Jan 30 13:52:11.734777 containerd[1799]: time="2025-01-30T13:52:11.734742043Z" level=info msg="StartContainer for \"c08e24b539f2b56afc8c793d332a1961a40be70dd5dda3ac341826ff9bdb183f\" returns successfully" Jan 30 13:52:12.031529 containerd[1799]: time="2025-01-30T13:52:12.031474309Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:12.031728 containerd[1799]: time="2025-01-30T13:52:12.031677173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:52:12.032885 containerd[1799]: time="2025-01-30T13:52:12.032865333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 365.687252ms" Jan 30 13:52:12.032885 containerd[1799]: time="2025-01-30T13:52:12.032882025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:52:12.033541 containerd[1799]: time="2025-01-30T13:52:12.033529591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:52:12.034036 containerd[1799]: time="2025-01-30T13:52:12.034023868Z" level=info msg="CreateContainer within sandbox \"ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:52:12.038925 containerd[1799]: time="2025-01-30T13:52:12.038883051Z" level=info msg="CreateContainer within sandbox \"ad7433c49eaf0fa90d81feca615ec4f31a3153e8c2add15cc3cd24b0bf7d1cd9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f0cf12160c859ceee28e183353d76c1fdda62e97dfec293c75dde48f4729edd3\"" Jan 30 13:52:12.039091 containerd[1799]: time="2025-01-30T13:52:12.039048238Z" level=info msg="StartContainer for \"f0cf12160c859ceee28e183353d76c1fdda62e97dfec293c75dde48f4729edd3\"" Jan 30 13:52:12.067810 systemd[1]: Started cri-containerd-f0cf12160c859ceee28e183353d76c1fdda62e97dfec293c75dde48f4729edd3.scope - libcontainer container f0cf12160c859ceee28e183353d76c1fdda62e97dfec293c75dde48f4729edd3. 
Jan 30 13:52:12.185977 containerd[1799]: time="2025-01-30T13:52:12.185924561Z" level=info msg="StartContainer for \"f0cf12160c859ceee28e183353d76c1fdda62e97dfec293c75dde48f4729edd3\" returns successfully" Jan 30 13:52:12.503509 kubelet[3061]: I0130 13:52:12.503431 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68c748b76b-s2glf" podStartSLOduration=15.591426794 podStartE2EDuration="21.50339285s" podCreationTimestamp="2025-01-30 13:51:51 +0000 UTC" firstStartedPulling="2025-01-30 13:52:06.121504106 +0000 UTC m=+26.821708018" lastFinishedPulling="2025-01-30 13:52:12.033470167 +0000 UTC m=+32.733674074" observedRunningTime="2025-01-30 13:52:12.502953521 +0000 UTC m=+33.203157430" watchObservedRunningTime="2025-01-30 13:52:12.50339285 +0000 UTC m=+33.203596755" Jan 30 13:52:13.498770 kubelet[3061]: I0130 13:52:13.498752 3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:13.538745 containerd[1799]: time="2025-01-30T13:52:13.538692013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:13.539047 containerd[1799]: time="2025-01-30T13:52:13.538868608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:52:13.539433 containerd[1799]: time="2025-01-30T13:52:13.539394207Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:13.540454 containerd[1799]: time="2025-01-30T13:52:13.540414207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:13.540916 containerd[1799]: time="2025-01-30T13:52:13.540872263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.507329065s" Jan 30 13:52:13.540916 containerd[1799]: time="2025-01-30T13:52:13.540887268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:52:13.542255 containerd[1799]: time="2025-01-30T13:52:13.542242736Z" level=info msg="CreateContainer within sandbox \"addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:52:13.547430 containerd[1799]: time="2025-01-30T13:52:13.547369073Z" level=info msg="CreateContainer within sandbox \"addae9e319f73d63113c53ab81b71383cf32fa0ac681cd939a5dedacd7e9390c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"24f87d3fb6059dca8ef2867fb3bc0b7e07558c51012782ce9ad2dc2322d1bb27\"" Jan 30 13:52:13.547677 containerd[1799]: time="2025-01-30T13:52:13.547666877Z" level=info msg="StartContainer for \"24f87d3fb6059dca8ef2867fb3bc0b7e07558c51012782ce9ad2dc2322d1bb27\"" Jan 30 13:52:13.578616 
systemd[1]: Started cri-containerd-24f87d3fb6059dca8ef2867fb3bc0b7e07558c51012782ce9ad2dc2322d1bb27.scope - libcontainer container 24f87d3fb6059dca8ef2867fb3bc0b7e07558c51012782ce9ad2dc2322d1bb27. Jan 30 13:52:13.597580 containerd[1799]: time="2025-01-30T13:52:13.597550965Z" level=info msg="StartContainer for \"24f87d3fb6059dca8ef2867fb3bc0b7e07558c51012782ce9ad2dc2322d1bb27\" returns successfully" Jan 30 13:52:13.639423 kubelet[3061]: I0130 13:52:13.639288 3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:14.270332 kernel: bpftool[7452]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:52:14.374041 kubelet[3061]: I0130 13:52:14.374001 3061 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:52:14.374041 kubelet[3061]: I0130 13:52:14.374020 3061 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:52:14.418432 systemd-networkd[1712]: vxlan.calico: Link UP Jan 30 13:52:14.418436 systemd-networkd[1712]: vxlan.calico: Gained carrier Jan 30 13:52:14.507714 kubelet[3061]: I0130 13:52:14.507655 3061 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gpjs7" podStartSLOduration=15.976578941 podStartE2EDuration="23.507644256s" podCreationTimestamp="2025-01-30 13:51:51 +0000 UTC" firstStartedPulling="2025-01-30 13:52:06.010288257 +0000 UTC m=+26.710492165" lastFinishedPulling="2025-01-30 13:52:13.541353572 +0000 UTC m=+34.241557480" observedRunningTime="2025-01-30 13:52:14.507254128 +0000 UTC m=+35.207458036" watchObservedRunningTime="2025-01-30 13:52:14.507644256 +0000 UTC m=+35.207848161" Jan 30 13:52:15.768577 systemd-networkd[1712]: vxlan.calico: Gained IPv6LL Jan 30 13:52:19.843488 kubelet[3061]: I0130 13:52:19.843403 3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:34.908894 kubelet[3061]: I0130 13:52:34.908774 3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:52:39.338575 containerd[1799]: time="2025-01-30T13:52:39.338527678Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:39.339050 containerd[1799]: time="2025-01-30T13:52:39.338586006Z" level=info msg="TearDown network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 30 13:52:39.339050 containerd[1799]: time="2025-01-30T13:52:39.338592755Z" level=info msg="StopPodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:52:39.339050 containerd[1799]: time="2025-01-30T13:52:39.338949030Z" level=info msg="RemovePodSandbox for \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:39.339050 containerd[1799]: time="2025-01-30T13:52:39.338976499Z" level=info msg="Forcibly stopping sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\"" Jan 30 13:52:39.339124 containerd[1799]: time="2025-01-30T13:52:39.339039391Z" level=info msg="TearDown network for sandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" successfully" Jan 30 13:52:39.340495 containerd[1799]: time="2025-01-30T13:52:39.340461429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.340533 containerd[1799]: time="2025-01-30T13:52:39.340497368Z" level=info msg="RemovePodSandbox \"1fbe4b6f4debccc446b9078f9b3ef0160baa7224e82635725f447682753bbd0f\" returns successfully" Jan 30 13:52:39.340794 containerd[1799]: time="2025-01-30T13:52:39.340780665Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:39.340905 containerd[1799]: time="2025-01-30T13:52:39.340875247Z" level=info msg="TearDown network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" successfully" Jan 30 13:52:39.340905 containerd[1799]: time="2025-01-30T13:52:39.340895823Z" level=info msg="StopPodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" returns successfully" Jan 30 13:52:39.341116 containerd[1799]: time="2025-01-30T13:52:39.341087479Z" level=info msg="RemovePodSandbox for \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:39.341183 containerd[1799]: time="2025-01-30T13:52:39.341117612Z" level=info msg="Forcibly stopping sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\"" Jan 30 13:52:39.341215 containerd[1799]: time="2025-01-30T13:52:39.341185105Z" level=info msg="TearDown network for sandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" successfully" Jan 30 13:52:39.342520 containerd[1799]: time="2025-01-30T13:52:39.342483989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.342559 containerd[1799]: time="2025-01-30T13:52:39.342521614Z" level=info msg="RemovePodSandbox \"bc7cad5ae4a5b54596aac095dee080361c190062fef63beb2db123877b67b55e\" returns successfully" Jan 30 13:52:39.342647 containerd[1799]: time="2025-01-30T13:52:39.342629929Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" Jan 30 13:52:39.342698 containerd[1799]: time="2025-01-30T13:52:39.342689796Z" level=info msg="TearDown network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" successfully" Jan 30 13:52:39.342698 containerd[1799]: time="2025-01-30T13:52:39.342696529Z" level=info msg="StopPodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" returns successfully" Jan 30 13:52:39.342935 containerd[1799]: time="2025-01-30T13:52:39.342897805Z" level=info msg="RemovePodSandbox for \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" Jan 30 13:52:39.342935 containerd[1799]: time="2025-01-30T13:52:39.342910011Z" level=info msg="Forcibly stopping sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\"" Jan 30 13:52:39.343036 containerd[1799]: time="2025-01-30T13:52:39.342963741Z" level=info msg="TearDown network for sandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" successfully" Jan 30 13:52:39.344177 containerd[1799]: time="2025-01-30T13:52:39.344133719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.344177 containerd[1799]: time="2025-01-30T13:52:39.344175286Z" level=info msg="RemovePodSandbox \"f3b22c21d2f8cb9759d3cdc24418ae9a2413b5ac98e46c6c4f159462523afa9c\" returns successfully" Jan 30 13:52:39.344385 containerd[1799]: time="2025-01-30T13:52:39.344314665Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\"" Jan 30 13:52:39.344385 containerd[1799]: time="2025-01-30T13:52:39.344376036Z" level=info msg="TearDown network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" successfully" Jan 30 13:52:39.344385 containerd[1799]: time="2025-01-30T13:52:39.344382975Z" level=info msg="StopPodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" returns successfully" Jan 30 13:52:39.344527 containerd[1799]: time="2025-01-30T13:52:39.344478100Z" level=info msg="RemovePodSandbox for \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\"" Jan 30 13:52:39.344527 containerd[1799]: time="2025-01-30T13:52:39.344503490Z" level=info msg="Forcibly stopping sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\"" Jan 30 13:52:39.344585 containerd[1799]: time="2025-01-30T13:52:39.344551523Z" level=info msg="TearDown network for sandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" successfully" Jan 30 13:52:39.345759 containerd[1799]: time="2025-01-30T13:52:39.345720384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.345759 containerd[1799]: time="2025-01-30T13:52:39.345738004Z" level=info msg="RemovePodSandbox \"531ac2ece2a1d37662ade56d4bbe224b4b508421aae4f4039c2c59ae504b8645\" returns successfully" Jan 30 13:52:39.345965 containerd[1799]: time="2025-01-30T13:52:39.345928820Z" level=info msg="StopPodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\"" Jan 30 13:52:39.346006 containerd[1799]: time="2025-01-30T13:52:39.345994755Z" level=info msg="TearDown network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" successfully" Jan 30 13:52:39.346006 containerd[1799]: time="2025-01-30T13:52:39.346000778Z" level=info msg="StopPodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" returns successfully" Jan 30 13:52:39.346143 containerd[1799]: time="2025-01-30T13:52:39.346133597Z" level=info msg="RemovePodSandbox for \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\"" Jan 30 13:52:39.346163 containerd[1799]: time="2025-01-30T13:52:39.346145548Z" level=info msg="Forcibly stopping sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\"" Jan 30 13:52:39.346192 containerd[1799]: time="2025-01-30T13:52:39.346177336Z" level=info msg="TearDown network for sandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" successfully" Jan 30 13:52:39.347446 containerd[1799]: time="2025-01-30T13:52:39.347403164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.347446 containerd[1799]: time="2025-01-30T13:52:39.347421167Z" level=info msg="RemovePodSandbox \"8d2aa6fc0cd2a9f783b2e13344094aae7bf65e617d265f6a0092f547733ed99e\" returns successfully" Jan 30 13:52:39.347673 containerd[1799]: time="2025-01-30T13:52:39.347610284Z" level=info msg="StopPodSandbox for \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\"" Jan 30 13:52:39.347726 containerd[1799]: time="2025-01-30T13:52:39.347703534Z" level=info msg="TearDown network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\" successfully" Jan 30 13:52:39.347746 containerd[1799]: time="2025-01-30T13:52:39.347726324Z" level=info msg="StopPodSandbox for \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\" returns successfully" Jan 30 13:52:39.347954 containerd[1799]: time="2025-01-30T13:52:39.347925529Z" level=info msg="RemovePodSandbox for \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\"" Jan 30 13:52:39.347954 containerd[1799]: time="2025-01-30T13:52:39.347952821Z" level=info msg="Forcibly stopping sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\"" Jan 30 13:52:39.348049 containerd[1799]: time="2025-01-30T13:52:39.348013993Z" level=info msg="TearDown network for sandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\" successfully" Jan 30 13:52:39.349221 containerd[1799]: time="2025-01-30T13:52:39.349210690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.349250 containerd[1799]: time="2025-01-30T13:52:39.349228163Z" level=info msg="RemovePodSandbox \"782a4a8007f74644aba5ba9822bf65b6eaeb89773ea745807c86ef74fdc6bc21\" returns successfully" Jan 30 13:52:39.349451 containerd[1799]: time="2025-01-30T13:52:39.349416966Z" level=info msg="StopPodSandbox for \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\"" Jan 30 13:52:39.349503 containerd[1799]: time="2025-01-30T13:52:39.349496670Z" level=info msg="TearDown network for sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\" successfully" Jan 30 13:52:39.349536 containerd[1799]: time="2025-01-30T13:52:39.349503798Z" level=info msg="StopPodSandbox for \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\" returns successfully" Jan 30 13:52:39.349634 containerd[1799]: time="2025-01-30T13:52:39.349624252Z" level=info msg="RemovePodSandbox for \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\"" Jan 30 13:52:39.349666 containerd[1799]: time="2025-01-30T13:52:39.349637800Z" level=info msg="Forcibly stopping sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\"" Jan 30 13:52:39.349686 containerd[1799]: time="2025-01-30T13:52:39.349672018Z" level=info msg="TearDown network for sandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\" successfully" Jan 30 13:52:39.350842 containerd[1799]: time="2025-01-30T13:52:39.350830128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.350881 containerd[1799]: time="2025-01-30T13:52:39.350849060Z" level=info msg="RemovePodSandbox \"7d2d8491c8ca20bb18ce48de6f5faf4833d89447d9d6d0ccba840a34669ea112\" returns successfully" Jan 30 13:52:39.351033 containerd[1799]: time="2025-01-30T13:52:39.351021431Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:39.351069 containerd[1799]: time="2025-01-30T13:52:39.351063369Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:52:39.351104 containerd[1799]: time="2025-01-30T13:52:39.351069575Z" level=info msg="StopPodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:52:39.351180 containerd[1799]: time="2025-01-30T13:52:39.351169994Z" level=info msg="RemovePodSandbox for \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:39.351211 containerd[1799]: time="2025-01-30T13:52:39.351181326Z" level=info msg="Forcibly stopping sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\"" Jan 30 13:52:39.351239 containerd[1799]: time="2025-01-30T13:52:39.351215208Z" level=info msg="TearDown network for sandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" successfully" Jan 30 13:52:39.352442 containerd[1799]: time="2025-01-30T13:52:39.352429485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.352482 containerd[1799]: time="2025-01-30T13:52:39.352447497Z" level=info msg="RemovePodSandbox \"1b392373238046a76dd3ba751db9193e87b4ec63ef0a30dbf7c46e5e20fd5b93\" returns successfully" Jan 30 13:52:39.352603 containerd[1799]: time="2025-01-30T13:52:39.352591153Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:39.352648 containerd[1799]: time="2025-01-30T13:52:39.352639338Z" level=info msg="TearDown network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" successfully" Jan 30 13:52:39.352712 containerd[1799]: time="2025-01-30T13:52:39.352647488Z" level=info msg="StopPodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" returns successfully" Jan 30 13:52:39.352802 containerd[1799]: time="2025-01-30T13:52:39.352793057Z" level=info msg="RemovePodSandbox for \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:39.352841 containerd[1799]: time="2025-01-30T13:52:39.352804850Z" level=info msg="Forcibly stopping sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\"" Jan 30 13:52:39.352865 containerd[1799]: time="2025-01-30T13:52:39.352848935Z" level=info msg="TearDown network for sandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" successfully" Jan 30 13:52:39.354448 containerd[1799]: time="2025-01-30T13:52:39.354409350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.354504 containerd[1799]: time="2025-01-30T13:52:39.354492310Z" level=info msg="RemovePodSandbox \"fff753a38e8dba8e7c9b14f9c069441fe8dfb74e466f05df9234a3949abfe4b1\" returns successfully" Jan 30 13:52:39.354923 containerd[1799]: time="2025-01-30T13:52:39.354895080Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" Jan 30 13:52:39.355004 containerd[1799]: time="2025-01-30T13:52:39.354979870Z" level=info msg="TearDown network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" successfully" Jan 30 13:52:39.355026 containerd[1799]: time="2025-01-30T13:52:39.355003889Z" level=info msg="StopPodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" returns successfully" Jan 30 13:52:39.355152 containerd[1799]: time="2025-01-30T13:52:39.355141746Z" level=info msg="RemovePodSandbox for \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" Jan 30 13:52:39.355172 containerd[1799]: time="2025-01-30T13:52:39.355154933Z" level=info msg="Forcibly stopping sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\"" Jan 30 13:52:39.355202 containerd[1799]: time="2025-01-30T13:52:39.355187417Z" level=info msg="TearDown network for sandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" successfully" Jan 30 13:52:39.356362 containerd[1799]: time="2025-01-30T13:52:39.356314521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.356398 containerd[1799]: time="2025-01-30T13:52:39.356364158Z" level=info msg="RemovePodSandbox \"89836683f1f49153860904b4e993d90716e58b6f579283dbdc4da8f2e8a4630d\" returns successfully" Jan 30 13:52:39.356538 containerd[1799]: time="2025-01-30T13:52:39.356526660Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\"" Jan 30 13:52:39.356672 containerd[1799]: time="2025-01-30T13:52:39.356603407Z" level=info msg="TearDown network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" successfully" Jan 30 13:52:39.356672 containerd[1799]: time="2025-01-30T13:52:39.356637839Z" level=info msg="StopPodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" returns successfully" Jan 30 13:52:39.356946 containerd[1799]: time="2025-01-30T13:52:39.356902428Z" level=info msg="RemovePodSandbox for \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\"" Jan 30 13:52:39.356946 containerd[1799]: time="2025-01-30T13:52:39.356913835Z" level=info msg="Forcibly stopping sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\"" Jan 30 13:52:39.357018 containerd[1799]: time="2025-01-30T13:52:39.356967082Z" level=info msg="TearDown network for sandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" successfully" Jan 30 13:52:39.358278 containerd[1799]: time="2025-01-30T13:52:39.358267220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.358310 containerd[1799]: time="2025-01-30T13:52:39.358286739Z" level=info msg="RemovePodSandbox \"d7646e5ccd20edde0e316092c390330a513679cb215280157fe3c3b2689b0c11\" returns successfully" Jan 30 13:52:39.358511 containerd[1799]: time="2025-01-30T13:52:39.358486223Z" level=info msg="StopPodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\"" Jan 30 13:52:39.358578 containerd[1799]: time="2025-01-30T13:52:39.358572369Z" level=info msg="TearDown network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" successfully" Jan 30 13:52:39.358630 containerd[1799]: time="2025-01-30T13:52:39.358578530Z" level=info msg="StopPodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" returns successfully" Jan 30 13:52:39.358868 containerd[1799]: time="2025-01-30T13:52:39.358832098Z" level=info msg="RemovePodSandbox for \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\"" Jan 30 13:52:39.358868 containerd[1799]: time="2025-01-30T13:52:39.358864035Z" level=info msg="Forcibly stopping sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\"" Jan 30 13:52:39.358919 containerd[1799]: time="2025-01-30T13:52:39.358898336Z" level=info msg="TearDown network for sandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" successfully" Jan 30 13:52:39.360056 containerd[1799]: time="2025-01-30T13:52:39.360022030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.360094 containerd[1799]: time="2025-01-30T13:52:39.360057855Z" level=info msg="RemovePodSandbox \"9d6b82e2c41772027ba9eddc297f8ec0bf153c4e69fbd2acca7eeec1da79872c\" returns successfully" Jan 30 13:52:39.360214 containerd[1799]: time="2025-01-30T13:52:39.360179867Z" level=info msg="StopPodSandbox for \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\"" Jan 30 13:52:39.360247 containerd[1799]: time="2025-01-30T13:52:39.360233353Z" level=info msg="TearDown network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\" successfully" Jan 30 13:52:39.360247 containerd[1799]: time="2025-01-30T13:52:39.360239036Z" level=info msg="StopPodSandbox for \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\" returns successfully" Jan 30 13:52:39.360417 containerd[1799]: time="2025-01-30T13:52:39.360337310Z" level=info msg="RemovePodSandbox for \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\"" Jan 30 13:52:39.360417 containerd[1799]: time="2025-01-30T13:52:39.360371946Z" level=info msg="Forcibly stopping sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\"" Jan 30 13:52:39.360500 containerd[1799]: time="2025-01-30T13:52:39.360423577Z" level=info msg="TearDown network for sandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\" successfully" Jan 30 13:52:39.361751 containerd[1799]: time="2025-01-30T13:52:39.361701796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.361751 containerd[1799]: time="2025-01-30T13:52:39.361743571Z" level=info msg="RemovePodSandbox \"feea83fc124e1faa9f7d09de19afab1728477a121ea3d2d311cfa6f506a0ba80\" returns successfully" Jan 30 13:52:39.361934 containerd[1799]: time="2025-01-30T13:52:39.361890031Z" level=info msg="StopPodSandbox for \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\"" Jan 30 13:52:39.361967 containerd[1799]: time="2025-01-30T13:52:39.361949782Z" level=info msg="TearDown network for sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\" successfully" Jan 30 13:52:39.361988 containerd[1799]: time="2025-01-30T13:52:39.361967452Z" level=info msg="StopPodSandbox for \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\" returns successfully" Jan 30 13:52:39.362060 containerd[1799]: time="2025-01-30T13:52:39.362050529Z" level=info msg="RemovePodSandbox for \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\"" Jan 30 13:52:39.362082 containerd[1799]: time="2025-01-30T13:52:39.362062043Z" level=info msg="Forcibly stopping sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\"" Jan 30 13:52:39.362105 containerd[1799]: time="2025-01-30T13:52:39.362090908Z" level=info msg="TearDown network for sandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\" successfully" Jan 30 13:52:39.363152 containerd[1799]: time="2025-01-30T13:52:39.363140784Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.363176 containerd[1799]: time="2025-01-30T13:52:39.363158400Z" level=info msg="RemovePodSandbox \"e07fc5d2e682e8e89c7ca314e52d28ce977233393d473035daede783f996b8d9\" returns successfully" Jan 30 13:52:39.363271 containerd[1799]: time="2025-01-30T13:52:39.363261885Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:39.363307 containerd[1799]: time="2025-01-30T13:52:39.363300410Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:52:39.363339 containerd[1799]: time="2025-01-30T13:52:39.363307503Z" level=info msg="StopPodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:52:39.363412 containerd[1799]: time="2025-01-30T13:52:39.363401150Z" level=info msg="RemovePodSandbox for \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:39.363432 containerd[1799]: time="2025-01-30T13:52:39.363416000Z" level=info msg="Forcibly stopping sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\"" Jan 30 13:52:39.363469 containerd[1799]: time="2025-01-30T13:52:39.363453210Z" level=info msg="TearDown network for sandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" successfully" Jan 30 13:52:39.364650 containerd[1799]: time="2025-01-30T13:52:39.364639248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.364678 containerd[1799]: time="2025-01-30T13:52:39.364657419Z" level=info msg="RemovePodSandbox \"19ba49c649826c3ba09e85ed3df486a3b50f8c48902e68ec6bac9415db7acfba\" returns successfully" Jan 30 13:52:39.364765 containerd[1799]: time="2025-01-30T13:52:39.364755223Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:39.364809 containerd[1799]: time="2025-01-30T13:52:39.364799164Z" level=info msg="TearDown network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" successfully" Jan 30 13:52:39.364831 containerd[1799]: time="2025-01-30T13:52:39.364808859Z" level=info msg="StopPodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" returns successfully" Jan 30 13:52:39.364892 containerd[1799]: time="2025-01-30T13:52:39.364884490Z" level=info msg="RemovePodSandbox for \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:39.364915 containerd[1799]: time="2025-01-30T13:52:39.364894645Z" level=info msg="Forcibly stopping sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\"" Jan 30 13:52:39.364935 containerd[1799]: time="2025-01-30T13:52:39.364922136Z" level=info msg="TearDown network for sandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" successfully" Jan 30 13:52:39.366108 containerd[1799]: time="2025-01-30T13:52:39.366098035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.366136 containerd[1799]: time="2025-01-30T13:52:39.366123267Z" level=info msg="RemovePodSandbox \"b43efa9ca23593c1b78b4cae0cb41ee3b1b29161fe1292039f2f46798b1cfa02\" returns successfully" Jan 30 13:52:39.366262 containerd[1799]: time="2025-01-30T13:52:39.366253354Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" Jan 30 13:52:39.366297 containerd[1799]: time="2025-01-30T13:52:39.366290510Z" level=info msg="TearDown network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" successfully" Jan 30 13:52:39.366323 containerd[1799]: time="2025-01-30T13:52:39.366296854Z" level=info msg="StopPodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" returns successfully" Jan 30 13:52:39.366409 containerd[1799]: time="2025-01-30T13:52:39.366397580Z" level=info msg="RemovePodSandbox for \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" Jan 30 13:52:39.366450 containerd[1799]: time="2025-01-30T13:52:39.366410953Z" level=info msg="Forcibly stopping sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\"" Jan 30 13:52:39.366469 containerd[1799]: time="2025-01-30T13:52:39.366449143Z" level=info msg="TearDown network for sandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" successfully" Jan 30 13:52:39.367551 containerd[1799]: time="2025-01-30T13:52:39.367520002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.367580 containerd[1799]: time="2025-01-30T13:52:39.367555777Z" level=info msg="RemovePodSandbox \"6717bb10cea422bdf428bb55bde32e9acfb4ff5dcb939b122a266428485199fe\" returns successfully" Jan 30 13:52:39.367728 containerd[1799]: time="2025-01-30T13:52:39.367695073Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\"" Jan 30 13:52:39.367751 containerd[1799]: time="2025-01-30T13:52:39.367744716Z" level=info msg="TearDown network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" successfully" Jan 30 13:52:39.367771 containerd[1799]: time="2025-01-30T13:52:39.367751220Z" level=info msg="StopPodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" returns successfully" Jan 30 13:52:39.367847 containerd[1799]: time="2025-01-30T13:52:39.367836703Z" level=info msg="RemovePodSandbox for \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\"" Jan 30 13:52:39.367870 containerd[1799]: time="2025-01-30T13:52:39.367847277Z" level=info msg="Forcibly stopping sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\"" Jan 30 13:52:39.367893 containerd[1799]: time="2025-01-30T13:52:39.367877624Z" level=info msg="TearDown network for sandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" successfully" Jan 30 13:52:39.369122 containerd[1799]: time="2025-01-30T13:52:39.369106933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.369177 containerd[1799]: time="2025-01-30T13:52:39.369153310Z" level=info msg="RemovePodSandbox \"dc0da971b9cece35f3766c23bcebbc5f7e5f8e19e7fb33bb89e6cea6d8c6e255\" returns successfully" Jan 30 13:52:39.369265 containerd[1799]: time="2025-01-30T13:52:39.369255301Z" level=info msg="StopPodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\"" Jan 30 13:52:39.369297 containerd[1799]: time="2025-01-30T13:52:39.369291507Z" level=info msg="TearDown network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" successfully" Jan 30 13:52:39.369315 containerd[1799]: time="2025-01-30T13:52:39.369297298Z" level=info msg="StopPodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" returns successfully" Jan 30 13:52:39.369423 containerd[1799]: time="2025-01-30T13:52:39.369412991Z" level=info msg="RemovePodSandbox for \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\"" Jan 30 13:52:39.369449 containerd[1799]: time="2025-01-30T13:52:39.369424177Z" level=info msg="Forcibly stopping sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\"" Jan 30 13:52:39.369471 containerd[1799]: time="2025-01-30T13:52:39.369455492Z" level=info msg="TearDown network for sandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" successfully" Jan 30 13:52:39.370818 containerd[1799]: time="2025-01-30T13:52:39.370807835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.370848 containerd[1799]: time="2025-01-30T13:52:39.370825415Z" level=info msg="RemovePodSandbox \"6dda49e7f736011edbfdb376f17b5aef8573ae71d1c9911755607b94ca388794\" returns successfully" Jan 30 13:52:39.370955 containerd[1799]: time="2025-01-30T13:52:39.370945976Z" level=info msg="StopPodSandbox for \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\"" Jan 30 13:52:39.370988 containerd[1799]: time="2025-01-30T13:52:39.370981495Z" level=info msg="TearDown network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\" successfully" Jan 30 13:52:39.370988 containerd[1799]: time="2025-01-30T13:52:39.370987231Z" level=info msg="StopPodSandbox for \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\" returns successfully" Jan 30 13:52:39.371090 containerd[1799]: time="2025-01-30T13:52:39.371079855Z" level=info msg="RemovePodSandbox for \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\"" Jan 30 13:52:39.371113 containerd[1799]: time="2025-01-30T13:52:39.371093497Z" level=info msg="Forcibly stopping sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\"" Jan 30 13:52:39.371141 containerd[1799]: time="2025-01-30T13:52:39.371126053Z" level=info msg="TearDown network for sandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\" successfully" Jan 30 13:52:39.372299 containerd[1799]: time="2025-01-30T13:52:39.372263922Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.372338 containerd[1799]: time="2025-01-30T13:52:39.372301475Z" level=info msg="RemovePodSandbox \"cd8db096b50e0b18cb27730c4b93213a5224cf930289c9206d0b2ba1ba377314\" returns successfully" Jan 30 13:52:39.372565 containerd[1799]: time="2025-01-30T13:52:39.372502145Z" level=info msg="StopPodSandbox for \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\"" Jan 30 13:52:39.372610 containerd[1799]: time="2025-01-30T13:52:39.372569978Z" level=info msg="TearDown network for sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\" successfully" Jan 30 13:52:39.372610 containerd[1799]: time="2025-01-30T13:52:39.372576558Z" level=info msg="StopPodSandbox for \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\" returns successfully" Jan 30 13:52:39.372762 containerd[1799]: time="2025-01-30T13:52:39.372686980Z" level=info msg="RemovePodSandbox for \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\"" Jan 30 13:52:39.372762 containerd[1799]: time="2025-01-30T13:52:39.372711759Z" level=info msg="Forcibly stopping sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\"" Jan 30 13:52:39.372812 containerd[1799]: time="2025-01-30T13:52:39.372770870Z" level=info msg="TearDown network for sandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\" successfully" Jan 30 13:52:39.373954 containerd[1799]: time="2025-01-30T13:52:39.373909330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.373954 containerd[1799]: time="2025-01-30T13:52:39.373950852Z" level=info msg="RemovePodSandbox \"a3d56e4ef8b5a90779fb0a579509dadf4129f0d74f98de60805083a489e0c6a2\" returns successfully" Jan 30 13:52:39.374105 containerd[1799]: time="2025-01-30T13:52:39.374046531Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:39.374139 containerd[1799]: time="2025-01-30T13:52:39.374126581Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:52:39.374139 containerd[1799]: time="2025-01-30T13:52:39.374132617Z" level=info msg="StopPodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:52:39.374236 containerd[1799]: time="2025-01-30T13:52:39.374226667Z" level=info msg="RemovePodSandbox for \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:39.374261 containerd[1799]: time="2025-01-30T13:52:39.374236358Z" level=info msg="Forcibly stopping sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\"" Jan 30 13:52:39.374282 containerd[1799]: time="2025-01-30T13:52:39.374267007Z" level=info msg="TearDown network for sandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" successfully" Jan 30 13:52:39.375414 containerd[1799]: time="2025-01-30T13:52:39.375378015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.375414 containerd[1799]: time="2025-01-30T13:52:39.375395284Z" level=info msg="RemovePodSandbox \"49611613d0b86faf9baca76695e536dbd42996988430c0fb006e9b41a3f8cd6f\" returns successfully" Jan 30 13:52:39.375708 containerd[1799]: time="2025-01-30T13:52:39.375650675Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:39.375708 containerd[1799]: time="2025-01-30T13:52:39.375692092Z" level=info msg="TearDown network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" successfully" Jan 30 13:52:39.375708 containerd[1799]: time="2025-01-30T13:52:39.375698005Z" level=info msg="StopPodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" returns successfully" Jan 30 13:52:39.375801 containerd[1799]: time="2025-01-30T13:52:39.375792782Z" level=info msg="RemovePodSandbox for \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:39.375862 containerd[1799]: time="2025-01-30T13:52:39.375801980Z" level=info msg="Forcibly stopping sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\"" Jan 30 13:52:39.375888 containerd[1799]: time="2025-01-30T13:52:39.375868712Z" level=info msg="TearDown network for sandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" successfully" Jan 30 13:52:39.377044 containerd[1799]: time="2025-01-30T13:52:39.377001865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.377044 containerd[1799]: time="2025-01-30T13:52:39.377041552Z" level=info msg="RemovePodSandbox \"518bd13f18c12b3ff5812bda12fbf68646fc2b5acee7bdfa69d55e09b02c3106\" returns successfully" Jan 30 13:52:39.377231 containerd[1799]: time="2025-01-30T13:52:39.377220243Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" Jan 30 13:52:39.377267 containerd[1799]: time="2025-01-30T13:52:39.377260726Z" level=info msg="TearDown network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" successfully" Jan 30 13:52:39.377285 containerd[1799]: time="2025-01-30T13:52:39.377267337Z" level=info msg="StopPodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" returns successfully" Jan 30 13:52:39.377398 containerd[1799]: time="2025-01-30T13:52:39.377386982Z" level=info msg="RemovePodSandbox for \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" Jan 30 13:52:39.377398 containerd[1799]: time="2025-01-30T13:52:39.377398608Z" level=info msg="Forcibly stopping sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\"" Jan 30 13:52:39.377495 containerd[1799]: time="2025-01-30T13:52:39.377459166Z" level=info msg="TearDown network for sandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" successfully" Jan 30 13:52:39.378630 containerd[1799]: time="2025-01-30T13:52:39.378594789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.378677 containerd[1799]: time="2025-01-30T13:52:39.378630936Z" level=info msg="RemovePodSandbox \"77d3bdab01e87cf9f58ac127e76f20a4f1c143b582541c40654c2bae1423a0ab\" returns successfully" Jan 30 13:52:39.378808 containerd[1799]: time="2025-01-30T13:52:39.378797531Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\"" Jan 30 13:52:39.378883 containerd[1799]: time="2025-01-30T13:52:39.378838311Z" level=info msg="TearDown network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" successfully" Jan 30 13:52:39.378883 containerd[1799]: time="2025-01-30T13:52:39.378844637Z" level=info msg="StopPodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" returns successfully" Jan 30 13:52:39.379005 containerd[1799]: time="2025-01-30T13:52:39.378996743Z" level=info msg="RemovePodSandbox for \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\"" Jan 30 13:52:39.379025 containerd[1799]: time="2025-01-30T13:52:39.379006637Z" level=info msg="Forcibly stopping sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\"" Jan 30 13:52:39.379070 containerd[1799]: time="2025-01-30T13:52:39.379054262Z" level=info msg="TearDown network for sandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" successfully" Jan 30 13:52:39.380236 containerd[1799]: time="2025-01-30T13:52:39.380186249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.380236 containerd[1799]: time="2025-01-30T13:52:39.380204072Z" level=info msg="RemovePodSandbox \"269235f0b5149813dba723e09521079e0cfbe76d2f07fe3be3718598d22d5415\" returns successfully" Jan 30 13:52:39.380429 containerd[1799]: time="2025-01-30T13:52:39.380417652Z" level=info msg="StopPodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\"" Jan 30 13:52:39.380484 containerd[1799]: time="2025-01-30T13:52:39.380476796Z" level=info msg="TearDown network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" successfully" Jan 30 13:52:39.380507 containerd[1799]: time="2025-01-30T13:52:39.380484100Z" level=info msg="StopPodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" returns successfully" Jan 30 13:52:39.380712 containerd[1799]: time="2025-01-30T13:52:39.380670296Z" level=info msg="RemovePodSandbox for \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\"" Jan 30 13:52:39.380712 containerd[1799]: time="2025-01-30T13:52:39.380682336Z" level=info msg="Forcibly stopping sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\"" Jan 30 13:52:39.380757 containerd[1799]: time="2025-01-30T13:52:39.380713685Z" level=info msg="TearDown network for sandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" successfully" Jan 30 13:52:39.381860 containerd[1799]: time="2025-01-30T13:52:39.381819128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.381860 containerd[1799]: time="2025-01-30T13:52:39.381859697Z" level=info msg="RemovePodSandbox \"3895c92990d2c7c7737cab983c091888e806838e037312f5f949a04242b82a70\" returns successfully" Jan 30 13:52:39.382011 containerd[1799]: time="2025-01-30T13:52:39.381999672Z" level=info msg="StopPodSandbox for \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\"" Jan 30 13:52:39.382049 containerd[1799]: time="2025-01-30T13:52:39.382042146Z" level=info msg="TearDown network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\" successfully" Jan 30 13:52:39.382078 containerd[1799]: time="2025-01-30T13:52:39.382049060Z" level=info msg="StopPodSandbox for \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\" returns successfully" Jan 30 13:52:39.382176 containerd[1799]: time="2025-01-30T13:52:39.382165631Z" level=info msg="RemovePodSandbox for \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\"" Jan 30 13:52:39.382195 containerd[1799]: time="2025-01-30T13:52:39.382180679Z" level=info msg="Forcibly stopping sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\"" Jan 30 13:52:39.382230 containerd[1799]: time="2025-01-30T13:52:39.382214714Z" level=info msg="TearDown network for sandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\" successfully" Jan 30 13:52:39.383416 containerd[1799]: time="2025-01-30T13:52:39.383364498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.383416 containerd[1799]: time="2025-01-30T13:52:39.383395797Z" level=info msg="RemovePodSandbox \"41a29e9a768643bd5b15e5653c6657aa1ab159ef1481027b8efe528a2dc37ec4\" returns successfully" Jan 30 13:52:39.383563 containerd[1799]: time="2025-01-30T13:52:39.383501726Z" level=info msg="StopPodSandbox for \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\"" Jan 30 13:52:39.383563 containerd[1799]: time="2025-01-30T13:52:39.383540832Z" level=info msg="TearDown network for sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\" successfully" Jan 30 13:52:39.383563 containerd[1799]: time="2025-01-30T13:52:39.383548094Z" level=info msg="StopPodSandbox for \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\" returns successfully" Jan 30 13:52:39.383687 containerd[1799]: time="2025-01-30T13:52:39.383677410Z" level=info msg="RemovePodSandbox for \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\"" Jan 30 13:52:39.383712 containerd[1799]: time="2025-01-30T13:52:39.383688078Z" level=info msg="Forcibly stopping sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\"" Jan 30 13:52:39.383732 containerd[1799]: time="2025-01-30T13:52:39.383717705Z" level=info msg="TearDown network for sandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\" successfully" Jan 30 13:52:39.384798 containerd[1799]: time="2025-01-30T13:52:39.384787013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.384820 containerd[1799]: time="2025-01-30T13:52:39.384805330Z" level=info msg="RemovePodSandbox \"696e676383b121dcb0e3f6d518fc3be067496895939d0078cb09278d81cb39a5\" returns successfully" Jan 30 13:52:39.384917 containerd[1799]: time="2025-01-30T13:52:39.384908266Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:39.384955 containerd[1799]: time="2025-01-30T13:52:39.384947323Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:52:39.384978 containerd[1799]: time="2025-01-30T13:52:39.384954406Z" level=info msg="StopPodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:52:39.385135 containerd[1799]: time="2025-01-30T13:52:39.385126376Z" level=info msg="RemovePodSandbox for \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:39.385175 containerd[1799]: time="2025-01-30T13:52:39.385137283Z" level=info msg="Forcibly stopping sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\"" Jan 30 13:52:39.385198 containerd[1799]: time="2025-01-30T13:52:39.385181612Z" level=info msg="TearDown network for sandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" successfully" Jan 30 13:52:39.386347 containerd[1799]: time="2025-01-30T13:52:39.386306976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.386347 containerd[1799]: time="2025-01-30T13:52:39.386332277Z" level=info msg="RemovePodSandbox \"319f80df7094eb42330c8a787f969a8cbf00a0536105daea4600d56444a4a99b\" returns successfully" Jan 30 13:52:39.386589 containerd[1799]: time="2025-01-30T13:52:39.386561281Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:39.386622 containerd[1799]: time="2025-01-30T13:52:39.386615283Z" level=info msg="TearDown network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" successfully" Jan 30 13:52:39.386641 containerd[1799]: time="2025-01-30T13:52:39.386621968Z" level=info msg="StopPodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" returns successfully" Jan 30 13:52:39.386770 containerd[1799]: time="2025-01-30T13:52:39.386762214Z" level=info msg="RemovePodSandbox for \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:39.386791 containerd[1799]: time="2025-01-30T13:52:39.386772022Z" level=info msg="Forcibly stopping sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\"" Jan 30 13:52:39.386816 containerd[1799]: time="2025-01-30T13:52:39.386801166Z" level=info msg="TearDown network for sandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" successfully" Jan 30 13:52:39.387886 containerd[1799]: time="2025-01-30T13:52:39.387875775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.387925 containerd[1799]: time="2025-01-30T13:52:39.387891401Z" level=info msg="RemovePodSandbox \"aecb3dfa0f9c86e9665517b4cf2782cace018ace29533640561c9ae55cd698b4\" returns successfully" Jan 30 13:52:39.388139 containerd[1799]: time="2025-01-30T13:52:39.388125320Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" Jan 30 13:52:39.388198 containerd[1799]: time="2025-01-30T13:52:39.388188865Z" level=info msg="TearDown network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" successfully" Jan 30 13:52:39.388221 containerd[1799]: time="2025-01-30T13:52:39.388196743Z" level=info msg="StopPodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" returns successfully" Jan 30 13:52:39.388338 containerd[1799]: time="2025-01-30T13:52:39.388326509Z" level=info msg="RemovePodSandbox for \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" Jan 30 13:52:39.388338 containerd[1799]: time="2025-01-30T13:52:39.388338448Z" level=info msg="Forcibly stopping sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\"" Jan 30 13:52:39.388394 containerd[1799]: time="2025-01-30T13:52:39.388368925Z" level=info msg="TearDown network for sandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" successfully" Jan 30 13:52:39.389603 containerd[1799]: time="2025-01-30T13:52:39.389590463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.389635 containerd[1799]: time="2025-01-30T13:52:39.389610735Z" level=info msg="RemovePodSandbox \"5c1e66ac324165a57c1d9817d57c4c9e0b855e65cb259cd6eac13ca80eeecbd2\" returns successfully" Jan 30 13:52:39.389870 containerd[1799]: time="2025-01-30T13:52:39.389790845Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\"" Jan 30 13:52:39.389914 containerd[1799]: time="2025-01-30T13:52:39.389877078Z" level=info msg="TearDown network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" successfully" Jan 30 13:52:39.389914 containerd[1799]: time="2025-01-30T13:52:39.389883096Z" level=info msg="StopPodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" returns successfully" Jan 30 13:52:39.390121 containerd[1799]: time="2025-01-30T13:52:39.390111519Z" level=info msg="RemovePodSandbox for \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\"" Jan 30 13:52:39.390142 containerd[1799]: time="2025-01-30T13:52:39.390123593Z" level=info msg="Forcibly stopping sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\"" Jan 30 13:52:39.390176 containerd[1799]: time="2025-01-30T13:52:39.390169424Z" level=info msg="TearDown network for sandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" successfully" Jan 30 13:52:39.391390 containerd[1799]: time="2025-01-30T13:52:39.391331731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.391390 containerd[1799]: time="2025-01-30T13:52:39.391384798Z" level=info msg="RemovePodSandbox \"317a546e268f707184d2e7c9377d0a64f4c79d76657c0fd0ff5af3c3abb97357\" returns successfully" Jan 30 13:52:39.391625 containerd[1799]: time="2025-01-30T13:52:39.391550375Z" level=info msg="StopPodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\"" Jan 30 13:52:39.391670 containerd[1799]: time="2025-01-30T13:52:39.391653663Z" level=info msg="TearDown network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" successfully" Jan 30 13:52:39.391670 containerd[1799]: time="2025-01-30T13:52:39.391660622Z" level=info msg="StopPodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" returns successfully" Jan 30 13:52:39.391864 containerd[1799]: time="2025-01-30T13:52:39.391810031Z" level=info msg="RemovePodSandbox for \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\"" Jan 30 13:52:39.391864 containerd[1799]: time="2025-01-30T13:52:39.391823087Z" level=info msg="Forcibly stopping sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\"" Jan 30 13:52:39.391917 containerd[1799]: time="2025-01-30T13:52:39.391877486Z" level=info msg="TearDown network for sandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" successfully" Jan 30 13:52:39.393016 containerd[1799]: time="2025-01-30T13:52:39.392976451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.393016 containerd[1799]: time="2025-01-30T13:52:39.392993518Z" level=info msg="RemovePodSandbox \"ef9149d52b57a4f229e7ce8dd3aabda752d9862c11cd6d608a6da85e1e1ca386\" returns successfully" Jan 30 13:52:39.393214 containerd[1799]: time="2025-01-30T13:52:39.393180940Z" level=info msg="StopPodSandbox for \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\"" Jan 30 13:52:39.393243 containerd[1799]: time="2025-01-30T13:52:39.393236129Z" level=info msg="TearDown network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\" successfully" Jan 30 13:52:39.393267 containerd[1799]: time="2025-01-30T13:52:39.393242042Z" level=info msg="StopPodSandbox for \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\" returns successfully" Jan 30 13:52:39.393473 containerd[1799]: time="2025-01-30T13:52:39.393395002Z" level=info msg="RemovePodSandbox for \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\"" Jan 30 13:52:39.393473 containerd[1799]: time="2025-01-30T13:52:39.393419987Z" level=info msg="Forcibly stopping sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\"" Jan 30 13:52:39.393557 containerd[1799]: time="2025-01-30T13:52:39.393493830Z" level=info msg="TearDown network for sandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\" successfully" Jan 30 13:52:39.394657 containerd[1799]: time="2025-01-30T13:52:39.394618344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.394657 containerd[1799]: time="2025-01-30T13:52:39.394635406Z" level=info msg="RemovePodSandbox \"f539c13b627545ce795586c61fbc8bc1e314ebaff90ac5328b918e73d3516672\" returns successfully" Jan 30 13:52:39.394883 containerd[1799]: time="2025-01-30T13:52:39.394831071Z" level=info msg="StopPodSandbox for \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\"" Jan 30 13:52:39.394931 containerd[1799]: time="2025-01-30T13:52:39.394895994Z" level=info msg="TearDown network for sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\" successfully" Jan 30 13:52:39.394931 containerd[1799]: time="2025-01-30T13:52:39.394915764Z" level=info msg="StopPodSandbox for \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\" returns successfully" Jan 30 13:52:39.395150 containerd[1799]: time="2025-01-30T13:52:39.395114698Z" level=info msg="RemovePodSandbox for \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\"" Jan 30 13:52:39.395150 containerd[1799]: time="2025-01-30T13:52:39.395125746Z" level=info msg="Forcibly stopping sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\"" Jan 30 13:52:39.395196 containerd[1799]: time="2025-01-30T13:52:39.395175753Z" level=info msg="TearDown network for sandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\" successfully" Jan 30 13:52:39.396365 containerd[1799]: time="2025-01-30T13:52:39.396302694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.396365 containerd[1799]: time="2025-01-30T13:52:39.396340575Z" level=info msg="RemovePodSandbox \"93d4125b7f3f46f4d27f1d11255d45afe5b1e27ceba4acd41c02fe6aceadf180\" returns successfully" Jan 30 13:52:39.396578 containerd[1799]: time="2025-01-30T13:52:39.396538735Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:39.396623 containerd[1799]: time="2025-01-30T13:52:39.396602993Z" level=info msg="TearDown network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:52:39.396623 containerd[1799]: time="2025-01-30T13:52:39.396609421Z" level=info msg="StopPodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:52:39.396849 containerd[1799]: time="2025-01-30T13:52:39.396810406Z" level=info msg="RemovePodSandbox for \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:39.396849 containerd[1799]: time="2025-01-30T13:52:39.396820722Z" level=info msg="Forcibly stopping sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\"" Jan 30 13:52:39.396898 containerd[1799]: time="2025-01-30T13:52:39.396854037Z" level=info msg="TearDown network for sandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" successfully" Jan 30 13:52:39.398038 containerd[1799]: time="2025-01-30T13:52:39.397993751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.398038 containerd[1799]: time="2025-01-30T13:52:39.398034518Z" level=info msg="RemovePodSandbox \"df73896c660aec5b4e4e1a111033fc9b29d7a9780af8ecfe23102c2f56730bfe\" returns successfully" Jan 30 13:52:39.398206 containerd[1799]: time="2025-01-30T13:52:39.398168294Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:39.398254 containerd[1799]: time="2025-01-30T13:52:39.398205411Z" level=info msg="TearDown network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" successfully" Jan 30 13:52:39.398254 containerd[1799]: time="2025-01-30T13:52:39.398228992Z" level=info msg="StopPodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" returns successfully" Jan 30 13:52:39.398410 containerd[1799]: time="2025-01-30T13:52:39.398333432Z" level=info msg="RemovePodSandbox for \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:39.398410 containerd[1799]: time="2025-01-30T13:52:39.398378136Z" level=info msg="Forcibly stopping sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\"" Jan 30 13:52:39.398494 containerd[1799]: time="2025-01-30T13:52:39.398443863Z" level=info msg="TearDown network for sandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" successfully" Jan 30 13:52:39.399579 containerd[1799]: time="2025-01-30T13:52:39.399533297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.399579 containerd[1799]: time="2025-01-30T13:52:39.399574943Z" level=info msg="RemovePodSandbox \"64538f8f9f6bf3008247a2525c5353cd19abe2ed3dee217522a7495dd6194247\" returns successfully" Jan 30 13:52:39.399801 containerd[1799]: time="2025-01-30T13:52:39.399760510Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" Jan 30 13:52:39.399846 containerd[1799]: time="2025-01-30T13:52:39.399839296Z" level=info msg="TearDown network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" successfully" Jan 30 13:52:39.399864 containerd[1799]: time="2025-01-30T13:52:39.399845239Z" level=info msg="StopPodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" returns successfully" Jan 30 13:52:39.400020 containerd[1799]: time="2025-01-30T13:52:39.399980831Z" level=info msg="RemovePodSandbox for \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" Jan 30 13:52:39.400020 containerd[1799]: time="2025-01-30T13:52:39.400013514Z" level=info msg="Forcibly stopping sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\"" Jan 30 13:52:39.400078 containerd[1799]: time="2025-01-30T13:52:39.400041969Z" level=info msg="TearDown network for sandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" successfully" Jan 30 13:52:39.401509 containerd[1799]: time="2025-01-30T13:52:39.401489829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.401605 containerd[1799]: time="2025-01-30T13:52:39.401592647Z" level=info msg="RemovePodSandbox \"0e66646007c3b5ee93c755eb641b7fa87902f7257c2f470632c857c28de82c2b\" returns successfully" Jan 30 13:52:39.402059 containerd[1799]: time="2025-01-30T13:52:39.402048306Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\"" Jan 30 13:52:39.402143 containerd[1799]: time="2025-01-30T13:52:39.402106617Z" level=info msg="TearDown network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" successfully" Jan 30 13:52:39.402166 containerd[1799]: time="2025-01-30T13:52:39.402143751Z" level=info msg="StopPodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" returns successfully" Jan 30 13:52:39.402339 containerd[1799]: time="2025-01-30T13:52:39.402328838Z" level=info msg="RemovePodSandbox for \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\"" Jan 30 13:52:39.402365 containerd[1799]: time="2025-01-30T13:52:39.402341227Z" level=info msg="Forcibly stopping sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\"" Jan 30 13:52:39.402394 containerd[1799]: time="2025-01-30T13:52:39.402376328Z" level=info msg="TearDown network for sandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" successfully" Jan 30 13:52:39.403538 containerd[1799]: time="2025-01-30T13:52:39.403526843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.403566 containerd[1799]: time="2025-01-30T13:52:39.403544296Z" level=info msg="RemovePodSandbox \"73d13c95f311e71dbe42e32076ca32cc0a366a8f3124b7b74dc8f5e5a5c0c66b\" returns successfully" Jan 30 13:52:39.403794 containerd[1799]: time="2025-01-30T13:52:39.403768672Z" level=info msg="StopPodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\"" Jan 30 13:52:39.403847 containerd[1799]: time="2025-01-30T13:52:39.403840458Z" level=info msg="TearDown network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" successfully" Jan 30 13:52:39.403905 containerd[1799]: time="2025-01-30T13:52:39.403846659Z" level=info msg="StopPodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" returns successfully" Jan 30 13:52:39.404078 containerd[1799]: time="2025-01-30T13:52:39.404070043Z" level=info msg="RemovePodSandbox for \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\"" Jan 30 13:52:39.404118 containerd[1799]: time="2025-01-30T13:52:39.404080125Z" level=info msg="Forcibly stopping sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\"" Jan 30 13:52:39.404160 containerd[1799]: time="2025-01-30T13:52:39.404127248Z" level=info msg="TearDown network for sandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" successfully" Jan 30 13:52:39.405346 containerd[1799]: time="2025-01-30T13:52:39.405311555Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.405404 containerd[1799]: time="2025-01-30T13:52:39.405350801Z" level=info msg="RemovePodSandbox \"091b942d85ca2c65fe40a9333289a88e0f08d294c7f9142ecd2d9a372e0dbd09\" returns successfully" Jan 30 13:52:39.405623 containerd[1799]: time="2025-01-30T13:52:39.405610659Z" level=info msg="StopPodSandbox for \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\"" Jan 30 13:52:39.405686 containerd[1799]: time="2025-01-30T13:52:39.405675851Z" level=info msg="TearDown network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\" successfully" Jan 30 13:52:39.405745 containerd[1799]: time="2025-01-30T13:52:39.405685704Z" level=info msg="StopPodSandbox for \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\" returns successfully" Jan 30 13:52:39.405924 containerd[1799]: time="2025-01-30T13:52:39.405898367Z" level=info msg="RemovePodSandbox for \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\"" Jan 30 13:52:39.406011 containerd[1799]: time="2025-01-30T13:52:39.405924782Z" level=info msg="Forcibly stopping sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\"" Jan 30 13:52:39.406044 containerd[1799]: time="2025-01-30T13:52:39.406011814Z" level=info msg="TearDown network for sandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\" successfully" Jan 30 13:52:39.407148 containerd[1799]: time="2025-01-30T13:52:39.407135537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:52:39.407202 containerd[1799]: time="2025-01-30T13:52:39.407158627Z" level=info msg="RemovePodSandbox \"0e9bb4464ca81dfab4a201a7520a6ddaeaa0b05d77c63d648583d362e8128b1c\" returns successfully" Jan 30 13:52:39.407316 containerd[1799]: time="2025-01-30T13:52:39.407307056Z" level=info msg="StopPodSandbox for \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\"" Jan 30 13:52:39.407494 containerd[1799]: time="2025-01-30T13:52:39.407408609Z" level=info msg="TearDown network for sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\" successfully" Jan 30 13:52:39.407494 containerd[1799]: time="2025-01-30T13:52:39.407445438Z" level=info msg="StopPodSandbox for \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\" returns successfully" Jan 30 13:52:39.407739 containerd[1799]: time="2025-01-30T13:52:39.407679387Z" level=info msg="RemovePodSandbox for \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\"" Jan 30 13:52:39.407739 containerd[1799]: time="2025-01-30T13:52:39.407708384Z" level=info msg="Forcibly stopping sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\"" Jan 30 13:52:39.407785 containerd[1799]: time="2025-01-30T13:52:39.407761168Z" level=info msg="TearDown network for sandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\" successfully" Jan 30 13:52:39.408901 containerd[1799]: time="2025-01-30T13:52:39.408862280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:52:39.408901 containerd[1799]: time="2025-01-30T13:52:39.408879359Z" level=info msg="RemovePodSandbox \"21b7a7748d17c29b982f3df10fdc2a19f79304d1672da10c9d5dcffa88ea8f86\" returns successfully" Jan 30 13:52:49.860489 kubelet[3061]: I0130 13:52:49.860410 3061 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:53:14.033496 systemd[1]: Started sshd@8-139.178.70.53:22-218.92.0.155:60544.service - OpenSSH per-connection server daemon (218.92.0.155:60544). Jan 30 13:53:15.143286 sshd-session[7757]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:53:17.169241 sshd[7755]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:53:17.461990 sshd-session[7760]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:53:19.898635 sshd[7755]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:53:20.191123 sshd-session[7788]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:53:22.372508 sshd[7755]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:53:22.518458 sshd[7755]: Received disconnect from 218.92.0.155 port 60544:11: [preauth] Jan 30 13:53:22.518458 sshd[7755]: Disconnected from authenticating user root 218.92.0.155 port 60544 [preauth] Jan 30 13:53:22.521961 systemd[1]: sshd@8-139.178.70.53:22-218.92.0.155:60544.service: Deactivated successfully. Jan 30 13:55:16.317207 systemd[1]: Started sshd@9-139.178.70.53:22-218.92.0.155:19474.service - OpenSSH per-connection server daemon (218.92.0.155:19474). Jan 30 13:55:17.537569 sshd-session[8042]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:55:19.447838 sshd[8038]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:55:19.768523 sshd-session[8043]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:55:21.952551 sshd[8038]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:55:22.260878 sshd-session[8069]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:55:24.858538 sshd[8038]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:55:25.020742 sshd[8038]: Received disconnect from 218.92.0.155 port 19474:11: [preauth] Jan 30 13:55:25.020742 sshd[8038]: Disconnected from authenticating user root 218.92.0.155 port 19474 [preauth] Jan 30 13:55:25.024256 systemd[1]: sshd@9-139.178.70.53:22-218.92.0.155:19474.service: Deactivated successfully. Jan 30 13:56:19.862394 systemd[1]: Started sshd@10-139.178.70.53:22-101.200.243.197:59078.service - OpenSSH per-connection server daemon (101.200.243.197:59078). Jan 30 13:56:23.096023 sshd[8193]: kex_exchange_identification: read: Connection reset by peer Jan 30 13:56:23.096023 sshd[8193]: Connection reset by 101.200.243.197 port 59078 Jan 30 13:56:23.097662 systemd[1]: sshd@10-139.178.70.53:22-101.200.243.197:59078.service: Deactivated successfully. Jan 30 13:56:23.301654 systemd[1]: Started sshd@11-139.178.70.53:22-101.200.243.197:52892.service - OpenSSH per-connection server daemon (101.200.243.197:52892). 
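Editor's note: the long run of StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox entries that ends just above is containerd working through CRI-driven garbage collection of old pod sandboxes; the repeated "not found" warnings only mean the sandbox metadata was already gone, so the event is sent with a nil status and removal still "returns successfully". The sketch below is a minimal, hypothetical equivalent of that stop-and-remove cycle done by hand through crictl from Python. It is not the kubelet's or containerd's own code, and it assumes crictl is installed, configured for containerd's CRI socket, and run as root; flag behaviour (`--state`, `stopp`, `rmp`) is assumed from current crictl releases.

```python
#!/usr/bin/env python3
"""Sketch: force-remove pod sandboxes that are no longer Ready, mirroring the
StopPodSandbox -> RemovePodSandbox sequence visible in the containerd log above.
Assumes crictl is installed and configured (e.g. via /etc/crictl.yaml) to talk
to containerd's CRI socket. Illustrative only, not the kubelet's actual GC."""

import subprocess


def crictl(*args: str) -> str:
    """Run a crictl subcommand and return its stdout."""
    result = subprocess.run(["crictl", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout


def cleanup_not_ready_sandboxes() -> None:
    # IDs of sandboxes whose state is NotReady (already stopped or orphaned).
    ids = crictl("pods", "-q", "--state", "NotReady").split()
    for sandbox_id in ids:
        # Stop first (tears down the network namespace), then remove the
        # metadata: the same two CRI calls containerd logs above as
        # StopPodSandbox and RemovePodSandbox. check=False because a sandbox
        # may already be gone, which is exactly the "not found" case logged.
        subprocess.run(["crictl", "stopp", sandbox_id], check=False)
        subprocess.run(["crictl", "rmp", sandbox_id], check=False)
        print(f"removed sandbox {sandbox_id}")


if __name__ == "__main__":
    cleanup_not_ready_sandboxes()
```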
Jan 30 13:56:23.970857 sshd[8217]: Invalid user from 101.200.243.197 port 52892 Jan 30 13:56:24.138008 sshd[8217]: Connection closed by invalid user 101.200.243.197 port 52892 [preauth] Jan 30 13:56:24.141250 systemd[1]: sshd@11-139.178.70.53:22-101.200.243.197:52892.service: Deactivated successfully. Jan 30 13:57:16.296617 systemd[1]: Started sshd@12-139.178.70.53:22-218.92.0.155:23041.service - OpenSSH per-connection server daemon (218.92.0.155:23041). Jan 30 13:57:17.407516 sshd-session[8339]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:57:19.457647 sshd[8335]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:57:19.751770 sshd-session[8340]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:57:22.077698 sshd[8335]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:57:22.370976 sshd-session[8369]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:57:24.441601 sshd[8335]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:57:24.589087 sshd[8335]: Received disconnect from 218.92.0.155 port 23041:11: [preauth] Jan 30 13:57:24.589087 sshd[8335]: Disconnected from authenticating user root 218.92.0.155 port 23041 [preauth] Jan 30 13:57:24.592678 systemd[1]: sshd@12-139.178.70.53:22-218.92.0.155:23041.service: Deactivated successfully. Jan 30 13:58:56.928639 systemd[1]: Started sshd@13-139.178.70.53:22-139.178.89.65:35468.service - OpenSSH per-connection server daemon (139.178.89.65:35468). Jan 30 13:58:56.993121 sshd[8591]: Accepted publickey for core from 139.178.89.65 port 35468 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:58:56.994026 sshd-session[8591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:56.997667 systemd-logind[1781]: New session 10 of user core. Jan 30 13:58:57.010591 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:58:57.135705 sshd[8593]: Connection closed by 139.178.89.65 port 35468 Jan 30 13:58:57.136028 sshd-session[8591]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:57.138459 systemd[1]: sshd@13-139.178.70.53:22-139.178.89.65:35468.service: Deactivated successfully. Jan 30 13:58:57.139997 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:58:57.141182 systemd-logind[1781]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:58:57.142248 systemd-logind[1781]: Removed session 10. Jan 30 13:59:02.170549 systemd[1]: Started sshd@14-139.178.70.53:22-139.178.89.65:41638.service - OpenSSH per-connection server daemon (139.178.89.65:41638). Jan 30 13:59:02.196776 sshd[8638]: Accepted publickey for core from 139.178.89.65 port 41638 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:02.197452 sshd-session[8638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:02.200230 systemd-logind[1781]: New session 11 of user core. Jan 30 13:59:02.213472 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:59:02.304075 sshd[8641]: Connection closed by 139.178.89.65 port 41638 Jan 30 13:59:02.304278 sshd-session[8638]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:02.306014 systemd[1]: sshd@14-139.178.70.53:22-139.178.89.65:41638.service: Deactivated successfully. 
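Editor's note: the sshd and pam_unix entries from 218.92.0.155 and 101.200.243.197 in this stretch show the usual brute-force pattern: a socket-activated per-connection sshd unit starts, several "authentication failure ... user=root" lines follow, then "PAM: Permission denied" and a preauth disconnect before the unit is deactivated. A quick way to quantify this from a journal dump in exactly this format is a small parser; the script below (saved as, say, tally_failures.py) is a hypothetical helper, not part of any tool referenced in the log, and its field layout is assumed from the pam_unix(sshd:auth) lines shown above.

```python
#!/usr/bin/env python3
"""Sketch: tally sshd authentication failures per source host from a journal
text dump formatted like the log above (one entry per line, as journalctl
prints them). Purely illustrative; the field layout is taken from the
pam_unix(sshd:auth) lines shown, not from any parsing specification."""

import re
import sys
from collections import Counter

# Matches e.g.:
# pam_unix(sshd:auth): authentication failure; ... rhost=218.92.0.155 user=root
FAILURE_RE = re.compile(
    r"pam_unix\(sshd:auth\): authentication failure;.*?rhost=(\S+)(?:\s+user=(\S+))?"
)


def tally(lines) -> Counter:
    """Count (rhost, user) pairs seen in authentication-failure entries."""
    counts: Counter = Counter()
    for line in lines:
        m = FAILURE_RE.search(line)
        if m:
            rhost, user = m.group(1), m.group(2) or "<none>"
            counts[(rhost, user)] += 1
    return counts


if __name__ == "__main__":
    for (rhost, user), n in tally(sys.stdin).most_common():
        print(f"{n:4d}  rhost={rhost}  user={user}")
```

Fed the journal text (for example `journalctl --no-pager | python3 tally_failures.py`), a tally like this would surface 218.92.0.155 as the dominant source of failed root logins in the window covered above.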
Jan 30 13:59:02.307056 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:59:02.307831 systemd-logind[1781]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:59:02.308376 systemd-logind[1781]: Removed session 11. Jan 30 13:59:07.317706 systemd[1]: Started sshd@15-139.178.70.53:22-139.178.89.65:41650.service - OpenSSH per-connection server daemon (139.178.89.65:41650). Jan 30 13:59:07.346608 sshd[8666]: Accepted publickey for core from 139.178.89.65 port 41650 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:07.347402 sshd-session[8666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:07.350367 systemd-logind[1781]: New session 12 of user core. Jan 30 13:59:07.363525 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:59:07.456264 sshd[8668]: Connection closed by 139.178.89.65 port 41650 Jan 30 13:59:07.456469 sshd-session[8666]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:07.472025 systemd[1]: sshd@15-139.178.70.53:22-139.178.89.65:41650.service: Deactivated successfully. Jan 30 13:59:07.472879 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:59:07.473579 systemd-logind[1781]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:59:07.474356 systemd[1]: Started sshd@16-139.178.70.53:22-139.178.89.65:41662.service - OpenSSH per-connection server daemon (139.178.89.65:41662). Jan 30 13:59:07.474958 systemd-logind[1781]: Removed session 12. Jan 30 13:59:07.503796 sshd[8693]: Accepted publickey for core from 139.178.89.65 port 41662 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:07.504466 sshd-session[8693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:07.507225 systemd-logind[1781]: New session 13 of user core. Jan 30 13:59:07.524892 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:59:07.711121 sshd[8695]: Connection closed by 139.178.89.65 port 41662 Jan 30 13:59:07.711844 sshd-session[8693]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:07.735689 systemd[1]: sshd@16-139.178.70.53:22-139.178.89.65:41662.service: Deactivated successfully. Jan 30 13:59:07.741251 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:59:07.744919 systemd-logind[1781]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:59:07.763708 systemd[1]: Started sshd@17-139.178.70.53:22-139.178.89.65:41664.service - OpenSSH per-connection server daemon (139.178.89.65:41664). Jan 30 13:59:07.764882 systemd-logind[1781]: Removed session 13. Jan 30 13:59:07.806432 sshd[8717]: Accepted publickey for core from 139.178.89.65 port 41664 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:07.807802 sshd-session[8717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:07.812892 systemd-logind[1781]: New session 14 of user core. Jan 30 13:59:07.828537 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:59:07.980557 sshd[8721]: Connection closed by 139.178.89.65 port 41664 Jan 30 13:59:07.980727 sshd-session[8717]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:07.982363 systemd[1]: sshd@17-139.178.70.53:22-139.178.89.65:41664.service: Deactivated successfully. Jan 30 13:59:07.983314 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:59:07.984068 systemd-logind[1781]: Session 14 logged out. 
Waiting for processes to exit. Jan 30 13:59:07.984785 systemd-logind[1781]: Removed session 14. Jan 30 13:59:13.011744 systemd[1]: Started sshd@18-139.178.70.53:22-139.178.89.65:56806.service - OpenSSH per-connection server daemon (139.178.89.65:56806). Jan 30 13:59:13.044728 sshd[8749]: Accepted publickey for core from 139.178.89.65 port 56806 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:13.045328 sshd-session[8749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:13.048019 systemd-logind[1781]: New session 15 of user core. Jan 30 13:59:13.069643 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:59:13.159328 sshd[8751]: Connection closed by 139.178.89.65 port 56806 Jan 30 13:59:13.159546 sshd-session[8749]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:13.167895 systemd[1]: sshd@18-139.178.70.53:22-139.178.89.65:56806.service: Deactivated successfully. Jan 30 13:59:13.168650 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:59:13.169259 systemd-logind[1781]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:59:13.169943 systemd[1]: Started sshd@19-139.178.70.53:22-139.178.89.65:56814.service - OpenSSH per-connection server daemon (139.178.89.65:56814). Jan 30 13:59:13.170305 systemd-logind[1781]: Removed session 15. Jan 30 13:59:13.199229 sshd[8775]: Accepted publickey for core from 139.178.89.65 port 56814 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:13.199869 sshd-session[8775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:13.202709 systemd-logind[1781]: New session 16 of user core. Jan 30 13:59:13.214484 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:59:13.341079 sshd[8777]: Connection closed by 139.178.89.65 port 56814 Jan 30 13:59:13.341232 sshd-session[8775]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:13.352243 systemd[1]: sshd@19-139.178.70.53:22-139.178.89.65:56814.service: Deactivated successfully. Jan 30 13:59:13.353170 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:59:13.353872 systemd-logind[1781]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:59:13.354723 systemd[1]: Started sshd@20-139.178.70.53:22-139.178.89.65:56822.service - OpenSSH per-connection server daemon (139.178.89.65:56822). Jan 30 13:59:13.355200 systemd-logind[1781]: Removed session 16. Jan 30 13:59:13.388688 sshd[8797]: Accepted publickey for core from 139.178.89.65 port 56822 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:13.389562 sshd-session[8797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:13.393160 systemd-logind[1781]: New session 17 of user core. Jan 30 13:59:13.408543 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:59:14.572284 sshd[8799]: Connection closed by 139.178.89.65 port 56822 Jan 30 13:59:14.572896 sshd-session[8797]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:14.590955 systemd[1]: sshd@20-139.178.70.53:22-139.178.89.65:56822.service: Deactivated successfully. Jan 30 13:59:14.593435 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:59:14.595179 systemd-logind[1781]: Session 17 logged out. Waiting for processes to exit. 
Jan 30 13:59:14.604958 systemd[1]: Started sshd@21-139.178.70.53:22-139.178.89.65:56828.service - OpenSSH per-connection server daemon (139.178.89.65:56828). Jan 30 13:59:14.606684 systemd-logind[1781]: Removed session 17. Jan 30 13:59:14.645161 sshd[8830]: Accepted publickey for core from 139.178.89.65 port 56828 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:14.646202 sshd-session[8830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:14.649856 systemd-logind[1781]: New session 18 of user core. Jan 30 13:59:14.660606 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:59:14.846549 sshd[8833]: Connection closed by 139.178.89.65 port 56828 Jan 30 13:59:14.846753 sshd-session[8830]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:14.856344 systemd[1]: sshd@21-139.178.70.53:22-139.178.89.65:56828.service: Deactivated successfully. Jan 30 13:59:14.857294 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:59:14.858124 systemd-logind[1781]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:59:14.858993 systemd[1]: Started sshd@22-139.178.70.53:22-139.178.89.65:56838.service - OpenSSH per-connection server daemon (139.178.89.65:56838). Jan 30 13:59:14.859646 systemd-logind[1781]: Removed session 18. Jan 30 13:59:14.890492 sshd[8856]: Accepted publickey for core from 139.178.89.65 port 56838 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:14.891402 sshd-session[8856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:14.894555 systemd-logind[1781]: New session 19 of user core. Jan 30 13:59:14.911884 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:59:15.045392 sshd[8859]: Connection closed by 139.178.89.65 port 56838 Jan 30 13:59:15.045534 sshd-session[8856]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:15.047028 systemd[1]: sshd@22-139.178.70.53:22-139.178.89.65:56838.service: Deactivated successfully. Jan 30 13:59:15.048004 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:59:15.048739 systemd-logind[1781]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:59:15.049266 systemd-logind[1781]: Removed session 19. Jan 30 13:59:17.414258 systemd[1]: Started sshd@23-139.178.70.53:22-218.92.0.155:17960.service - OpenSSH per-connection server daemon (218.92.0.155:17960). Jan 30 13:59:18.586945 sshd-session[8890]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:59:20.066486 systemd[1]: Started sshd@24-139.178.70.53:22-139.178.89.65:56850.service - OpenSSH per-connection server daemon (139.178.89.65:56850). Jan 30 13:59:20.109834 sshd[8919]: Accepted publickey for core from 139.178.89.65 port 56850 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:20.110491 sshd-session[8919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:20.113118 systemd-logind[1781]: New session 20 of user core. Jan 30 13:59:20.120550 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:59:20.252855 sshd[8921]: Connection closed by 139.178.89.65 port 56850 Jan 30 13:59:20.253041 sshd-session[8919]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:20.254961 systemd[1]: sshd@24-139.178.70.53:22-139.178.89.65:56850.service: Deactivated successfully. 
Jan 30 13:59:20.255863 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:59:20.256220 systemd-logind[1781]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:59:20.256872 systemd-logind[1781]: Removed session 20. Jan 30 13:59:21.249679 sshd[8885]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:59:21.560018 sshd-session[8945]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:59:23.299791 sshd[8885]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:59:23.609778 sshd-session[8946]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root Jan 30 13:59:25.280654 systemd[1]: Started sshd@25-139.178.70.53:22-139.178.89.65:46692.service - OpenSSH per-connection server daemon (139.178.89.65:46692). Jan 30 13:59:25.307655 sshd[8948]: Accepted publickey for core from 139.178.89.65 port 46692 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:25.308438 sshd-session[8948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:25.311254 systemd-logind[1781]: New session 21 of user core. Jan 30 13:59:25.330532 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:59:25.421167 sshd[8950]: Connection closed by 139.178.89.65 port 46692 Jan 30 13:59:25.421429 sshd-session[8948]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:25.423326 systemd[1]: sshd@25-139.178.70.53:22-139.178.89.65:46692.service: Deactivated successfully. Jan 30 13:59:25.424236 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:59:25.424726 systemd-logind[1781]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:59:25.425234 systemd-logind[1781]: Removed session 21. Jan 30 13:59:25.624725 sshd[8885]: PAM: Permission denied for root from 218.92.0.155 Jan 30 13:59:25.779048 sshd[8885]: Received disconnect from 218.92.0.155 port 17960:11: [preauth] Jan 30 13:59:25.779048 sshd[8885]: Disconnected from authenticating user root 218.92.0.155 port 17960 [preauth] Jan 30 13:59:25.782583 systemd[1]: sshd@23-139.178.70.53:22-218.92.0.155:17960.service: Deactivated successfully. Jan 30 13:59:30.438397 systemd[1]: Started sshd@26-139.178.70.53:22-139.178.89.65:46698.service - OpenSSH per-connection server daemon (139.178.89.65:46698). Jan 30 13:59:30.469923 sshd[8975]: Accepted publickey for core from 139.178.89.65 port 46698 ssh2: RSA SHA256:Fv87ZGRMFbI78Ok5ZJXdtCjEHVQ61Y0emELFa7FC5bQ Jan 30 13:59:30.470583 sshd-session[8975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:30.473015 systemd-logind[1781]: New session 22 of user core. Jan 30 13:59:30.492594 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:59:30.577936 sshd[8977]: Connection closed by 139.178.89.65 port 46698 Jan 30 13:59:30.578086 sshd-session[8975]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:30.579749 systemd[1]: sshd@26-139.178.70.53:22-139.178.89.65:46698.service: Deactivated successfully. Jan 30 13:59:30.580743 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:59:30.581407 systemd-logind[1781]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:59:30.581917 systemd-logind[1781]: Removed session 22.
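Editor's note: the remaining entries are ordinary interactive logins for user core: each connection gets its own sshd@N-...service unit, pam_unix opens a session, systemd-logind registers "New session N of user core", and on disconnect the scope is deactivated and the session removed. Along the same lines as the tally above, the following hedged sketch pairs the "New session" and "Removed session" entries to estimate how long each session lasted; it assumes one journal entry per line and the "Jan 30 13:59:02.170549" timestamp prefix seen in this log (no year field), so it is illustrative rather than robust.

```python
#!/usr/bin/env python3
"""Sketch: pair systemd-logind 'New session N' / 'Removed session N' entries
from a journal dump like the one above and report per-session durations.
Assumes one entry per line and the timestamp format shown in this log."""

import re
import sys
from datetime import datetime

NEW_RE = re.compile(
    r"(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) .*New session (\d+) of user (\w+)"
)
REMOVED_RE = re.compile(
    r"(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) .*Removed session (\d+)\."
)


def parse_ts(ts: str) -> datetime:
    # The journal short format carries no year; pin one for arithmetic only.
    return datetime.strptime(f"2025 {ts}", "%Y %b %d %H:%M:%S.%f")


def session_durations(lines):
    opened = {}  # session id -> (start time, user)
    for line in lines:
        if (m := NEW_RE.search(line)):
            opened[m.group(2)] = (parse_ts(m.group(1)), m.group(3))
        elif (m := REMOVED_RE.search(line)):
            start = opened.pop(m.group(2), None)
            if start:
                yield m.group(2), start[1], parse_ts(m.group(1)) - start[0]


if __name__ == "__main__":
    for sid, user, dur in session_durations(sys.stdin):
        print(f"session {sid} ({user}): {dur.total_seconds():.1f}s")
```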